---
base_model: mini1013/master_domain
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '[당일출고] 비플레인 녹두 약산성 클렌징폼 160ml 옵션없음 제이에이치컴퍼니'
- text: 아임프롬 피그 스크럽 마스크, 120g, 1개 120g × 1개 120g x 1개 익사이팅
- text: 코스트코 오스트레일리안 보태니컬비누 버라이어티팩 200g x 8개 목욕비누 1. 고트밀크&레몬그라스 외 3종 8개 굿바이즈
- text: 바로출고 로자그라프 망고클렌징젤 500ml 열감많은 촉촉 깔끔 클렌징젤 공병 [추가로 필요한 경우] 클렌징젤 사각공병 후니맘
- text: 바이오뷰텍 바이오옵틱스 아이크린 리드 클리너 30매 1021542 옵션없음 페이즈
inference: true
model-index:
- name: SetFit with mini1013/master_domain
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.6902857142857143
      name: Accuracy
---

# SetFit with mini1013/master_domain

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 12 classes

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 5.0   |          |
| 9.0   |          |
| 3.0   |          |
| 10.0  |          |
| 2.0   |          |
| 0.0   |          |
| 11.0  |          |
| 8.0   |          |
| 6.0   |          |
| 7.0   |          |
| 4.0   |          |
| 1.0   |          |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.6903   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_bt9_test")
# Run inference
preds = model("[당일출고] 비플레인 녹두 약산성 클렌징폼 160ml 옵션없음 제이에이치컴퍼니")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 3   | 9.0359 | 19  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0.0   | 20                    |
| 1.0   | 27                    |
| 2.0   | 20                    |
| 3.0   | 27                    |
| 4.0   | 15                    |
| 5.0   | 20                    |
| 6.0   | 20                    |
| 7.0   | 18                    |
| 8.0   | 20                    |
| 9.0   | 22                    |
| 10.0  | 17                    |
| 11.0  | 25                    |

### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (50, 50)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 60
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
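This card does not include the original training script, and the training data itself is not published (the evaluation dataset above is listed as *Unknown*), so the run cannot be reproduced exactly. As a rough, non-authoritative sketch, the hyperparameters above map onto SetFit's `TrainingArguments` roughly as follows; the `train_dataset` here is a made-up placeholder that only illustrates the `text`/`label` columns SetFit expects.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data: the real training set is not published; these rows only
# stand in for Korean product titles with float class labels (0.0–11.0).
train_dataset = Dataset.from_dict({
    "text": ["placeholder product title A", "placeholder product title B",
             "placeholder product title C", "placeholder product title D"],
    "label": [0.0, 1.0, 0.0, 1.0],
})

# Start from the same Sentence Transformer body named in this card.
model = SetFitModel.from_pretrained("mini1013/master_domain")

# Mirror the hyperparameters listed above; loss, distance_metric, and margin
# are omitted because the listed values (CosineSimilarityLoss, cosine
# distance, 0.25) are already the SetFit defaults.
args = TrainingArguments(
    batch_size=(512, 512),
    num_epochs=(50, 50),
    sampling_strategy="oversampling",
    num_iterations=60,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    metric="accuracy",  # matches the accuracy metric reported above
)
trainer.train()
```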
### Training Results
| Epoch   | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0333  | 1    | 0.4927        | -               |
| 1.6667  | 50   | 0.4009        | -               |
| 3.3333  | 100  | 0.1238        | -               |
| 5.0     | 150  | 0.0523        | -               |
| 6.6667  | 200  | 0.0156        | -               |
| 8.3333  | 250  | 0.0022        | -               |
| 10.0    | 300  | 0.0003        | -               |
| 11.6667 | 350  | 0.0002        | -               |
| 13.3333 | 400  | 0.0002        | -               |
| 15.0    | 450  | 0.0001        | -               |
| 16.6667 | 500  | 0.0001        | -               |
| 18.3333 | 550  | 0.0001        | -               |
| 20.0    | 600  | 0.0001        | -               |
| 21.6667 | 650  | 0.0001        | -               |
| 23.3333 | 700  | 0.0001        | -               |
| 25.0    | 750  | 0.0001        | -               |
| 26.6667 | 800  | 0.0001        | -               |
| 28.3333 | 850  | 0.0001        | -               |
| 30.0    | 900  | 0.0001        | -               |
| 31.6667 | 950  | 0.0001        | -               |
| 33.3333 | 1000 | 0.0001        | -               |
| 35.0    | 1050 | 0.0001        | -               |
| 36.6667 | 1100 | 0.0001        | -               |
| 38.3333 | 1150 | 0.0001        | -               |
| 40.0    | 1200 | 0.0001        | -               |
| 41.6667 | 1250 | 0.0001        | -               |
| 43.3333 | 1300 | 0.0001        | -               |
| 45.0    | 1350 | 0.0           | -               |
| 46.6667 | 1400 | 0.0           | -               |
| 48.3333 | 1450 | 0.0001        | -               |
| 50.0    | 1500 | 0.0           | -               |

### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```