Dataset columns (name, type, observed range):

| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-23 18:27:52 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 492 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-23 18:25:26 |
| card | string | length 11 to 1.01M |
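These columns can be consumed directly with the `datasets` library. The snippet below is a minimal sketch: the dataset identifier is a placeholder, since the Hub repository this dump was exported from is not named here.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub dataset this dump comes from.
ds = load_dataset("your-org/hub-model-cards", split="train")

# Each record carries the columns described in the table above.
for row in ds.select(range(3)):
    print(row["modelId"], row["author"], row["pipeline_tag"], row["downloads"], row["likes"])
    print(row["card"][:200])  # raw README text (11 chars up to ~1.01M chars)
```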
jfarray/Model_bert-base-multilingual-uncased_100_Epochs
jfarray
2022-02-14T20:23:54Z
8
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 100, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 110, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
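For reference, the training configuration listed in this card maps onto the `SentenceTransformer.fit()` API roughly as sketched below. This is a reconstruction, not the author's script: the base checkpoint is inferred from the model ID, the sentence pairs are invented placeholders (the card does not disclose its training data), and the default optimizer is used in place of the `transformers` AdamW named in the card.

```python
from torch import nn
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Rebuild the stack shown under "Full Model Architecture": BERT encoder, mean pooling, 256-d Dense.
word_embedding = models.Transformer("bert-base-multilingual-uncased", max_seq_length=256)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode_mean_tokens=True)
dense = models.Dense(in_features=pooling.get_sentence_embedding_dimension(),
                     out_features=256, activation_function=nn.Tanh())
model = SentenceTransformer(modules=[word_embedding, pooling, dense])

# Placeholder sentence pairs with similarity labels (the real training data is not published).
train_examples = [
    InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.8),
    InputExample(texts=["This is an example sentence", "An unrelated sentence"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=15)
train_loss = losses.CosineSimilarityLoss(model)
evaluator = EmbeddingSimilarityEvaluator(
    ["This is an example sentence"], ["Each sentence is converted"], [0.8]
)

# Parameters copied from the fit() configuration listed in the card.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=100,
    evaluation_steps=1,
    warmup_steps=110,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```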
jfarray/Model_bert-base-multilingual-uncased_50_Epochs
jfarray
2022-02-14T19:44:38Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 50, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 55, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
huggingtweets/magicrealismbot
huggingtweets
2022-02-14T18:15:59Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/668872745329885184/67TNOs2A_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Magic Realism Bot</div> <div style="text-align: center; font-size: 14px;">@magicrealismbot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Magic Realism Bot. | Data | Magic Realism Bot | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 0 | | Short tweets | 0 | | Tweets kept | 3250 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nx0qvg7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @magicrealismbot's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9vq0074d) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9vq0074d/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/magicrealismbot') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
akshaychaudhary/distilbert-base-uncased-finetuned-cloud2-ner
akshaychaudhary
2022-02-14T17:33:18Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-cloud2-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cloud2-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8866 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.8453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 162 | 0.7804 | 0.0 | 0.0 | 0.0 | 0.8447 | | No log | 2.0 | 324 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.8465 | | No log | 3.0 | 486 | 0.8866 | 0.0 | 0.0 | 0.0 | 0.8453 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
NewT5SharedHeadsSharedKeyValues/t5-efficient-small-sh
NewT5SharedHeadsSharedKeyValues
2022-02-14T16:23:08Z
6
0
transformers
[ "transformers", "t5", "text2text-generation", "t5-new-failed", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - t5-new-failed --- # Test Hf T5: -146.39734268188477 MTF T5: -72.12132263183594
NewT5SharedHeadsSharedKeyValues/t5-efficient-xl-sh
NewT5SharedHeadsSharedKeyValues
2022-02-14T16:23:01Z
8
0
transformers
[ "transformers", "t5", "text2text-generation", "t5-new-failed", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - t5-new-failed --- # Test Hf T5: -118.6875057220459 MTF T5: -76.85459899902344
NewT5SharedHeadsSharedKeyValues/t5-efficient-large-sh
NewT5SharedHeadsSharedKeyValues
2022-02-14T16:22:44Z
6
0
transformers
[ "transformers", "t5", "text2text-generation", "t5-new-failed", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - t5-new-failed --- # Test Hf T5: -110.35000801086426 MTF T5: -57.58127975463867
vblagoje/dpr-ctx_encoder-single-lfqa-wiki
vblagoje
2022-02-14T15:51:28Z
4105
3
transformers
[ "transformers", "pytorch", "dpr", "en", "dataset:vblagoje/lfqa", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en datasets: - vblagoje/lfqa license: mit --- ## Introduction The context/passage encoder model based on [DPRContextEncoder](https://huggingface.co/docs/transformers/master/en/model_doc/dpr#transformers.DPRContextEncoder) architecture. It uses the transformer's pooler outputs as context/passage representations. See [blog post](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb) for more details. ## Training We trained vblagoje/dpr-ctx_encoder-single-lfqa-wiki using FAIR's dpr-scale in two stages. In the first stage, we used PAQ based pretrained checkpoint and fine-tuned the retriever on the question-answer pairs from the LFQA dataset. As dpr-scale requires DPR formatted training set input with positive, negative, and hard negative samples - we created a training file with an answer being positive, negatives being question unrelated answers, while hard negative samples were chosen from answers on questions between 0.55 and 0.65 of cosine similarity. In the second stage, we created a new DPR training set using positives, negatives, and hard negatives from the Wikipedia/Faiss index created in the first stage instead of LFQA dataset answers. More precisely, for each dataset question, we queried the first stage Wikipedia Faiss index and subsequently used SBert cross-encoder to score questions/answers (passage) pairs with topk=50. The cross-encoder selected the positive passage with the highest score, while the bottom seven answers were selected for hard-negatives. Negative samples were again chosen to be answers unrelated to a given dataset question. After creating a DPR formatted training file with Wikipedia sourced positive, negative, and hard negative passages, we trained DPR-based question/passage encoders using dpr-scale. ## Performance LFQA DPR-based retriever (vblagoje/dpr-question_encoder-single-lfqa-wiki and vblagoje/dpr-ctx_encoder-single-lfqa-wiki) slightly underperform 'state-of-the-art' Krishna et al. "Hurdles to Progress in Long-form Question Answering" REALM based retriever with KILT benchmark performance of 11.2 for R-precision and 19.5 for Recall@5. ## Usage ```python from transformers import DPRContextEncoder, DPRContextEncoderTokenizer tokenizer = DPRContextEncoderTokenizer.from_pretrained("vblagoje/dpr-ctx_encoder-single-lfqa-wiki") model = DPRContextEncoder.from_pretrained("vblagoje/dpr-ctx_encoder-single-lfqa-wiki") input_ids = tokenizer("Where an aircraft passes through a cloud, it can disperse the cloud in its path...", return_tensors="pt")["input_ids"] embeddings = model(input_ids).pooler_output ``` ## Author - Vladimir Blagojevic: `dovlex [at] gmail.com` [Twitter](https://twitter.com/vladblagoje) | [LinkedIn](https://www.linkedin.com/in/blagojevicvladimir/)
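The card shows only the context/passage encoder; the question side, mentioned above as vblagoje/dpr-question_encoder-single-lfqa-wiki, follows the same pattern. A hedged sketch, not taken from the card:

```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-wiki")
model = DPRQuestionEncoder.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-wiki")

input_ids = tokenizer("Why do airplanes leave contrails in the sky?", return_tensors="pt")["input_ids"]
question_embedding = model(input_ids).pooler_output

# Retrieval ranks passages by the dot product between this question embedding and the
# passage embeddings produced by the context encoder shown in the card above.
```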
huggingtweets/dojacat
huggingtweets
2022-02-14T15:30:50Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/dojacat/1644852645931/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1487993727918374915/aN2YUrbc_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jean-Emmanuel De La Martinière</div> <div style="text-align: center; font-size: 14px;">@dojacat</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jean-Emmanuel De La Martinière. | Data | Jean-Emmanuel De La Martinière | | --- | --- | | Tweets downloaded | 1569 | | Retweets | 124 | | Short tweets | 322 | | Tweets kept | 1123 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mc5ryte/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dojacat's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3urxj6el) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3urxj6el/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dojacat') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
groar/gpt-neo-1.3B-finetuned-escape3
groar
2022-02-14T15:17:25Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: gpt-neo-1.3B-finetuned-escape3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-1.3B-finetuned-escape3 This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
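The card above documents only training hyperparameters. Since the checkpoint is a GPT-Neo causal LM, inference can be sketched with the standard text-generation pipeline (assumed usage, not an example provided by the author):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="groar/gpt-neo-1.3B-finetuned-escape3")
# 1.3B parameters: expect several GB of weights and slow CPU generation.
print(generator("Once upon a time", max_new_tokens=50, num_return_sequences=1))
```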
leonadase/distilbert-base-uncased-finetuned-ner
leonadase
2022-02-14T13:51:21Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9210439378923027 - name: Recall type: recall value: 0.9356751314464705 - name: F1 type: f1 value: 0.9283018867924528 - name: Accuracy type: accuracy value: 0.983176322938345 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0611 - Precision: 0.9210 - Recall: 0.9357 - F1: 0.9283 - Accuracy: 0.9832 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2341 | 1.0 | 878 | 0.0734 | 0.9118 | 0.9206 | 0.9162 | 0.9799 | | 0.0546 | 2.0 | 1756 | 0.0591 | 0.9210 | 0.9350 | 0.9279 | 0.9829 | | 0.0297 | 3.0 | 2634 | 0.0611 | 0.9210 | 0.9357 | 0.9283 | 0.9832 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
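The NER card above reports CoNLL-2003 metrics but no inference snippet; a minimal sketch using the standard token-classification pipeline (assumed usage, not author-provided):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="leonadase/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```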
reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lt-ft
reach-vb
2022-02-14T13:39:07Z
4
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-1B-common_voice7-lt-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1B-common_voice7-lt-ft This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 2.5101 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 36 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 72 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 900 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 2.3491 | 31.24 | 500 | 3.9827 | 1.0 | | 0.0421 | 62.48 | 1000 | 2.9544 | 1.0 | | 0.0163 | 93.73 | 1500 | 2.5101 | 1.0 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3
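The checkpoint above is a wav2vec2 CTC model fine-tuned on Common Voice 7 (the "-lt" suffix suggests Lithuanian). A hedged inference sketch follows; note the card reports WER 1.0 on the evaluation set, so transcription quality is not expected to be usable.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lt-ft")
# Any 16 kHz audio file path works as input; this path is a placeholder.
print(asr("sample.wav"))
```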
huggingartists/bill-wurtz
huggingartists
2022-02-14T08:56:26Z
8
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/bill-wurtz", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/bill-wurtz tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/0d4b35ed37091d5f6fd59806810e14ca.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Bill Wurtz</div> <a href="https://genius.com/artists/bill-wurtz"> <div style="text-align: center; font-size: 14px;">@bill-wurtz</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Bill Wurtz. Dataset is available [here](https://huggingface.co/datasets/huggingartists/bill-wurtz). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/bill-wurtz") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/27ysbe74/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Bill Wurtz's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2f8oa51l) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2f8oa51l/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/bill-wurtz') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/bill-wurtz") model = AutoModelWithLMHead.from_pretrained("huggingartists/bill-wurtz") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc
ASCCCCCCCC
2022-02-14T08:54:32Z
18
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model_index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.9.0 - Pytorch 1.7.1+cpu - Datasets 1.17.0 - Tokenizers 0.10.3
fastai/fastbook_06_multicat_Biwi_Kinect_Head_Pose
fastai
2022-02-14T05:21:20Z
6
2
fastai
[ "fastai", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - fastai --- # Amazing! Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using the 🤗Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join our fastai community on the Hugging Face Discord! Greetings fellow fastlearner 🤝! --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
fastai/fastbook_06_multicat_PASCAL
fastai
2022-02-14T04:40:16Z
2
0
fastai
[ "fastai", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - fastai --- # Amazing! Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using the 🤗Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join our fastai community on the Hugging Face Discord! Greetings fellow fastlearner 🤝! --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
stellaathena/test-med
stellaathena
2022-02-14T02:28:29Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 ---
jfarray/Model_bert-base-multilingual-uncased_30_Epochs
jfarray
2022-02-13T23:54:47Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 30, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 33, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_bert-base-multilingual-uncased_5_Epochs
jfarray
2022-02-13T23:03:58Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 6, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_bert-base-multilingual-uncased_1_Epochs
jfarray
2022-02-13T22:49:37Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
groar/gpt-neo-1.3B-finetuned-escape2
groar
2022-02-13T20:59:30Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: gpt-neo-1.3B-finetuned-escape2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-neo-1.3B-finetuned-escape2 This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
jfarray/Model_all-distilroberta-v1_100_Epochs
jfarray
2022-02-13T20:50:24Z
9
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 100, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 110, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_all-distilroberta-v1_50_Epochs
jfarray
2022-02-13T20:18:37Z
9
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 50, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 55, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
huggingartists/egor-letov
huggingartists
2022-02-13T20:16:48Z
8
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/egor-letov", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/egor-letov tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/faa3dae99bf1fe365927608fd55c745a.330x330x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Егор Летов (Egor Letov)</div> <a href="https://genius.com/artists/egor-letov"> <div style="text-align: center; font-size: 14px;">@egor-letov</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Егор Летов (Egor Letov). Dataset is available [here](https://huggingface.co/datasets/huggingartists/egor-letov). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/egor-letov") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1omrcegx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Егор Летов (Egor Letov)'s lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3lk60u9h) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3lk60u9h/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/egor-letov') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/egor-letov") model = AutoModelWithLMHead.from_pretrained("huggingartists/egor-letov") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
jfarray/Model_all-distilroberta-v1_30_Epochs
jfarray
2022-02-13T20:00:26Z
9
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 30, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 33, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_all-distilroberta-v1_5_Epochs
jfarray
2022-02-13T19:40:19Z
10
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 6, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_all-distilroberta-v1_1_Epochs
jfarray
2022-02-13T19:34:14Z
9
0
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
castorini/dkrr-dpr-tqa-retriever
castorini
2022-02-13T17:57:26Z
15
0
transformers
[ "transformers", "pytorch", "bert", "arxiv:2012.04584", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini: ``` @misc{izacard2020distilling, title={Distilling Knowledge from Reader to Retriever for Question Answering}, author={Gautier Izacard and Edouard Grave}, year={2020}, eprint={2012.04584}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
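The original card stops at the citation and does not show how to query the converted retriever. Below is a minimal sketch (not from the original card) that loads the checkpoint with the standard `transformers` Auto classes and takes the `[CLS]` vector as the query embedding; the pooling choice is an assumption, so check the Pyserini/DKRR code for the pooling actually used at indexing time.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical usage sketch: encode a query with the converted DKRR retriever.
# CLS pooling below is an assumption; the original DKRR/Pyserini pipeline may pool differently.
tokenizer = AutoTokenizer.from_pretrained("castorini/dkrr-dpr-tqa-retriever")
model = AutoModel.from_pretrained("castorini/dkrr-dpr-tqa-retriever")

inputs = tokenizer("who wrote the plays attributed to shakespeare?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

query_embedding = outputs.last_hidden_state[:, 0]  # [CLS] vector, shape (1, hidden_size)
print(query_embedding.shape)
```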
castorini/dkrr-dpr-nq-retriever
castorini
2022-02-13T17:46:38Z
22
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2012.04584", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini: ``` @misc{izacard2020distilling, title={Distilling Knowledge from Reader to Retriever for Question Answering}, author={Gautier Izacard and Edouard Grave}, year={2020}, eprint={2012.04584}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
cscottp27/distilbert-base-uncased-finetuned-emotion
cscottp27
2022-02-13T13:19:16Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9232542847906783 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2175 - Accuracy: 0.923 - F1: 0.9233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8352 | 1.0 | 250 | 0.3079 | 0.91 | 0.9086 | | 0.247 | 2.0 | 500 | 0.2175 | 0.923 | 0.9233 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
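The card above reports evaluation metrics but no inference snippet. A minimal sketch (not part of the original card) using the `transformers` text-classification pipeline; the input sentence is a made-up example, and the emitted labels come from the emotion dataset configuration.

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier via the standard pipeline API.
classifier = pipeline(
    "text-classification",
    model="cscottp27/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled the training finally converged!"))
```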
elezhergina/MedMTEVAL_baseline
elezhergina
2022-02-13T10:32:25Z
1
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
thyagosme/wav2vec2-base-demo-colab
thyagosme
2022-02-13T02:14:29Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4657 - Wer: 0.3422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4477 | 4.0 | 500 | 1.3352 | 0.9039 | | 0.5972 | 8.0 | 1000 | 0.4752 | 0.4509 | | 0.2224 | 12.0 | 1500 | 0.4604 | 0.4052 | | 0.1308 | 16.0 | 2000 | 0.4542 | 0.3866 | | 0.0889 | 20.0 | 2500 | 0.4730 | 0.3589 | | 0.0628 | 24.0 | 3000 | 0.4984 | 0.3657 | | 0.0479 | 28.0 | 3500 | 0.4657 | 0.3422 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
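The card lists WER but no usage example. A minimal transcription sketch (not from the original card) using the automatic-speech-recognition pipeline; `"sample.wav"` is a placeholder path, and 16 kHz mono audio matches the Wav2Vec2 pretraining setup.

```python
from transformers import pipeline

# CTC decoding returns raw, unpunctuated text for this kind of fine-tune.
asr = pipeline("automatic-speech-recognition", model="thyagosme/wav2vec2-base-demo-colab")
print(asr("sample.wav"))  # placeholder path to a 16 kHz mono WAV file
```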
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_100_Epochs
jfarray
2022-02-13T00:33:38Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 100, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 110, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_50_Epochs
jfarray
2022-02-12T23:39:31Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 50, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 55, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_10_Epochs
jfarray
2022-02-12T22:32:17Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 11, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_5_Epochs
jfarray
2022-02-12T22:09:20Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 6, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_1_Epochs
jfarray
2022-02-12T21:48:20Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_100_Epochs
jfarray
2022-02-12T21:38:44Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 100, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 110, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_distiluse-base-multilingual-cased-v1_100_Epochs
jfarray
2022-02-12T19:45:48Z
137
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 100, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 110, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_distiluse-base-multilingual-cased-v1_10_Epochs
jfarray
2022-02-12T13:53:59Z
140
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 11, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jfarray/Model_distiluse-base-multilingual-cased-v1_5_Epochs
jfarray
2022-02-12T13:43:01Z
131
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 11 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 6, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
ArBert/roberta-base-finetuned-ner-agglo-twitter
ArBert
2022-02-12T11:40:08Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: roberta-base-finetuned-ner-agglo-twitter results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner-agglo-twitter This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6645 - Precision: 0.6885 - Recall: 0.7665 - F1: 0.7254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 | | No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 | | 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 | | 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 | | 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 | | 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 | | 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 | | 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 | | 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 | | 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 | | 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 | | 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 | | 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 | | 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 | | 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 | | 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 | | 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 | | 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 | | 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 | | 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
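The card reports precision/recall/F1 but gives no inference example or label set. A minimal sketch (not part of the original card) using the token-classification pipeline; the sample sentence is made up, and `aggregation_strategy="simple"` simply merges word pieces into whole entities.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArBert/roberta-base-finetuned-ner-agglo-twitter",
    aggregation_strategy="simple",  # group sub-word pieces into entity spans
)
print(ner("Just landed in Paris with Jane for the NATO summit."))
```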
HHousen/household-rooms
HHousen
2022-02-12T06:21:05Z
77
5
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:04Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: household-rooms results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8482142686843872 --- # household-rooms Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bathroom ![bathroom](images/bathroom.jpg) #### bedroom ![bedroom](images/bedroom.jpg) #### dining room ![dining room](images/dining_room.jpg) #### kitchen ![kitchen](images/kitchen.jpg) #### living room ![living room](images/living_room.jpg)
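The HuggingPics card shows example images but no code. A minimal sketch (not from the original card) using the image-classification pipeline; `"kitchen.jpg"` is a placeholder, and the pipeline also accepts a URL or a `PIL.Image`.

```python
from transformers import pipeline

# Classify a room photo into the five household-room labels listed above.
classifier = pipeline("image-classification", model="HHousen/household-rooms")
print(classifier("kitchen.jpg"))  # placeholder image path
```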
thyagosme/bert-base-uncased-finetuned-swag
thyagosme
2022-02-12T02:13:46Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "dataset:swag", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - swag metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 1.0438 - Accuracy: 0.7915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.7708 | 1.0 | 4597 | 0.6025 | 0.7659 | | 0.4015 | 2.0 | 9194 | 0.6287 | 0.7841 | | 0.1501 | 3.0 | 13791 | 1.0438 | 0.7915 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
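The SWAG fine-tune scores candidate sentence continuations, but the card omits a usage example. Below is a minimal sketch (not part of the original card) with `AutoModelForMultipleChoice`; the prompt and choices are invented, and the tensor reshaping follows the standard multiple-choice input layout of shape (batch_size, num_choices, seq_len).

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("thyagosme/bert-base-uncased-finetuned-swag")
model = AutoModelForMultipleChoice.from_pretrained("thyagosme/bert-base-uncased-finetuned-swag")

prompt = "She put the kettle on the stove and"
choices = ["waited for the water to boil.", "parked the car in the garage."]

# Encode the prompt against every candidate ending, then add the batch dimension.
enc = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits

print(choices[logits.argmax(-1).item()])
```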
jimypbr/bert-base-uncased-squad
jimypbr
2022-02-11T22:28:31Z
17
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 --- # BERT-Base Uncased SQuADv1 `bert-base-uncased` trained on question answering with `squad`. Evaluation scores: ``` ***** eval metrics ***** epoch = 3.0 eval_exact_match = 80.6906 eval_f1 = 88.1129 eval_samples = 10784 ```
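The card gives only the SQuAD scores; a minimal extractive-QA sketch (not from the original card) using the question-answering pipeline. The question and context strings are illustrative placeholders.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="jimypbr/bert-base-uncased-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The checkpoint is bert-base-uncased fine-tuned on the SQuAD v1.1 dataset.",
)
print(result["answer"], result["score"])
```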
speech-seq2seq/wav2vec2-2-gpt2-medium
speech-seq2seq
2022-02-11T22:26:54Z
13
1
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 3.5264 - Wer: 1.7073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.4032 | 0.28 | 500 | 4.6724 | 1.9406 | | 4.6417 | 0.56 | 1000 | 4.7143 | 1.8874 | | 4.5725 | 0.84 | 1500 | 4.6413 | 1.9451 | | 4.0178 | 1.12 | 2000 | 4.5470 | 1.8861 | | 3.9084 | 1.4 | 2500 | 4.4360 | 1.8881 | | 3.9297 | 1.68 | 3000 | 4.2814 | 1.8652 | | 3.707 | 1.96 | 3500 | 4.1035 | 1.8320 | | 3.1373 | 2.24 | 4000 | 3.9557 | 1.7762 | | 3.3152 | 2.52 | 4500 | 3.7737 | 1.7454 | | 2.9501 | 2.8 | 5000 | 3.5264 | 1.7073 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
huggingtweets/sauce__world
huggingtweets
2022-02-11T22:14:53Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/sauce__world/1644617665459/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1488960307305218049/nAFuBERK_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">poolboy sauce world</div> <div style="text-align: center; font-size: 14px;">@sauce__world</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from poolboy sauce world. | Data | poolboy sauce world | | --- | --- | | Tweets downloaded | 3192 | | Retweets | 323 | | Short tweets | 513 | | Tweets kept | 2356 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20dtxww4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sauce__world's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vh9fgsnx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vh9fgsnx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sauce__world') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
BigSalmon/InformalToFormalLincoln21
BigSalmon
2022-02-11T21:24:42Z
11
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Informal to Formal: Wordy to Concise: Fill Missing Phrase: ``` from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21") model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln21") ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time) ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time) ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ```` ``` infill: increasing the number of sidewalks in suburban areas will [MASK]. Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ). infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago. infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly. Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly. infill: ``` ``` *** wordy: chancing upon a linux user is a rare occurrence in the present day. Translate into Concise Text: present-day linux users are rare. *** wordy: an interest in classical music is becoming more and more less popular. Translate into Concise Text: classical music appreciation is dwindling. Translate into Concise Text: waning interest in classic music persists. Translate into Concise Text: interest in classic music is fading. *** wordy: the ice cream was only one dollar, but it was not a good value for the size. Translate into Concise Text: the one dollar ice cream was overpriced for its size. Translate into Concise Text: overpriced, the one dollar ice cream was small. *** wordy: ```
ArBert/bert-base-uncased-finetuned-ner-kmeans
ArBert
2022-02-11T16:45:09Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased-finetuned-ner-kmeans results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner-kmeans This model is a fine-tuned version of [ArBert/bert-base-uncased-finetuned-ner](https://huggingface.co/ArBert/bert-base-uncased-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1169 - Precision: 0.9084 - Recall: 0.9245 - F1: 0.9164 - Accuracy: 0.9792 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.036 | 1.0 | 1123 | 0.1010 | 0.9086 | 0.9117 | 0.9101 | 0.9779 | | 0.0214 | 2.0 | 2246 | 0.1094 | 0.9033 | 0.9199 | 0.9115 | 0.9784 | | 0.014 | 3.0 | 3369 | 0.1169 | 0.9084 | 0.9245 | 0.9164 | 0.9792 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
emre/wav2vec2-xls-r-300m-hy-AM-CV8-v1
emre
2022-02-11T15:29:46Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - robust-speech-event datasets: - common_voice model-index: - name: wav2vec2-xls-r-300m-hy-AM-CV8-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-hy-AM-CV8-v1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9145 - Wer: 0.9598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 170 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 5.7132 | 83.31 | 500 | 1.9274 | 1.0523 | | 1.017 | 166.62 | 1000 | 0.9145 | 0.9598 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
sshasnain/wav2vec2-xls-r-300m-bangla-command
sshasnain
2022-02-11T13:10:44Z
7
2
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "bn", "audio", "speech", "dataset:custom", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: Bengali datasets: - custom metrics: - wer tags: - bn - audio - automatic-speech-recognition - speech license: apache-2.0 model-index: - name: wav2vec2-xls-r-300m-bangla-command results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: custom type: custom args: ben metrics: - name: Test WER type: wer value: 0.006 --- # wav2vec2-xls-r-300m-bangla-command *** ## Usage Commands '৫ টা কলম দেন' 'চেয়ারটা কোথায় রেখেছেন' 'ডানের বালতিটার প্রাইজ কেমন' 'দশ কেজি আলু কত' 'বাজুসের ল্যাপটপটা এসেছে' 'বাসার জন্য দরজা আছে' 'ম্যাম মোবাইলটা কি আছে' 'হ্যালো শ্যাম্পুর দাম বল'
csikasote/wav2vec2-large-xls-r-1b-bemba-fds
csikasote
2022-02-11T12:28:52Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "bem", "robust-speech-event", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - bem - robust-speech-event model-index: - name: wav2vec2-large-xls-r-1b-bemba-fds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1b-bemba-fds This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the [BembaSpeech](https://github.com/csikasote/BembaSpeech) dataset. It achieves the following results on the evaluation set: - Loss: 0.2898 - Wer: 0.3435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.7986 | 0.34 | 500 | 0.4549 | 0.7292 | | 0.5358 | 0.67 | 1000 | 0.3325 | 0.4491 | | 0.4559 | 1.01 | 1500 | 0.3090 | 0.3954 | | 0.3983 | 1.35 | 2000 | 0.3067 | 0.4105 | | 0.4067 | 1.68 | 2500 | 0.2838 | 0.3678 | | 0.3722 | 2.02 | 3000 | 0.2824 | 0.3762 | | 0.3286 | 2.36 | 3500 | 0.2810 | 0.3670 | | 0.3239 | 2.69 | 4000 | 0.2643 | 0.3501 | | 0.3187 | 3.03 | 4500 | 0.2838 | 0.3754 | | 0.2801 | 3.36 | 5000 | 0.2815 | 0.3507 | | 0.2806 | 3.7 | 5500 | 0.2725 | 0.3486 | | 0.2714 | 4.04 | 6000 | 0.2898 | 0.3435 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
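For readers who want more control than the pipeline offers, the sketch below loads the processor and model directly and performs greedy CTC decoding. The file name and the resampling step are assumptions; XLS-R checkpoints expect 16 kHz input.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "csikasote/wav2vec2-large-xls-r-1b-bemba-fds"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "bemba_sample.wav" is a placeholder for a mono recording; resample to 16 kHz.
waveform, sample_rate = torchaudio.load("bemba_sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the most likely token at every frame, then let the
# processor collapse repeats and strip the blank token.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```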
huggingtweets/albinkurti
huggingtweets
2022-02-11T11:38:45Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/albinkurti/1644579521299/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1425007522067386368/k0GygSdD_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Albin Kurti</div> <div style="text-align: center; font-size: 14px;">@albinkurti</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Albin Kurti. | Data | Albin Kurti | | --- | --- | | Tweets downloaded | 741 | | Retweets | 32 | | Short tweets | 11 | | Tweets kept | 698 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yhql26z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @albinkurti's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/txe5baun) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/txe5baun/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/albinkurti') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
medA/autonlp-FR_another_test-565016091
medA
2022-02-11T11:08:02Z
3
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "autonlp", "fr", "dataset:medA/autonlp-data-FR_another_test", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: fr widget: - text: "I love AutoNLP 🤗" datasets: - medA/autonlp-data-FR_another_test co2_eq_emissions: 70.54639641012226 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 565016091 - CO2 Emissions (in grams): 70.54639641012226 ## Validation Metrics - Loss: 0.5170354247093201 - Accuracy: 0.8545909432074056 - Macro F1: 0.7910662503820883 - Micro F1: 0.8545909432074056 - Weighted F1: 0.8539837213761081 - Macro Precision: 0.8033640381948799 - Micro Precision: 0.8545909432074056 - Weighted Precision: 0.856160322286008 - Macro Recall: 0.7841845637031052 - Micro Recall: 0.8545909432074056 - Weighted Recall: 0.8545909432074056 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/medA/autonlp-FR_another_test-565016091 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("medA/autonlp-FR_another_test-565016091", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("medA/autonlp-FR_another_test-565016091", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
mvip/wav2vec2-large-xls-r-300m-tr
mvip
2022-02-11T10:58:45Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-tr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-tr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4074 - Wer: 0.4227 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.9399 | 4.21 | 400 | 0.7252 | 0.7387 | | 0.4147 | 8.42 | 800 | 0.4693 | 0.5201 | | 0.1855 | 12.63 | 1200 | 0.4584 | 0.4848 | | 0.1256 | 16.84 | 1600 | 0.4464 | 0.4708 | | 0.0948 | 21.05 | 2000 | 0.4261 | 0.4389 | | 0.0714 | 25.26 | 2400 | 0.4331 | 0.4349 | | 0.0532 | 29.47 | 2800 | 0.4074 | 0.4227 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
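The WER figures above come from the Trainer's built-in evaluation loop. As a small illustration of how the same metric can be recomputed offline, the sketch below uses the WER metric shipped with the datasets version pinned in this card (newer stacks expose the same metric through the evaluate library); the example strings are purely illustrative.

```python
from datasets import load_metric

# Illustrative strings only; in practice these would be the model's
# transcriptions and the Common Voice reference sentences.
predictions = ["merhaba dünya", "bugün hava çok güzel"]
references = ["merhaba dünya", "bugün hava cok güzel"]

wer_metric = load_metric("wer")
print(wer_metric.compute(predictions=predictions, references=references))
```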
edbeeching/test-trainer-to-hub
edbeeching
2022-02-11T10:36:07Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: test-trainer-to-hub results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8455882352941176 - name: F1 type: f1 value: 0.893760539629005 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-trainer-to-hub This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7352 - Accuracy: 0.8456 - F1: 0.8938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 459 | 0.4489 | 0.8235 | 0.8792 | | 0.5651 | 2.0 | 918 | 0.4885 | 0.8260 | 0.8811 | | 0.3525 | 3.0 | 1377 | 0.7352 | 0.8456 | 0.8938 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
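The card leaves the usage section empty; since MRPC is a sentence-pair task, the sketch below shows how a pair would be scored. The label names are an assumption: unless id2label was customised before pushing, the model will report generic LABEL_0 / LABEL_1, with index 1 conventionally meaning "equivalent" for MRPC.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "edbeeching/test-trainer-to-hub"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC asks whether two sentences are paraphrases, so both are encoded together.
inputs = tokenizer(
    "The company said profits rose in the quarter.",
    "Quarterly profits increased, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(probs)  # probability of "not equivalent" vs "equivalent" (assumed order)
```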
shahukareem/wav2vec2-xls-r-1b-dv
shahukareem
2022-02-11T08:15:25Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "dv", "robust-speech-event", "model_for_talk", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - dv - robust-speech-event - model_for_talk datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-xls-r-1b-dv results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: dv metrics: - name: Test WER type: wer value: 21.32 - name: Test CER type: cer value: 3.43 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-1b-dv This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.1702 - Wer: 0.2123 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.8412 | 0.66 | 400 | 0.7160 | 0.7913 | | 0.6832 | 1.33 | 800 | 0.3401 | 0.5268 | | 0.4624 | 1.99 | 1200 | 0.2671 | 0.4683 | | 0.3832 | 2.65 | 1600 | 0.2395 | 0.4410 | | 0.3443 | 3.32 | 2000 | 0.2410 | 0.4296 | | 0.324 | 3.98 | 2400 | 0.2302 | 0.4143 | | 0.2934 | 4.64 | 2800 | 0.2402 | 0.4136 | | 0.2773 | 5.31 | 3200 | 0.2134 | 0.4088 | | 0.2638 | 5.97 | 3600 | 0.2072 | 0.4037 | | 0.2479 | 6.63 | 4000 | 0.2036 | 0.3876 | | 0.2424 | 7.3 | 4400 | 0.2037 | 0.3767 | | 0.2249 | 7.96 | 4800 | 0.1959 | 0.3802 | | 0.2169 | 8.62 | 5200 | 0.1943 | 0.3813 | | 0.2109 | 9.29 | 5600 | 0.1944 | 0.3691 | | 0.1991 | 9.95 | 6000 | 0.1870 | 0.3589 | | 0.1917 | 10.61 | 6400 | 0.1834 | 0.3485 | | 0.1862 | 11.28 | 6800 | 0.1857 | 0.3486 | | 0.1744 | 11.94 | 7200 | 0.1812 | 0.3330 | | 0.171 | 12.6 | 7600 | 0.1797 | 0.3436 | | 0.1599 | 13.27 | 8000 | 0.1839 | 0.3319 | | 0.1597 | 13.93 | 8400 | 0.1737 | 0.3385 | | 0.1494 | 14.59 | 8800 | 0.1807 | 0.3239 | | 0.1444 | 15.26 | 9200 | 0.1750 | 0.3155 | | 0.1382 | 15.92 | 9600 | 0.1705 | 0.3084 | | 0.1299 | 16.58 | 10000 | 0.1777 | 0.2999 | | 0.1306 | 17.25 | 10400 | 0.1765 | 0.3056 | | 0.1239 | 17.91 | 10800 | 0.1676 | 0.2864 | | 0.1149 | 18.57 | 11200 | 0.1774 | 0.2861 | | 0.1134 | 19.24 | 11600 | 0.1654 | 0.2699 | | 0.1101 | 19.9 | 12000 | 0.1621 | 0.2651 | | 0.1038 | 20.56 | 12400 | 0.1686 | 0.2610 | | 0.1038 | 21.23 | 12800 | 0.1722 | 0.2559 | | 0.0988 | 21.89 | 13200 | 0.1708 | 0.2486 | | 0.0949 | 22.55 | 13600 | 0.1696 | 0.2453 | | 0.0913 | 23.22 | 14000 | 0.1677 | 0.2424 | | 0.0879 | 23.88 | 14400 | 0.1640 | 0.2359 | | 0.0888 | 24.54 | 14800 | 0.1697 | 0.2347 | | 0.0826 | 25.21 | 15200 | 0.1709 | 0.2314 | | 0.0819 | 25.87 | 15600 | 0.1679 | 0.2256 | | 0.0793 | 26.53 | 16000 | 0.1701 | 0.2214 | | 0.0773 | 27.2 | 16400 | 0.1682 | 0.2176 | | 0.0783 | 27.86 | 16800 | 0.1685 | 0.2165 | | 0.074 | 28.52 | 17200 | 
0.1688 | 0.2155 | | 0.0753 | 29.19 | 17600 | 0.1695 | 0.2110 | | 0.0699 | 29.85 | 18000 | 0.1702 | 0.2123 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
lgris/wav2vec2-large-xlsr-coraa-portuguese-cv8
lgris
2022-02-10T23:23:59Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: wav2vec2-large-xlsr-coraa-portuguese-cv8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-coraa-portuguese-cv8 This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.1626 - Wer: 0.1365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5614 | 0.1 | 100 | 0.2542 | 0.1986 | | 0.5181 | 0.19 | 200 | 0.2740 | 0.2146 | | 0.5056 | 0.29 | 300 | 0.2472 | 0.2068 | | 0.4747 | 0.39 | 400 | 0.2464 | 0.2166 | | 0.4627 | 0.48 | 500 | 0.2277 | 0.2041 | | 0.4403 | 0.58 | 600 | 0.2245 | 0.1977 | | 0.4413 | 0.68 | 700 | 0.2156 | 0.1968 | | 0.437 | 0.77 | 800 | 0.2102 | 0.1919 | | 0.4305 | 0.87 | 900 | 0.2130 | 0.1864 | | 0.4324 | 0.97 | 1000 | 0.2144 | 0.1902 | | 0.4217 | 1.06 | 1100 | 0.2230 | 0.1891 | | 0.3823 | 1.16 | 1200 | 0.2033 | 0.1774 | | 0.3641 | 1.25 | 1300 | 0.2143 | 0.1830 | | 0.3707 | 1.35 | 1400 | 0.2034 | 0.1793 | | 0.3767 | 1.45 | 1500 | 0.2029 | 0.1823 | | 0.3483 | 1.54 | 1600 | 0.1999 | 0.1740 | | 0.3577 | 1.64 | 1700 | 0.1928 | 0.1728 | | 0.3667 | 1.74 | 1800 | 0.1898 | 0.1726 | | 0.3283 | 1.83 | 1900 | 0.1920 | 0.1688 | | 0.3571 | 1.93 | 2000 | 0.1904 | 0.1649 | | 0.3467 | 2.03 | 2100 | 0.1994 | 0.1648 | | 0.3145 | 2.12 | 2200 | 0.1940 | 0.1682 | | 0.3186 | 2.22 | 2300 | 0.1879 | 0.1571 | | 0.3058 | 2.32 | 2400 | 0.1975 | 0.1678 | | 0.3096 | 2.41 | 2500 | 0.1877 | 0.1589 | | 0.2964 | 2.51 | 2600 | 0.1862 | 0.1568 | | 0.3068 | 2.61 | 2700 | 0.1809 | 0.1588 | | 0.3036 | 2.7 | 2800 | 0.1769 | 0.1573 | | 0.3084 | 2.8 | 2900 | 0.1836 | 0.1524 | | 0.3109 | 2.9 | 3000 | 0.1807 | 0.1519 | | 0.2969 | 2.99 | 3100 | 0.1851 | 0.1516 | | 0.2698 | 3.09 | 3200 | 0.1737 | 0.1490 | | 0.2703 | 3.19 | 3300 | 0.1759 | 0.1457 | | 0.2759 | 3.28 | 3400 | 0.1778 | 0.1471 | | 0.2728 | 3.38 | 3500 | 0.1717 | 0.1462 | | 0.2398 | 3.47 | 3600 | 0.1767 | 0.1451 | | 0.256 | 3.57 | 3700 | 0.1742 | 0.1410 | | 0.2712 | 3.67 | 3800 | 0.1674 | 0.1414 | | 0.2648 | 3.76 | 3900 | 0.1717 | 0.1423 | | 0.2576 | 3.86 | 4000 | 0.1672 | 0.1403 | | 0.2504 | 3.96 | 4100 | 0.1683 | 0.1381 | | 0.2406 | 4.05 | 4200 | 0.1685 | 0.1399 | | 0.2403 | 4.15 | 4300 | 0.1656 | 0.1381 | | 0.2233 | 4.25 | 4400 | 0.1687 | 0.1371 | | 0.2546 | 4.34 | 4500 | 0.1642 | 0.1377 | | 0.2431 | 4.44 | 4600 | 0.1655 | 0.1372 | | 0.2337 | 4.54 | 4700 | 0.1625 | 0.1370 | | 0.2607 | 4.63 | 4800 | 0.1618 | 0.1363 | | 0.2292 | 4.73 | 4900 | 0.1622 | 0.1366 | | 0.2232 | 4.83 | 5000 | 0.1626 | 0.1365 | ### Framework versions - Transformers 4.16.2 - 
Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
lgris/WavLM-large-CORAA-pt
lgris
2022-02-10T23:21:45Z
12
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "generated_from_trainer", "pt", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - pt license: apache-2.0 tags: - generated_from_trainer - pt model-index: - name: WavLM-large-CORAA-pt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WavLM-large-CORAA-pt This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on [CORAA dataset](https://github.com/nilc-nlp/CORAA). It achieves the following results on the evaluation set: - Loss: 0.6144 - Wer: 0.3840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 40000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | No log | 0.04 | 1000 | 1.9230 | 0.9960 | | 5.153 | 0.08 | 2000 | 1.3733 | 0.8444 | | 5.153 | 0.13 | 3000 | 1.1992 | 0.7362 | | 1.367 | 0.17 | 4000 | 1.1289 | 0.6957 | | 1.367 | 0.21 | 5000 | 1.0357 | 0.6470 | | 1.1824 | 0.25 | 6000 | 1.0216 | 0.6201 | | 1.1824 | 0.29 | 7000 | 0.9338 | 0.6036 | | 1.097 | 0.33 | 8000 | 0.9149 | 0.5760 | | 1.097 | 0.38 | 9000 | 0.8885 | 0.5541 | | 1.0254 | 0.42 | 10000 | 0.8678 | 0.5366 | | 1.0254 | 0.46 | 11000 | 0.8349 | 0.5323 | | 0.9782 | 0.5 | 12000 | 0.8230 | 0.5155 | | 0.9782 | 0.54 | 13000 | 0.8245 | 0.5049 | | 0.9448 | 0.59 | 14000 | 0.7802 | 0.4990 | | 0.9448 | 0.63 | 15000 | 0.7650 | 0.4900 | | 0.9092 | 0.67 | 16000 | 0.7665 | 0.4796 | | 0.9092 | 0.71 | 17000 | 0.7568 | 0.4795 | | 0.8764 | 0.75 | 18000 | 0.7403 | 0.4615 | | 0.8764 | 0.8 | 19000 | 0.7219 | 0.4644 | | 0.8498 | 0.84 | 20000 | 0.7180 | 0.4502 | | 0.8498 | 0.88 | 21000 | 0.7017 | 0.4436 | | 0.8278 | 0.92 | 22000 | 0.6992 | 0.4395 | | 0.8278 | 0.96 | 23000 | 0.7021 | 0.4329 | | 0.8077 | 1.0 | 24000 | 0.6892 | 0.4265 | | 0.8077 | 1.05 | 25000 | 0.6940 | 0.4248 | | 0.7486 | 1.09 | 26000 | 0.6767 | 0.4202 | | 0.7486 | 1.13 | 27000 | 0.6734 | 0.4150 | | 0.7459 | 1.17 | 28000 | 0.6650 | 0.4152 | | 0.7459 | 1.21 | 29000 | 0.6559 | 0.4078 | | 0.7304 | 1.26 | 30000 | 0.6536 | 0.4088 | | 0.7304 | 1.3 | 31000 | 0.6537 | 0.4025 | | 0.7183 | 1.34 | 32000 | 0.6462 | 0.4008 | | 0.7183 | 1.38 | 33000 | 0.6381 | 0.3973 | | 0.7059 | 1.42 | 34000 | 0.6266 | 0.3930 | | 0.7059 | 1.46 | 35000 | 0.6280 | 0.3921 | | 0.6983 | 1.51 | 36000 | 0.6248 | 0.3897 | | 0.6983 | 1.55 | 37000 | 0.6275 | 0.3872 | | 0.6892 | 1.59 | 38000 | 0.6199 | 0.3852 | | 0.6892 | 1.63 | 39000 | 0.6180 | 0.3842 | | 0.691 | 1.67 | 40000 | 0.6144 | 0.3840 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
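WavLM CTC checkpoints load through the same auto classes and pipeline as the wav2vec2 models above. Purely as a sketch, and assuming a transformers version recent enough to support chunked inference, a longer Portuguese recording could be transcribed like this (the file name and the 30-second chunk length are arbitrary choices):

```python
from transformers import pipeline

# chunk_length_s splits long audio into overlapping windows so the CTC model
# can handle recordings longer than the clips it was fine-tuned on.
asr = pipeline(
    "automatic-speech-recognition",
    model="lgris/WavLM-large-CORAA-pt",
    chunk_length_s=30,
)

print(asr("entrevista_longa.wav")["text"])  # placeholder file name
```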
lgris/wavlm-large-CORAA-pt-cv7
lgris
2022-02-10T23:16:09Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wavlm", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer - pt datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: wavlm-large-CORAA-pt-cv7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-large-CORAA-pt-cv7 This model is a fine-tuned version of [lgris/WavLM-large-CORAA-pt](https://huggingface.co/lgris/WavLM-large-CORAA-pt) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2546 - Wer: 0.2261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6029 | 0.13 | 100 | 0.3679 | 0.3347 | | 0.5297 | 0.26 | 200 | 0.3516 | 0.3227 | | 0.5134 | 0.39 | 300 | 0.3327 | 0.3167 | | 0.4941 | 0.52 | 400 | 0.3281 | 0.3122 | | 0.4816 | 0.65 | 500 | 0.3154 | 0.3102 | | 0.4649 | 0.78 | 600 | 0.3199 | 0.3058 | | 0.461 | 0.91 | 700 | 0.3047 | 0.2974 | | 0.4613 | 1.04 | 800 | 0.3006 | 0.2900 | | 0.4198 | 1.17 | 900 | 0.2951 | 0.2891 | | 0.3864 | 1.3 | 1000 | 0.2989 | 0.2862 | | 0.3963 | 1.43 | 1100 | 0.2932 | 0.2830 | | 0.3953 | 1.56 | 1200 | 0.2936 | 0.2829 | | 0.3962 | 1.69 | 1300 | 0.2952 | 0.2773 | | 0.3811 | 1.82 | 1400 | 0.2915 | 0.2748 | | 0.3736 | 1.95 | 1500 | 0.2839 | 0.2684 | | 0.3507 | 2.08 | 1600 | 0.2914 | 0.2678 | | 0.3277 | 2.21 | 1700 | 0.2895 | 0.2652 | | 0.3344 | 2.34 | 1800 | 0.2843 | 0.2673 | | 0.335 | 2.47 | 1900 | 0.2821 | 0.2635 | | 0.3559 | 2.6 | 2000 | 0.2830 | 0.2599 | | 0.3254 | 2.73 | 2100 | 0.2711 | 0.2577 | | 0.3263 | 2.86 | 2200 | 0.2685 | 0.2546 | | 0.3266 | 2.99 | 2300 | 0.2679 | 0.2521 | | 0.3066 | 3.12 | 2400 | 0.2727 | 0.2526 | | 0.2998 | 3.25 | 2500 | 0.2648 | 0.2537 | | 0.2961 | 3.38 | 2600 | 0.2630 | 0.2519 | | 0.3046 | 3.51 | 2700 | 0.2684 | 0.2506 | | 0.3006 | 3.64 | 2800 | 0.2604 | 0.2492 | | 0.2992 | 3.77 | 2900 | 0.2682 | 0.2508 | | 0.2775 | 3.9 | 3000 | 0.2732 | 0.2440 | | 0.2903 | 4.03 | 3100 | 0.2659 | 0.2427 | | 0.2535 | 4.16 | 3200 | 0.2650 | 0.2433 | | 0.2714 | 4.29 | 3300 | 0.2588 | 0.2394 | | 0.2636 | 4.42 | 3400 | 0.2652 | 0.2434 | | 0.2647 | 4.55 | 3500 | 0.2624 | 0.2371 | | 0.2796 | 4.67 | 3600 | 0.2611 | 0.2373 | | 0.2644 | 4.8 | 3700 | 0.2604 | 0.2341 | | 0.2657 | 4.93 | 3800 | 0.2567 | 0.2331 | | 0.2423 | 5.06 | 3900 | 0.2594 | 0.2322 | | 0.2556 | 5.19 | 4000 | 0.2587 | 0.2323 | | 0.2327 | 5.32 | 4100 | 0.2639 | 0.2299 | | 0.2613 | 5.45 | 4200 | 0.2569 | 0.2310 | | 0.2382 | 5.58 | 4300 | 0.2585 | 0.2298 | | 0.2404 | 5.71 | 4400 | 0.2543 | 0.2287 | | 0.2368 | 5.84 | 4500 | 0.2553 | 0.2286 | | 0.2514 | 5.97 | 4600 | 0.2517 | 0.2279 | | 0.2415 | 6.1 | 4700 | 0.2524 | 0.2270 | | 0.2338 | 6.23 | 4800 | 0.2540 | 0.2265 | | 0.219 | 6.36 | 4900 | 0.2549 | 0.2263 | | 0.2428 | 6.49 | 5000 | 0.2546 | 0.2261 | ### Framework versions - 
Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
emre/wav2vec2-large-xlsr-53-W2V2-TR-MED
emre
2022-02-10T22:55:21Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - robust-speech-event datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-53-W2V2-TR-MED results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-W2V2-TR-MED This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4467 - Wer: 0.4598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.1343 | 4.21 | 400 | 2.3674 | 1.0372 | | 0.8075 | 8.42 | 800 | 0.4583 | 0.6308 | | 0.3209 | 12.63 | 1200 | 0.4291 | 0.5531 | | 0.2273 | 16.84 | 1600 | 0.4348 | 0.5378 | | 0.1764 | 21.05 | 2000 | 0.4550 | 0.5326 | | 0.148 | 25.26 | 2400 | 0.4839 | 0.5319 | | 0.1268 | 29.47 | 2800 | 0.4515 | 0.5070 | | 0.1113 | 33.68 | 3200 | 0.4590 | 0.4930 | | 0.1025 | 37.89 | 3600 | 0.4546 | 0.4888 | | 0.0922 | 42.11 | 4000 | 0.4782 | 0.4852 | | 0.082 | 46.32 | 4400 | 0.4605 | 0.4752 | | 0.0751 | 50.53 | 4800 | 0.4358 | 0.4689 | | 0.0699 | 54.74 | 5200 | 0.4359 | 0.4629 | | 0.0633 | 58.95 | 5600 | 0.4467 | 0.4598 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
squish/BertHarmon
squish
2022-02-10T21:28:51Z
6
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- thumbnail: "https://en.memesrandom.com/wp-content/uploads/2020/11/juega-ajedrez.jpeg" widget: - text: "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]" - example_title: Empty Board - text: "6Q1/5k2/3P4/1R3p2/P4P2/7Q/6RK/8 b - - 2 60 Black <MOVE_SEP> [MASK]" - example_title: Late Game Board --- # BertHarmon Research done at Johns Hopkins University by Michael DeLeo Contact: [email protected] ![iu-13](logo.png) ## Introduction BertHarmon is a BERT model trained for the task of Chess. ![IMG_0145](chess-example.GIF) ## Sample Usage ```python from transformers import pipeline task = pipeline('fill-mask', model='squish/BertHarmon') task("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]") ``` The base string consists of the FEN_position followed by the player color and a move seperator. Finally with the [MASK] token. The mask token is the algebraic notation for a chess move to be taken givent the current board state in FEN Notation ## Links [Github](https://github.com/deleomike/NLP-Chess) [HuggingFace](https://huggingface.co/squish/BertHarmon)
huggingtweets/realsophiarobot
huggingtweets
2022-02-10T20:03:13Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/realsophiarobot/1644523350998/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1489664916508524545/ePAeH8lT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sophia the Robot</div> <div style="text-align: center; font-size: 14px;">@realsophiarobot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Sophia the Robot. | Data | Sophia the Robot | | --- | --- | | Tweets downloaded | 2341 | | Retweets | 313 | | Short tweets | 99 | | Tweets kept | 1929 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rfk5yso3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @realsophiarobot's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32n5oiz0) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32n5oiz0/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/realsophiarobot') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
skhurana/test_model
skhurana
2022-02-10T16:28:36Z
0
0
null
[ "pytorch", "region:us" ]
null
2022-03-02T23:29:05Z
# Hugging-face testing --- language: - "List of ISO 639-1 code for your language" - lang1 - lang2 thumbnail: "url to a thumbnail used in social sharing" tags: - PyTorch license: apache-2.0 datasets: - dataset1 - dataset2 metrics: - metric1 ---
huggingtweets/jpbrammer
huggingtweets
2022-02-10T15:50:29Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/jpbrammer/1644508224660/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1190049285842329600/qwCL5mdU_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">JP</div> <div style="text-align: center; font-size: 14px;">@jpbrammer</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from JP. | Data | JP | | --- | --- | | Tweets downloaded | 3206 | | Retweets | 938 | | Short tweets | 345 | | Tweets kept | 1923 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13lk57y6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jpbrammer's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3umvc7qg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3umvc7qg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jpbrammer') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new
ajaiswal1008
2022-02-10T15:11:14Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hi-colab_new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hi-colab_new This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
SetFit/deberta-v3-large__sst2__train-8-9
SetFit
2022-02-10T10:10:14Z
4
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large__sst2__train-8-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-9 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6013 - Accuracy: 0.7210 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6757 | 1.0 | 3 | 0.7810 | 0.25 | | 0.6506 | 2.0 | 6 | 0.8102 | 0.25 | | 0.6463 | 3.0 | 9 | 0.8313 | 0.25 | | 0.5813 | 4.0 | 12 | 0.8858 | 0.25 | | 0.4635 | 5.0 | 15 | 0.8220 | 0.25 | | 0.3992 | 6.0 | 18 | 0.7226 | 0.5 | | 0.3281 | 7.0 | 21 | 0.6707 | 0.75 | | 0.2276 | 8.0 | 24 | 0.7515 | 0.75 | | 0.1674 | 9.0 | 27 | 0.6971 | 0.75 | | 0.0873 | 10.0 | 30 | 0.5419 | 0.75 | | 0.0525 | 11.0 | 33 | 0.5025 | 0.75 | | 0.0286 | 12.0 | 36 | 0.5229 | 0.75 | | 0.0149 | 13.0 | 39 | 0.5660 | 0.75 | | 0.0082 | 14.0 | 42 | 0.6954 | 0.75 | | 0.006 | 15.0 | 45 | 0.8649 | 0.75 | | 0.0043 | 16.0 | 48 | 1.0011 | 0.75 | | 0.0035 | 17.0 | 51 | 1.0909 | 0.75 | | 0.0021 | 18.0 | 54 | 1.1615 | 0.75 | | 0.0017 | 19.0 | 57 | 1.2147 | 0.75 | | 0.0013 | 20.0 | 60 | 1.2585 | 0.75 | | 0.0016 | 21.0 | 63 | 1.2917 | 0.75 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
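As a quick, hedged usage sketch for this few-shot checkpoint: the labels are most likely the generic LABEL_0 / LABEL_1 (the card does not document a mapping), and with only 8 training examples per run the reported accuracy is about 72%, so the outputs below are illustrative rather than reliable.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="SetFit/deberta-v3-large__sst2__train-8-9",
)

# SST-2-style sentiment inputs; label names and their polarity are assumptions.
print(clf("a gripping, beautifully shot film"))
print(clf("a tedious and predictable mess"))
```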
SetFit/deberta-v3-large__sst2__train-8-8
SetFit
2022-02-10T09:59:57Z
5
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large__sst2__train-8-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-8 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7414 - Accuracy: 0.5623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6597 | 1.0 | 3 | 0.7716 | 0.25 | | 0.6376 | 2.0 | 6 | 0.7802 | 0.25 | | 0.5857 | 3.0 | 9 | 0.6625 | 0.75 | | 0.4024 | 4.0 | 12 | 0.5195 | 0.75 | | 0.2635 | 5.0 | 15 | 0.4222 | 1.0 | | 0.1714 | 6.0 | 18 | 0.4410 | 0.5 | | 0.1267 | 7.0 | 21 | 0.7773 | 0.75 | | 0.0582 | 8.0 | 24 | 0.9070 | 0.75 | | 0.0374 | 9.0 | 27 | 0.9539 | 0.75 | | 0.0204 | 10.0 | 30 | 1.0507 | 0.75 | | 0.012 | 11.0 | 33 | 1.2802 | 0.5 | | 0.0086 | 12.0 | 36 | 1.4272 | 0.5 | | 0.0049 | 13.0 | 39 | 1.4803 | 0.5 | | 0.0039 | 14.0 | 42 | 1.4912 | 0.5 | | 0.0031 | 15.0 | 45 | 1.5231 | 0.5 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/deberta-v3-large__sst2__train-8-7
SetFit
2022-02-10T09:52:48Z
5
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large__sst2__train-8-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-7 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7037 - Accuracy: 0.5008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6864 | 1.0 | 3 | 0.7800 | 0.25 | | 0.6483 | 2.0 | 6 | 0.8067 | 0.25 | | 0.6028 | 3.0 | 9 | 0.8500 | 0.25 | | 0.4086 | 4.0 | 12 | 1.0661 | 0.25 | | 0.2923 | 5.0 | 15 | 1.2302 | 0.25 | | 0.2059 | 6.0 | 18 | 1.0312 | 0.5 | | 0.1238 | 7.0 | 21 | 1.1271 | 0.5 | | 0.0711 | 8.0 | 24 | 1.3100 | 0.5 | | 0.0453 | 9.0 | 27 | 1.4208 | 0.5 | | 0.0198 | 10.0 | 30 | 1.5988 | 0.5 | | 0.0135 | 11.0 | 33 | 1.9174 | 0.5 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/deberta-v3-large__sst2__train-8-4
SetFit
2022-02-10T09:02:04Z
6
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large__sst2__train-8-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-4 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3023 - Accuracy: 0.7057 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6816 | 1.0 | 3 | 0.8072 | 0.25 | | 0.6672 | 2.0 | 6 | 0.8740 | 0.25 | | 0.6667 | 3.0 | 9 | 0.8578 | 0.25 | | 0.5346 | 4.0 | 12 | 1.0353 | 0.25 | | 0.4517 | 5.0 | 15 | 1.1030 | 0.25 | | 0.3095 | 6.0 | 18 | 0.9986 | 0.25 | | 0.2464 | 7.0 | 21 | 0.9286 | 0.5 | | 0.1342 | 8.0 | 24 | 0.4063 | 1.0 | | 0.0851 | 9.0 | 27 | 0.2210 | 1.0 | | 0.0491 | 10.0 | 30 | 0.2302 | 1.0 | | 0.0211 | 11.0 | 33 | 0.4020 | 0.75 | | 0.017 | 12.0 | 36 | 0.2382 | 1.0 | | 0.0084 | 13.0 | 39 | 0.0852 | 1.0 | | 0.0051 | 14.0 | 42 | 0.0354 | 1.0 | | 0.0047 | 15.0 | 45 | 0.0208 | 1.0 | | 0.0029 | 16.0 | 48 | 0.0155 | 1.0 | | 0.0022 | 17.0 | 51 | 0.0139 | 1.0 | | 0.0019 | 18.0 | 54 | 0.0144 | 1.0 | | 0.0016 | 19.0 | 57 | 0.0168 | 1.0 | | 0.0013 | 20.0 | 60 | 0.0231 | 1.0 | | 0.0011 | 21.0 | 63 | 0.0369 | 1.0 | | 0.0009 | 22.0 | 66 | 0.0528 | 1.0 | | 0.001 | 23.0 | 69 | 0.0639 | 1.0 | | 0.0009 | 24.0 | 72 | 0.0670 | 1.0 | | 0.0009 | 25.0 | 75 | 0.0526 | 1.0 | | 0.0008 | 26.0 | 78 | 0.0425 | 1.0 | | 0.0011 | 27.0 | 81 | 0.0135 | 1.0 | | 0.0007 | 28.0 | 84 | 0.0076 | 1.0 | | 0.0007 | 29.0 | 87 | 0.0057 | 1.0 | | 0.0007 | 30.0 | 90 | 0.0049 | 1.0 | | 0.0008 | 31.0 | 93 | 0.0045 | 1.0 | | 0.0007 | 32.0 | 96 | 0.0044 | 1.0 | | 0.0008 | 33.0 | 99 | 0.0043 | 1.0 | | 0.0005 | 34.0 | 102 | 0.0044 | 1.0 | | 0.0006 | 35.0 | 105 | 0.0045 | 1.0 | | 0.0006 | 36.0 | 108 | 0.0046 | 1.0 | | 0.0007 | 37.0 | 111 | 0.0048 | 1.0 | | 0.0006 | 38.0 | 114 | 0.0049 | 1.0 | | 0.0005 | 39.0 | 117 | 0.0050 | 1.0 | | 0.0005 | 40.0 | 120 | 0.0050 | 1.0 | | 0.0004 | 41.0 | 123 | 0.0051 | 1.0 | | 0.0005 | 42.0 | 126 | 0.0051 | 1.0 | | 0.0004 | 43.0 | 129 | 0.0051 | 1.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/deberta-v3-large__sst2__train-8-3
SetFit
2022-02-10T08:43:40Z
4
0
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: deberta-v3-large__sst2__train-8-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-3 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6421 - Accuracy: 0.6310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6696 | 1.0 | 3 | 0.7917 | 0.25 | | 0.6436 | 2.0 | 6 | 0.8107 | 0.25 | | 0.6923 | 3.0 | 9 | 0.8302 | 0.25 | | 0.5051 | 4.0 | 12 | 0.9828 | 0.25 | | 0.3688 | 5.0 | 15 | 0.7402 | 0.25 | | 0.2671 | 6.0 | 18 | 0.5820 | 0.75 | | 0.1935 | 7.0 | 21 | 0.8356 | 0.5 | | 0.0815 | 8.0 | 24 | 1.0431 | 0.25 | | 0.0591 | 9.0 | 27 | 0.9679 | 0.75 | | 0.0276 | 10.0 | 30 | 1.0659 | 0.75 | | 0.0175 | 11.0 | 33 | 0.9689 | 0.75 | | 0.0152 | 12.0 | 36 | 0.8820 | 0.75 | | 0.006 | 13.0 | 39 | 0.8337 | 0.75 | | 0.0041 | 14.0 | 42 | 0.7650 | 0.75 | | 0.0036 | 15.0 | 45 | 0.6960 | 0.75 | | 0.0034 | 16.0 | 48 | 0.6548 | 0.75 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
NbAiLab/Wav2Vec-Template
NbAiLab
2022-02-10T08:37:20Z
0
0
null
[ "automatic-speech-recognition", "NbAiLab/NPSC", "xxx-robust-speech-event", "no", "nb-NO", "dataset:NbAiLab/NPSC", "license:apache-2.0", "model-index", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - automatic-speech-recognition - NbAiLab/NPSC - xxx-robust-speech-event - no - nb-NO datasets: - NbAiLab/NPSC language: - nb-NO model-index: - name: wav2vec2-xls-r-1b-npsc-bokmaal-low-27k results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: NPSC type: NbAiLab/NPSC args: 16K_mp3_bokmaal metrics: - name: Test (Bokmål) WER type: wer value: 0.06686424124625939 - name: Test (Bokmål) CER type: cer value: 0.025697763468940576 --- # Norwegian Wav2Vec2 Model - 1B - Bokmål This achieves the following results on the test set with a 5-gram KenLM: - WER: 0.0668 - CER: 0.0256 Without using a language model, we are getting these results: - WER: ??? - CER: ??? ## Model description This is one of several Wav2Vec-models created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). In parallell with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://huggingface.co/datasets/NbAiLab/NPSC) to the 🤗 Dataset format and used that as the main source for training. We do release all code developed during the event so that the Norwegian NLP community can build upon this to develop even better Norwegian ASR models. The finetuning of these models are not very compute demanding. You should after following the instructions here, be able to train your own automatic speech recognition system in less than a day with an average GPU. ## Team The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen. ## Training procedure To reproduce these results, we strongly recommend that you follow the [instructions from HuggingFace](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model. When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running this will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck! ### Language Model As you see from the results above, adding even a simple 5-gram language will significantly improve the results. 🤗 has provided another [very nice blog](https://huggingface.co/blog/wav2vec2-with-ngram) about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model). 
### Parameters The following parameters were used during training: ``` --dataset_name="NbAiLab/NPSC" --model_name_or_path="facebook/wav2vec2-xls-r-1b" --dataset_config_name="16K_mp3_bokmaal" --output_dir="./" --overwrite_output_dir --num_train_epochs="40" --per_device_train_batch_size="12" --per_device_eval_batch_size="12" --gradient_accumulation_steps="2" --learning_rate="2e-5" --warmup_steps="2000" --length_column_name="input_length" --evaluation_strategy="steps" --text_column_name="text" --save_steps="500" --eval_steps="500" --logging_steps="100" --layerdrop="0.041" --attention_dropout="0.094" --activation_dropout="0.055" --hidden_dropout="0.047" --save_total_limit="3" --freeze_feature_encoder --feat_proj_dropout="0.04" --mask_time_prob="0.082" --mask_time_length="10" --mask_feature_prob="0.25" --mask_feature_length="64" --gradient_checkpointing --min_duration_in_seconds="0.5" --max_duration_in_seconds="30.0" --ctc_zero_infinity=True --use_auth_token --seed="42" --fp16 --group_by_length --do_train --do_eval --push_to_hub --preprocessing_num_workers="16" ``` Following this settings, the training might take 3-4 days on an average GPU. You should however get a decent model and faster results by tweaking these parameters | Parameter| Comment | |:-------------|:-----| | per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system | |gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues | | learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability | | epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
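To make the language-model section above a little more concrete: the recipe from the linked n-gram blog post boils down to wrapping the tokenizer's vocabulary and a KenLM file in a pyctcdecode decoder and saving a Wav2Vec2ProcessorWithLM next to the acoustic model. The sketch below is illustrative only; the acoustic-model repo is the one the card points to for a ready-made 5-gram, and the local KenLM path is a placeholder.

```python
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

# Acoustic model mentioned in the card; swap in your own fine-tuned repo.
model_id = "NbAiLab/XLSR-300M-bokmaal"
processor = Wav2Vec2Processor.from_pretrained(model_id)

# pyctcdecode expects the labels sorted by their vocabulary index.
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

decoder = build_ctcdecoder(
    labels=labels,
    kenlm_model_path="5gram.bin",  # placeholder path to the KenLM model
)

processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
processor_with_lm.save_pretrained("wav2vec2-bokmaal-with-lm")
```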
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-9
SetFit
2022-02-10T08:11:34Z
5
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-32-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-32-9 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7075 - Accuracy: 0.692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1054 | 1.0 | 19 | 1.0938 | 0.35 | | 1.0338 | 2.0 | 38 | 1.0563 | 0.65 | | 0.8622 | 3.0 | 57 | 0.9372 | 0.6 | | 0.5919 | 4.0 | 76 | 0.8461 | 0.6 | | 0.3357 | 5.0 | 95 | 1.0206 | 0.45 | | 0.1621 | 6.0 | 114 | 0.9802 | 0.7 | | 0.0637 | 7.0 | 133 | 1.2434 | 0.65 | | 0.0261 | 8.0 | 152 | 1.3865 | 0.65 | | 0.0156 | 9.0 | 171 | 1.4414 | 0.7 | | 0.01 | 10.0 | 190 | 1.5502 | 0.7 | | 0.0079 | 11.0 | 209 | 1.6102 | 0.7 | | 0.0062 | 12.0 | 228 | 1.6525 | 0.7 | | 0.0058 | 13.0 | 247 | 1.6884 | 0.7 | | 0.0046 | 14.0 | 266 | 1.7479 | 0.7 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
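As a usage note (not part of the auto-generated card): the checkpoint can be tried out with the text-classification pipeline. The label names are whatever the fine-tuned head was saved with, typically `LABEL_0`/`LABEL_1`/`LABEL_2` here, corresponding to the three classes of the source hate_speech_offensive dataset:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-9",
)

# Returns the top label and its score for each input string
print(clf(["Have a great day!", "I can't stand people like you."]))
```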
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-8
SetFit
2022-02-10T08:10:22Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-32-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-32-8 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9191 - Accuracy: 0.632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1008 | 1.0 | 19 | 1.0877 | 0.4 | | 1.0354 | 2.0 | 38 | 1.0593 | 0.35 | | 0.8765 | 3.0 | 57 | 0.9722 | 0.5 | | 0.6365 | 4.0 | 76 | 0.9271 | 0.55 | | 0.3944 | 5.0 | 95 | 0.7852 | 0.5 | | 0.2219 | 6.0 | 114 | 0.9360 | 0.55 | | 0.126 | 7.0 | 133 | 1.0610 | 0.55 | | 0.0389 | 8.0 | 152 | 1.0884 | 0.6 | | 0.0191 | 9.0 | 171 | 1.3483 | 0.55 | | 0.0108 | 10.0 | 190 | 1.4226 | 0.55 | | 0.0082 | 11.0 | 209 | 1.4270 | 0.55 | | 0.0065 | 12.0 | 228 | 1.5074 | 0.55 | | 0.0059 | 13.0 | 247 | 1.5577 | 0.55 | | 0.0044 | 14.0 | 266 | 1.5798 | 0.55 | | 0.0042 | 15.0 | 285 | 1.6196 | 0.55 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-6
SetFit
2022-02-10T08:08:00Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-32-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-32-6 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0523 - Accuracy: 0.663 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0957 | 1.0 | 19 | 1.0696 | 0.6 | | 1.0107 | 2.0 | 38 | 1.0047 | 0.55 | | 0.8257 | 3.0 | 57 | 0.8358 | 0.8 | | 0.6006 | 4.0 | 76 | 0.7641 | 0.6 | | 0.4172 | 5.0 | 95 | 0.5931 | 0.8 | | 0.2639 | 6.0 | 114 | 0.5570 | 0.7 | | 0.1314 | 7.0 | 133 | 0.5017 | 0.65 | | 0.0503 | 8.0 | 152 | 0.3115 | 0.75 | | 0.023 | 9.0 | 171 | 0.4353 | 0.85 | | 0.0128 | 10.0 | 190 | 0.5461 | 0.75 | | 0.0092 | 11.0 | 209 | 0.5045 | 0.8 | | 0.007 | 12.0 | 228 | 0.5014 | 0.8 | | 0.0064 | 13.0 | 247 | 0.5070 | 0.8 | | 0.0049 | 14.0 | 266 | 0.4681 | 0.8 | | 0.0044 | 15.0 | 285 | 0.4701 | 0.8 | | 0.0039 | 16.0 | 304 | 0.4862 | 0.8 | | 0.0036 | 17.0 | 323 | 0.4742 | 0.8 | | 0.0035 | 18.0 | 342 | 0.4652 | 0.8 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
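The hyperparameter lists in these cards map almost one-to-one onto `TrainingArguments`. A minimal sketch of the corresponding setup (dataset loading and tokenization are assumed to happen elsewhere; `num_labels=3` is an assumption based on the three-class source dataset, and the optimizer defaults already match the listed Adam settings):

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

def build_trainer(train_dataset, eval_dataset):
    """Return a Trainer configured like the runs described in these cards."""
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=3
    )
    args = TrainingArguments(
        output_dir="out",
        learning_rate=2e-5,
        per_device_train_batch_size=4,
        per_device_eval_batch_size=4,
        num_train_epochs=50,
        seed=42,
        lr_scheduler_type="linear",
        fp16=True,  # "mixed_precision_training: Native AMP"
    )
    return Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        tokenizer=tokenizer,
    )
```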
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-4
SetFit
2022-02-10T08:05:22Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-32-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-32-4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7384 - Accuracy: 0.724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1013 | 1.0 | 19 | 1.0733 | 0.55 | | 1.0226 | 2.0 | 38 | 1.0064 | 0.65 | | 0.8539 | 3.0 | 57 | 0.8758 | 0.75 | | 0.584 | 4.0 | 76 | 0.6941 | 0.7 | | 0.2813 | 5.0 | 95 | 0.5151 | 0.7 | | 0.1122 | 6.0 | 114 | 0.4351 | 0.8 | | 0.0432 | 7.0 | 133 | 0.4896 | 0.85 | | 0.0199 | 8.0 | 152 | 0.5391 | 0.85 | | 0.0126 | 9.0 | 171 | 0.5200 | 0.85 | | 0.0085 | 10.0 | 190 | 0.5622 | 0.85 | | 0.0069 | 11.0 | 209 | 0.5950 | 0.85 | | 0.0058 | 12.0 | 228 | 0.6015 | 0.85 | | 0.0053 | 13.0 | 247 | 0.6120 | 0.85 | | 0.0042 | 14.0 | 266 | 0.6347 | 0.85 | | 0.0039 | 15.0 | 285 | 0.6453 | 0.85 | | 0.0034 | 16.0 | 304 | 0.6660 | 0.85 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-2
SetFit
2022-02-10T08:02:54Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-32-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-32-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7136 - Accuracy: 0.679 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1052 | 1.0 | 19 | 1.0726 | 0.45 | | 1.0421 | 2.0 | 38 | 1.0225 | 0.5 | | 0.9173 | 3.0 | 57 | 0.9164 | 0.6 | | 0.6822 | 4.0 | 76 | 0.8251 | 0.7 | | 0.4407 | 5.0 | 95 | 0.8908 | 0.5 | | 0.2367 | 6.0 | 114 | 0.6772 | 0.75 | | 0.1145 | 7.0 | 133 | 0.7792 | 0.65 | | 0.0479 | 8.0 | 152 | 1.0657 | 0.6 | | 0.0186 | 9.0 | 171 | 1.2228 | 0.65 | | 0.0111 | 10.0 | 190 | 1.1100 | 0.6 | | 0.0083 | 11.0 | 209 | 1.1991 | 0.65 | | 0.0067 | 12.0 | 228 | 1.2654 | 0.65 | | 0.0061 | 13.0 | 247 | 1.2837 | 0.65 | | 0.0046 | 14.0 | 266 | 1.2860 | 0.6 | | 0.0043 | 15.0 | 285 | 1.3160 | 0.65 | | 0.0037 | 16.0 | 304 | 1.3323 | 0.65 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-1
SetFit
2022-02-10T08:01:40Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-32-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-32-1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0606 - Accuracy: 0.4745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0941 | 1.0 | 19 | 1.1045 | 0.2 | | 0.9967 | 2.0 | 38 | 1.1164 | 0.35 | | 0.8164 | 3.0 | 57 | 1.1570 | 0.4 | | 0.5884 | 4.0 | 76 | 1.2403 | 0.35 | | 0.3322 | 5.0 | 95 | 1.3815 | 0.35 | | 0.156 | 6.0 | 114 | 1.8102 | 0.3 | | 0.0576 | 7.0 | 133 | 2.1439 | 0.4 | | 0.0227 | 8.0 | 152 | 2.4368 | 0.3 | | 0.0133 | 9.0 | 171 | 2.5994 | 0.4 | | 0.009 | 10.0 | 190 | 2.7388 | 0.35 | | 0.0072 | 11.0 | 209 | 2.8287 | 0.35 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-9
SetFit
2022-02-10T07:59:15Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-16-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-16-9 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1121 - Accuracy: 0.16 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1038 | 1.0 | 10 | 1.1243 | 0.1 | | 1.0859 | 2.0 | 20 | 1.1182 | 0.2 | | 1.0234 | 3.0 | 30 | 1.1442 | 0.3 | | 0.9493 | 4.0 | 40 | 1.2239 | 0.1 | | 0.8114 | 5.0 | 50 | 1.2023 | 0.4 | | 0.6464 | 6.0 | 60 | 1.2329 | 0.4 | | 0.4731 | 7.0 | 70 | 1.2971 | 0.5 | | 0.3355 | 8.0 | 80 | 1.3913 | 0.4 | | 0.1268 | 9.0 | 90 | 1.4670 | 0.5 | | 0.0747 | 10.0 | 100 | 1.7961 | 0.4 | | 0.0449 | 11.0 | 110 | 1.8168 | 0.5 | | 0.0307 | 12.0 | 120 | 1.9307 | 0.4 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-7
SetFit
2022-02-10T07:57:08Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-16-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-16-7 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9011 - Accuracy: 0.578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0968 | 1.0 | 10 | 1.1309 | 0.0 | | 1.0709 | 2.0 | 20 | 1.1237 | 0.1 | | 0.9929 | 3.0 | 30 | 1.1254 | 0.1 | | 0.878 | 4.0 | 40 | 1.1206 | 0.5 | | 0.7409 | 5.0 | 50 | 1.0831 | 0.1 | | 0.5663 | 6.0 | 60 | 0.9830 | 0.6 | | 0.4105 | 7.0 | 70 | 0.9919 | 0.5 | | 0.2912 | 8.0 | 80 | 1.0472 | 0.6 | | 0.1013 | 9.0 | 90 | 1.1617 | 0.4 | | 0.0611 | 10.0 | 100 | 1.2789 | 0.6 | | 0.039 | 11.0 | 110 | 1.4091 | 0.4 | | 0.0272 | 12.0 | 120 | 1.4974 | 0.4 | | 0.0189 | 13.0 | 130 | 1.4845 | 0.5 | | 0.018 | 14.0 | 140 | 1.4924 | 0.5 | | 0.0131 | 15.0 | 150 | 1.5206 | 0.6 | | 0.0116 | 16.0 | 160 | 1.5858 | 0.5 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-6
SetFit
2022-02-10T07:55:56Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-16-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-16-6 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8331 - Accuracy: 0.625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0881 | 1.0 | 10 | 1.1248 | 0.1 | | 1.0586 | 2.0 | 20 | 1.1162 | 0.2 | | 0.9834 | 3.0 | 30 | 1.1199 | 0.3 | | 0.9271 | 4.0 | 40 | 1.0740 | 0.3 | | 0.7663 | 5.0 | 50 | 1.0183 | 0.5 | | 0.6042 | 6.0 | 60 | 1.0259 | 0.5 | | 0.4482 | 7.0 | 70 | 0.8699 | 0.7 | | 0.3072 | 8.0 | 80 | 1.0615 | 0.5 | | 0.1458 | 9.0 | 90 | 1.0164 | 0.5 | | 0.0838 | 10.0 | 100 | 1.0620 | 0.5 | | 0.055 | 11.0 | 110 | 1.1829 | 0.5 | | 0.0347 | 12.0 | 120 | 1.2815 | 0.4 | | 0.0244 | 13.0 | 130 | 1.2607 | 0.6 | | 0.0213 | 14.0 | 140 | 1.3695 | 0.5 | | 0.0169 | 15.0 | 150 | 1.4397 | 0.5 | | 0.0141 | 16.0 | 160 | 1.4388 | 0.6 | | 0.0122 | 17.0 | 170 | 1.4242 | 0.6 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-4
SetFit
2022-02-10T07:53:38Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-16-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-16-4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0903 - Accuracy: 0.4805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0974 | 1.0 | 10 | 1.1139 | 0.1 | | 1.0637 | 2.0 | 20 | 1.0988 | 0.1 | | 0.9758 | 3.0 | 30 | 1.1013 | 0.1 | | 0.9012 | 4.0 | 40 | 1.0769 | 0.3 | | 0.6993 | 5.0 | 50 | 1.0484 | 0.6 | | 0.5676 | 6.0 | 60 | 1.0223 | 0.6 | | 0.4069 | 7.0 | 70 | 0.9190 | 0.6 | | 0.3192 | 8.0 | 80 | 1.1370 | 0.6 | | 0.1112 | 9.0 | 90 | 1.1728 | 0.6 | | 0.07 | 10.0 | 100 | 1.1998 | 0.6 | | 0.0397 | 11.0 | 110 | 1.3700 | 0.6 | | 0.027 | 12.0 | 120 | 1.3329 | 0.6 | | 0.021 | 13.0 | 130 | 1.2697 | 0.6 | | 0.0177 | 14.0 | 140 | 1.4195 | 0.6 | | 0.0142 | 15.0 | 150 | 1.5342 | 0.6 | | 0.0118 | 16.0 | 160 | 1.5999 | 0.6 | | 0.0108 | 17.0 | 170 | 1.6327 | 0.6 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-3
SetFit
2022-02-10T07:52:27Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-16-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-16-3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0675 - Accuracy: 0.44 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0951 | 1.0 | 10 | 1.1346 | 0.1 | | 1.0424 | 2.0 | 20 | 1.1120 | 0.2 | | 0.957 | 3.0 | 30 | 1.1002 | 0.3 | | 0.7889 | 4.0 | 40 | 1.0838 | 0.4 | | 0.6162 | 5.0 | 50 | 1.0935 | 0.5 | | 0.4849 | 6.0 | 60 | 1.0867 | 0.5 | | 0.3089 | 7.0 | 70 | 1.1145 | 0.5 | | 0.2145 | 8.0 | 80 | 1.1278 | 0.6 | | 0.0805 | 9.0 | 90 | 1.2801 | 0.6 | | 0.0497 | 10.0 | 100 | 1.3296 | 0.6 | | 0.0328 | 11.0 | 110 | 1.2913 | 0.6 | | 0.0229 | 12.0 | 120 | 1.3692 | 0.6 | | 0.0186 | 13.0 | 130 | 1.4642 | 0.6 | | 0.0161 | 14.0 | 140 | 1.5568 | 0.6 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-1
SetFit
2022-02-10T07:50:12Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-16-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-16-1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0424 - Accuracy: 0.5355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0989 | 1.0 | 10 | 1.1049 | 0.1 | | 1.0641 | 2.0 | 20 | 1.0768 | 0.3 | | 0.9742 | 3.0 | 30 | 1.0430 | 0.4 | | 0.8765 | 4.0 | 40 | 1.0058 | 0.4 | | 0.6979 | 5.0 | 50 | 0.8488 | 0.7 | | 0.563 | 6.0 | 60 | 0.7221 | 0.7 | | 0.4135 | 7.0 | 70 | 0.6587 | 0.8 | | 0.2509 | 8.0 | 80 | 0.5577 | 0.7 | | 0.0943 | 9.0 | 90 | 0.5840 | 0.7 | | 0.0541 | 10.0 | 100 | 0.6959 | 0.7 | | 0.0362 | 11.0 | 110 | 0.6884 | 0.6 | | 0.0254 | 12.0 | 120 | 0.9263 | 0.6 | | 0.0184 | 13.0 | 130 | 0.7992 | 0.6 | | 0.0172 | 14.0 | 140 | 0.7351 | 0.6 | | 0.0131 | 15.0 | 150 | 0.7664 | 0.6 | | 0.0117 | 16.0 | 160 | 0.8262 | 0.6 | | 0.0101 | 17.0 | 170 | 0.8839 | 0.6 | | 0.0089 | 18.0 | 180 | 0.9018 | 0.6 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
akshaychaudhary/distilbert-base-uncased-finetuned-hypertuned-ner
akshaychaudhary
2022-02-10T07:47:51Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-hypertuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-hypertuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5683 - Precision: 0.3398 - Recall: 0.6481 - F1: 0.4459 - Accuracy: 0.8762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 84 | 0.3566 | 0.2913 | 0.5556 | 0.3822 | 0.8585 | | No log | 2.0 | 168 | 0.4698 | 0.3366 | 0.6296 | 0.4387 | 0.8730 | | No log | 3.0 | 252 | 0.5683 | 0.3398 | 0.6481 | 0.4459 | 0.8762 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
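A usage sketch for this token-classification checkpoint (the entity label set depends on the unstated training data, so treat the output labels as illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="akshaychaudhary/distilbert-base-uncased-finetuned-hypertuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for entity in ner("Deploy the updated service to the Frankfurt cluster tomorrow."):
    print(entity["entity_group"], entity["word"], float(entity["score"]))
```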
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-8
SetFit
2022-02-10T07:46:54Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-8-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-8-8 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0005 - Accuracy: 0.518 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1029 | 1.0 | 5 | 1.1295 | 0.0 | | 1.0472 | 2.0 | 10 | 1.1531 | 0.0 | | 1.054 | 3.0 | 15 | 1.1475 | 0.0 | | 0.9366 | 4.0 | 20 | 1.1515 | 0.0 | | 0.8698 | 5.0 | 25 | 1.1236 | 0.4 | | 0.8148 | 6.0 | 30 | 1.0716 | 0.6 | | 0.6884 | 7.0 | 35 | 1.0662 | 0.6 | | 0.5641 | 8.0 | 40 | 1.0671 | 0.6 | | 0.5 | 9.0 | 45 | 1.0282 | 0.6 | | 0.3882 | 10.0 | 50 | 1.0500 | 0.6 | | 0.3522 | 11.0 | 55 | 1.1381 | 0.6 | | 0.2492 | 12.0 | 60 | 1.1278 | 0.6 | | 0.2063 | 13.0 | 65 | 1.0731 | 0.6 | | 0.1608 | 14.0 | 70 | 1.1339 | 0.6 | | 0.1448 | 15.0 | 75 | 1.1892 | 0.6 | | 0.0925 | 16.0 | 80 | 1.1840 | 0.6 | | 0.0768 | 17.0 | 85 | 1.0608 | 0.6 | | 0.0585 | 18.0 | 90 | 1.1073 | 0.6 | | 0.0592 | 19.0 | 95 | 1.3134 | 0.6 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-7
SetFit
2022-02-10T07:45:58Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-8-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-8-7 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1206 - Accuracy: 0.0555 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1186 | 1.0 | 5 | 1.1631 | 0.0 | | 1.058 | 2.0 | 10 | 1.1986 | 0.0 | | 1.081 | 3.0 | 15 | 1.2111 | 0.0 | | 1.0118 | 4.0 | 20 | 1.2373 | 0.0 | | 0.9404 | 5.0 | 25 | 1.2645 | 0.0 | | 0.9146 | 6.0 | 30 | 1.3258 | 0.0 | | 0.8285 | 7.0 | 35 | 1.3789 | 0.0 | | 0.6422 | 8.0 | 40 | 1.3783 | 0.0 | | 0.6156 | 9.0 | 45 | 1.3691 | 0.0 | | 0.5321 | 10.0 | 50 | 1.3693 | 0.0 | | 0.4504 | 11.0 | 55 | 1.4000 | 0.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-6
SetFit
2022-02-10T07:45:05Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-8-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-8-6 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1275 - Accuracy: 0.3795 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.11 | 1.0 | 5 | 1.1184 | 0.0 | | 1.0608 | 2.0 | 10 | 1.1227 | 0.0 | | 1.0484 | 3.0 | 15 | 1.1009 | 0.2 | | 0.9614 | 4.0 | 20 | 1.1009 | 0.2 | | 0.8545 | 5.0 | 25 | 1.0772 | 0.2 | | 0.8241 | 6.0 | 30 | 1.0457 | 0.2 | | 0.708 | 7.0 | 35 | 1.0301 | 0.4 | | 0.5045 | 8.0 | 40 | 1.0325 | 0.4 | | 0.4175 | 9.0 | 45 | 1.0051 | 0.4 | | 0.3446 | 10.0 | 50 | 0.9610 | 0.4 | | 0.2851 | 11.0 | 55 | 0.9954 | 0.4 | | 0.1808 | 12.0 | 60 | 1.0561 | 0.4 | | 0.1435 | 13.0 | 65 | 1.0218 | 0.4 | | 0.1019 | 14.0 | 70 | 1.0254 | 0.4 | | 0.0908 | 15.0 | 75 | 0.9935 | 0.4 | | 0.0591 | 16.0 | 80 | 1.0090 | 0.4 | | 0.0512 | 17.0 | 85 | 1.0884 | 0.4 | | 0.0397 | 18.0 | 90 | 1.2732 | 0.4 | | 0.039 | 19.0 | 95 | 1.2979 | 0.6 | | 0.0325 | 20.0 | 100 | 1.2705 | 0.4 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-3
SetFit
2022-02-10T07:42:05Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__hate_speech_offensive__train-8-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-8-3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9681 - Accuracy: 0.549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1073 | 1.0 | 5 | 1.1393 | 0.0 | | 1.0392 | 2.0 | 10 | 1.1729 | 0.0 | | 1.0302 | 3.0 | 15 | 1.1694 | 0.2 | | 0.9176 | 4.0 | 20 | 1.1846 | 0.2 | | 0.8339 | 5.0 | 25 | 1.1663 | 0.2 | | 0.7533 | 6.0 | 30 | 1.1513 | 0.4 | | 0.6327 | 7.0 | 35 | 1.1474 | 0.4 | | 0.4402 | 8.0 | 40 | 1.1385 | 0.4 | | 0.3752 | 9.0 | 45 | 1.0965 | 0.2 | | 0.3448 | 10.0 | 50 | 1.0357 | 0.2 | | 0.2582 | 11.0 | 55 | 1.0438 | 0.2 | | 0.1903 | 12.0 | 60 | 1.0561 | 0.2 | | 0.1479 | 13.0 | 65 | 1.0569 | 0.2 | | 0.1129 | 14.0 | 70 | 1.0455 | 0.2 | | 0.1071 | 15.0 | 75 | 1.0416 | 0.4 | | 0.0672 | 16.0 | 80 | 1.1164 | 0.4 | | 0.0561 | 17.0 | 85 | 1.1846 | 0.6 | | 0.0463 | 18.0 | 90 | 1.2040 | 0.6 | | 0.0431 | 19.0 | 95 | 1.2078 | 0.6 | | 0.0314 | 20.0 | 100 | 1.2368 | 0.6 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__sst2__train-32-9
SetFit
2022-02-10T07:36:28Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-32-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-32-9 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5625 - Accuracy: 0.7353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7057 | 1.0 | 13 | 0.6805 | 0.5385 | | 0.6642 | 2.0 | 26 | 0.6526 | 0.7692 | | 0.5869 | 3.0 | 39 | 0.5773 | 0.8462 | | 0.4085 | 4.0 | 52 | 0.4959 | 0.8462 | | 0.2181 | 5.0 | 65 | 0.4902 | 0.6923 | | 0.069 | 6.0 | 78 | 0.5065 | 0.8462 | | 0.0522 | 7.0 | 91 | 0.6082 | 0.7692 | | 0.0135 | 8.0 | 104 | 0.6924 | 0.7692 | | 0.0084 | 9.0 | 117 | 0.5921 | 0.7692 | | 0.0061 | 10.0 | 130 | 0.6477 | 0.7692 | | 0.0047 | 11.0 | 143 | 0.6648 | 0.7692 | | 0.0035 | 12.0 | 156 | 0.6640 | 0.7692 | | 0.0031 | 13.0 | 169 | 0.6615 | 0.7692 | | 0.0029 | 14.0 | 182 | 0.6605 | 0.7692 | | 0.0026 | 15.0 | 195 | 0.6538 | 0.8462 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
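For the SST-2 variants, here is a sketch of scoring a sentence without the pipeline wrapper; the `id2label` mapping of these auto-generated checkpoints is typically just `LABEL_0`/`LABEL_1` (negative/positive):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "SetFit/distilbert-base-uncased__sst2__train-32-9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A charming and quietly moving little film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to class probabilities and report the top class
probs = torch.softmax(logits, dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))
```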
SetFit/distilbert-base-uncased__sst2__train-32-3
SetFit
2022-02-10T07:31:06Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-32-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-32-3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5694 - Accuracy: 0.7073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7118 | 1.0 | 13 | 0.6844 | 0.5385 | | 0.6587 | 2.0 | 26 | 0.6707 | 0.6154 | | 0.6067 | 3.0 | 39 | 0.6295 | 0.5385 | | 0.4714 | 4.0 | 52 | 0.5811 | 0.6923 | | 0.2444 | 5.0 | 65 | 0.5932 | 0.7692 | | 0.1007 | 6.0 | 78 | 0.7386 | 0.6923 | | 0.0332 | 7.0 | 91 | 0.6962 | 0.6154 | | 0.0147 | 8.0 | 104 | 0.8200 | 0.7692 | | 0.0083 | 9.0 | 117 | 0.9250 | 0.7692 | | 0.0066 | 10.0 | 130 | 0.9345 | 0.7692 | | 0.005 | 11.0 | 143 | 0.9313 | 0.7692 | | 0.0036 | 12.0 | 156 | 0.9356 | 0.7692 | | 0.0031 | 13.0 | 169 | 0.9395 | 0.7692 | | 0.0029 | 14.0 | 182 | 0.9504 | 0.7692 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__sst2__train-32-1
SetFit
2022-02-10T07:29:19Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-32-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-32-1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6492 - Accuracy: 0.6551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7106 | 1.0 | 13 | 0.6850 | 0.6154 | | 0.631 | 2.0 | 26 | 0.6632 | 0.6923 | | 0.5643 | 3.0 | 39 | 0.6247 | 0.7692 | | 0.3992 | 4.0 | 52 | 0.5948 | 0.7692 | | 0.1928 | 5.0 | 65 | 0.5803 | 0.7692 | | 0.0821 | 6.0 | 78 | 0.6404 | 0.6923 | | 0.0294 | 7.0 | 91 | 0.7387 | 0.6923 | | 0.0141 | 8.0 | 104 | 0.8270 | 0.6923 | | 0.0082 | 9.0 | 117 | 0.8496 | 0.6923 | | 0.0064 | 10.0 | 130 | 0.8679 | 0.6923 | | 0.005 | 11.0 | 143 | 0.8914 | 0.6923 | | 0.0036 | 12.0 | 156 | 0.9278 | 0.6923 | | 0.0031 | 13.0 | 169 | 0.9552 | 0.6923 | | 0.0029 | 14.0 | 182 | 0.9745 | 0.6923 | | 0.0028 | 15.0 | 195 | 0.9785 | 0.6923 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__sst2__train-16-7
SetFit
2022-02-10T07:25:33Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-16-7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-16-7 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6952 - Accuracy: 0.5025 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6949 | 1.0 | 7 | 0.7252 | 0.2857 | | 0.6678 | 2.0 | 14 | 0.7550 | 0.2857 | | 0.6299 | 3.0 | 21 | 0.8004 | 0.2857 | | 0.5596 | 4.0 | 28 | 0.8508 | 0.2857 | | 0.5667 | 5.0 | 35 | 0.8464 | 0.2857 | | 0.367 | 6.0 | 42 | 0.8515 | 0.2857 | | 0.2706 | 7.0 | 49 | 0.9574 | 0.2857 | | 0.2163 | 8.0 | 56 | 0.9710 | 0.4286 | | 0.1024 | 9.0 | 63 | 1.1607 | 0.1429 | | 0.1046 | 10.0 | 70 | 1.3779 | 0.1429 | | 0.0483 | 11.0 | 77 | 1.4876 | 0.1429 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__sst2__train-16-6
SetFit
2022-02-10T07:24:39Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-16-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-16-6 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8356 - Accuracy: 0.6480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6978 | 1.0 | 7 | 0.6807 | 0.4286 | | 0.6482 | 2.0 | 14 | 0.6775 | 0.4286 | | 0.6051 | 3.0 | 21 | 0.6623 | 0.5714 | | 0.486 | 4.0 | 28 | 0.6710 | 0.5714 | | 0.4612 | 5.0 | 35 | 0.5325 | 0.7143 | | 0.2233 | 6.0 | 42 | 0.4992 | 0.7143 | | 0.1328 | 7.0 | 49 | 0.4753 | 0.7143 | | 0.0905 | 8.0 | 56 | 0.2416 | 1.0 | | 0.0413 | 9.0 | 63 | 0.2079 | 1.0 | | 0.0356 | 10.0 | 70 | 0.2234 | 0.8571 | | 0.0217 | 11.0 | 77 | 0.2639 | 0.8571 | | 0.0121 | 12.0 | 84 | 0.2977 | 0.8571 | | 0.0105 | 13.0 | 91 | 0.3468 | 0.8571 | | 0.0085 | 14.0 | 98 | 0.3912 | 0.8571 | | 0.0077 | 15.0 | 105 | 0.4000 | 0.8571 | | 0.0071 | 16.0 | 112 | 0.4015 | 0.8571 | | 0.0078 | 17.0 | 119 | 0.3865 | 0.8571 | | 0.0059 | 18.0 | 126 | 0.3603 | 0.8571 | | 0.0051 | 19.0 | 133 | 0.3231 | 0.8571 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__sst2__train-16-5
SetFit
2022-02-10T07:23:42Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-16-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-16-5 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6537 - Accuracy: 0.6332 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6925 | 1.0 | 7 | 0.6966 | 0.2857 | | 0.6703 | 2.0 | 14 | 0.7045 | 0.2857 | | 0.6404 | 3.0 | 21 | 0.7205 | 0.2857 | | 0.555 | 4.0 | 28 | 0.7548 | 0.2857 | | 0.5179 | 5.0 | 35 | 0.6745 | 0.5714 | | 0.3038 | 6.0 | 42 | 0.7260 | 0.5714 | | 0.2089 | 7.0 | 49 | 0.8016 | 0.5714 | | 0.1303 | 8.0 | 56 | 0.8202 | 0.5714 | | 0.0899 | 9.0 | 63 | 0.9966 | 0.5714 | | 0.0552 | 10.0 | 70 | 1.1887 | 0.5714 | | 0.0333 | 11.0 | 77 | 1.2163 | 0.5714 | | 0.0169 | 12.0 | 84 | 1.2874 | 0.5714 | | 0.0136 | 13.0 | 91 | 1.3598 | 0.5714 | | 0.0103 | 14.0 | 98 | 1.4237 | 0.5714 | | 0.0089 | 15.0 | 105 | 1.4758 | 0.5714 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
SetFit/distilbert-base-uncased__sst2__train-16-4
SetFit
2022-02-10T07:22:51Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased__sst2__train-16-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-16-4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1501 - Accuracy: 0.6387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7043 | 1.0 | 7 | 0.7139 | 0.2857 | | 0.68 | 2.0 | 14 | 0.7398 | 0.2857 | | 0.641 | 3.0 | 21 | 0.7723 | 0.2857 | | 0.5424 | 4.0 | 28 | 0.8391 | 0.2857 | | 0.5988 | 5.0 | 35 | 0.7761 | 0.2857 | | 0.3698 | 6.0 | 42 | 0.7707 | 0.4286 | | 0.3204 | 7.0 | 49 | 0.8290 | 0.4286 | | 0.2882 | 8.0 | 56 | 0.6551 | 0.5714 | | 0.1512 | 9.0 | 63 | 0.5652 | 0.5714 | | 0.1302 | 10.0 | 70 | 0.5278 | 0.5714 | | 0.1043 | 11.0 | 77 | 0.4987 | 0.7143 | | 0.0272 | 12.0 | 84 | 0.5278 | 0.5714 | | 0.0201 | 13.0 | 91 | 0.5307 | 0.5714 | | 0.0129 | 14.0 | 98 | 0.5382 | 0.5714 | | 0.0117 | 15.0 | 105 | 0.5227 | 0.5714 | | 0.0094 | 16.0 | 112 | 0.5066 | 0.7143 | | 0.0104 | 17.0 | 119 | 0.4869 | 0.7143 | | 0.0069 | 18.0 | 126 | 0.4786 | 0.7143 | | 0.0062 | 19.0 | 133 | 0.4707 | 0.7143 | | 0.0065 | 20.0 | 140 | 0.4669 | 0.7143 | | 0.0051 | 21.0 | 147 | 0.4686 | 0.7143 | | 0.0049 | 22.0 | 154 | 0.4784 | 0.7143 | | 0.0046 | 23.0 | 161 | 0.4839 | 0.7143 | | 0.0039 | 24.0 | 168 | 0.4823 | 0.7143 | | 0.0044 | 25.0 | 175 | 0.4791 | 0.7143 | | 0.0037 | 26.0 | 182 | 0.4778 | 0.7143 | | 0.0038 | 27.0 | 189 | 0.4770 | 0.7143 | | 0.0036 | 28.0 | 196 | 0.4750 | 0.7143 | | 0.0031 | 29.0 | 203 | 0.4766 | 0.7143 | | 0.0031 | 30.0 | 210 | 0.4754 | 0.7143 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
csikasote/wav2vec2-large-xls-r-300m-bemba-fds
csikasote
2022-02-10T07:21:29Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "bem", "robust-speech-event", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer - bem - robust-speech-event model-index: - name: wav2vec2-large-xls-r-300m-bemba-fds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-bemba-fds This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [BembaSpeech](https://github.com/csikasote/BembaSpeech) dataset. It achieves the following results on the evaluation set: - Loss: 0.3594 - Wer: 0.3838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9961 | 0.67 | 500 | 0.5157 | 0.7133 | | 0.5903 | 1.34 | 1000 | 0.3663 | 0.4989 | | 0.4804 | 2.02 | 1500 | 0.3547 | 0.4653 | | 0.4146 | 2.69 | 2000 | 0.3274 | 0.4345 | | 0.3792 | 3.36 | 2500 | 0.3586 | 0.4640 | | 0.3509 | 4.03 | 3000 | 0.3360 | 0.4316 | | 0.3114 | 4.7 | 3500 | 0.3382 | 0.4303 | | 0.2935 | 5.38 | 4000 | 0.3263 | 0.4091 | | 0.2723 | 6.05 | 4500 | 0.3348 | 0.4175 | | 0.2502 | 6.72 | 5000 | 0.3317 | 0.4147 | | 0.2334 | 7.39 | 5500 | 0.3542 | 0.4030 | | 0.2287 | 8.06 | 6000 | 0.3594 | 0.4067 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
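A usage sketch for this checkpoint (greedy CTC decoding without a language model; the audio file name is a placeholder and the recording is resampled to the 16 kHz rate the model expects):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "csikasote/wav2vec2-large-xls-r-300m-bemba-fds"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a recording and resample it to 16 kHz
speech, sample_rate = torchaudio.load("bemba_sample.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy decoding: pick the most likely token at each frame and collapse repeats
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```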
SetFit/distilbert-base-uncased__sst2__train-16-1
SetFit
2022-02-10T07:19:37Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased__sst2__train-16-1

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6012
- Accuracy: 0.6766

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6983 | 1.0 | 7 | 0.7036 | 0.2857 |
| 0.6836 | 2.0 | 14 | 0.7181 | 0.2857 |
| 0.645 | 3.0 | 21 | 0.7381 | 0.2857 |
| 0.5902 | 4.0 | 28 | 0.7746 | 0.2857 |
| 0.5799 | 5.0 | 35 | 0.7242 | 0.5714 |
| 0.3584 | 6.0 | 42 | 0.6935 | 0.5714 |
| 0.2596 | 7.0 | 49 | 0.7041 | 0.5714 |
| 0.1815 | 8.0 | 56 | 0.5930 | 0.7143 |
| 0.0827 | 9.0 | 63 | 0.6976 | 0.7143 |
| 0.0613 | 10.0 | 70 | 0.7346 | 0.7143 |
| 0.0356 | 11.0 | 77 | 0.6992 | 0.5714 |
| 0.0158 | 12.0 | 84 | 0.7328 | 0.5714 |
| 0.013 | 13.0 | 91 | 0.7819 | 0.5714 |
| 0.0103 | 14.0 | 98 | 0.8589 | 0.5714 |
| 0.0087 | 15.0 | 105 | 0.9177 | 0.5714 |
| 0.0076 | 16.0 | 112 | 0.9519 | 0.5714 |
| 0.0078 | 17.0 | 119 | 0.9556 | 0.5714 |
| 0.006 | 18.0 | 126 | 0.9542 | 0.5714 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
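For context, the sketch below shows how the hyperparameters listed above would map onto `transformers` `TrainingArguments`. This is not the script used to produce this checkpoint; the output directory is a placeholder, and the dataset/Trainer wiring is omitted because the training data is not documented in this card.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters reported in the card.
# The Adam betas/epsilon listed above are the library defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased__sst2__train-16-1",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed precision
)
```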
SetFit/distilbert-base-uncased__sst2__train-16-0
SetFit
2022-02-10T07:18:41Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased__sst2__train-16-0

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6903
- Accuracy: 0.5091

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6934 | 1.0 | 7 | 0.7142 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7379 | 0.2857 |
| 0.6282 | 3.0 | 21 | 0.7769 | 0.2857 |
| 0.5193 | 4.0 | 28 | 0.8799 | 0.2857 |
| 0.5104 | 5.0 | 35 | 0.8380 | 0.4286 |
| 0.2504 | 6.0 | 42 | 0.8622 | 0.4286 |
| 0.1794 | 7.0 | 49 | 0.9227 | 0.4286 |
| 0.1156 | 8.0 | 56 | 0.8479 | 0.4286 |
| 0.0709 | 9.0 | 63 | 1.0929 | 0.2857 |
| 0.0471 | 10.0 | 70 | 1.2189 | 0.2857 |
| 0.0288 | 11.0 | 77 | 1.2026 | 0.4286 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
SetFit/distilbert-base-uncased__sst2__train-8-6
SetFit
2022-02-10T07:14:49Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased__sst2__train-8-6

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5336
- Accuracy: 0.7523

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7161 | 1.0 | 3 | 0.6941 | 0.5 |
| 0.6786 | 2.0 | 6 | 0.7039 | 0.25 |
| 0.6586 | 3.0 | 9 | 0.7090 | 0.25 |
| 0.6121 | 4.0 | 12 | 0.7183 | 0.25 |
| 0.5696 | 5.0 | 15 | 0.7266 | 0.25 |
| 0.522 | 6.0 | 18 | 0.7305 | 0.25 |
| 0.4899 | 7.0 | 21 | 0.7339 | 0.25 |
| 0.3985 | 8.0 | 24 | 0.7429 | 0.25 |
| 0.3758 | 9.0 | 27 | 0.7224 | 0.25 |
| 0.2876 | 10.0 | 30 | 0.7068 | 0.5 |
| 0.2498 | 11.0 | 33 | 0.6751 | 0.75 |
| 0.1921 | 12.0 | 36 | 0.6487 | 0.75 |
| 0.1491 | 13.0 | 39 | 0.6261 | 0.75 |
| 0.1276 | 14.0 | 42 | 0.6102 | 0.75 |
| 0.0996 | 15.0 | 45 | 0.5964 | 0.75 |
| 0.073 | 16.0 | 48 | 0.6019 | 0.75 |
| 0.0627 | 17.0 | 51 | 0.5933 | 0.75 |
| 0.053 | 18.0 | 54 | 0.5768 | 0.75 |
| 0.0403 | 19.0 | 57 | 0.5698 | 0.75 |
| 0.0328 | 20.0 | 60 | 0.5656 | 0.75 |
| 0.03 | 21.0 | 63 | 0.5634 | 0.75 |
| 0.025 | 22.0 | 66 | 0.5620 | 0.75 |
| 0.0209 | 23.0 | 69 | 0.5623 | 0.75 |
| 0.0214 | 24.0 | 72 | 0.5606 | 0.75 |
| 0.0191 | 25.0 | 75 | 0.5565 | 0.75 |
| 0.0173 | 26.0 | 78 | 0.5485 | 0.75 |
| 0.0175 | 27.0 | 81 | 0.5397 | 0.75 |
| 0.0132 | 28.0 | 84 | 0.5322 | 0.75 |
| 0.0138 | 29.0 | 87 | 0.5241 | 0.75 |
| 0.0128 | 30.0 | 90 | 0.5235 | 0.75 |
| 0.0126 | 31.0 | 93 | 0.5253 | 0.75 |
| 0.012 | 32.0 | 96 | 0.5317 | 0.75 |
| 0.0118 | 33.0 | 99 | 0.5342 | 0.75 |
| 0.0092 | 34.0 | 102 | 0.5388 | 0.75 |
| 0.0117 | 35.0 | 105 | 0.5414 | 0.75 |
| 0.0124 | 36.0 | 108 | 0.5453 | 0.75 |
| 0.0109 | 37.0 | 111 | 0.5506 | 0.75 |
| 0.0112 | 38.0 | 114 | 0.5555 | 0.75 |
| 0.0087 | 39.0 | 117 | 0.5597 | 0.75 |
| 0.01 | 40.0 | 120 | 0.5640 | 0.75 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
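A minimal inference sketch without the pipeline helper is shown below, using the modelId recorded above. The example sentence is illustrative only, and the printed label names depend on the checkpoint's `id2label` config (defaulting to `LABEL_0`/`LABEL_1` if unset).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch: load the fine-tuned checkpoint and score one sentence.
model_id = "SetFit/distilbert-base-uncased__sst2__train-8-6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("an utterly forgettable sequel", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs, model.config.id2label)
```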