Dataset columns for the model-card rows below (types and observed value ranges):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-05 06:27:31 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (468 distinct values) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 distinct values) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-05 06:26:36 |
| card | string (length) | 11 | 1.01M |
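The schema above describes one row per model repository, with the full model card stored as a single string in the `card` column. A minimal sketch of inspecting such a dump with pandas follows; the Parquet file name is an assumption, not something specified here.

```python
import pandas as pd

# Hypothetical local export of the rows below; adjust the path/format as needed.
df = pd.read_parquet("model_cards.parquet")

# Columns follow the schema above.
print(df.dtypes)

# Example query: most-downloaded text-classification models in the dump.
top = (
    df[df["pipeline_tag"] == "text-classification"]
    .sort_values("downloads", ascending=False)
    .head(10)[["modelId", "downloads", "likes"]]
)
print(top)
```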
msavel-prnt/distilbert-base-uncased-finetuned-clinc
msavel-prnt
2022-01-05T15:37:05Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model_index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metric: name: Accuracy type: accuracy value: 0.9180645161290323 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7528 - Accuracy: 0.9181 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.3044 | 0.7623 | | 3.7959 | 2.0 | 636 | 1.8674 | 0.8597 | | 3.7959 | 3.0 | 954 | 1.1377 | 0.8948 | | 1.6819 | 4.0 | 1272 | 0.8351 | 0.9126 | | 0.8804 | 5.0 | 1590 | 0.7528 | 0.9181 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.9.0+cu102 - Datasets 1.9.0 - Tokenizers 0.10.3
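The card above reports strong accuracy on clinc_oos but includes no usage snippet. A minimal, illustrative sketch of loading the checkpoint through the `transformers` pipeline API is shown below; the example utterance is made up.

```python
from transformers import pipeline

# Load the fine-tuned intent classifier described in the card above.
classifier = pipeline(
    "text-classification",
    model="msavel-prnt/distilbert-base-uncased-finetuned-clinc",
)

# Hypothetical input; clinc_oos covers banking-style intents among others.
print(classifier("how do i reset the pin on my card?"))
```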
tadejmagajna/flair-sl-pos
tadejmagajna
2022-01-05T15:07:06Z
2
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "sl", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - flair - token-classification - sequence-tagger-model language: sl widget: - text: "Danes je lep dan." --- ## Slovene Part-of-speech (PoS) Tagging for Flair This is a Slovene part-of-speech (PoS) tagger trained on the [Slovenian UD Treebank](https://github.com/UniversalDependencies/UD_Slovenian-SSJ) using Flair NLP framework. The tagger is trained using a combination of forward Slovene contextual string embeddings, backward Slovene contextual string embeddings and classic Slovene FastText embeddings. F-score (micro): **94,96** The model is trained on a large (500+) number of different tags that described at [https://universaldependencies.org/tagset-conversion/sl-multext-uposf.html](https://universaldependencies.org/tagset-conversion/sl-multext-uposf.html). Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("tadejmagajna/flair-sl-pos") # make example sentence sentence = Sentence("Danes je lep dan.") # predict PoS tags tagger.predict(sentence) # print sentence print(sentence) # print predicted PoS spans print('The following PoS tags are found:') # iterate over parts of speech and print for tag in sentence.get_spans('pos'): print(tag) ``` This prints out the following output: ``` Sentence: "Danes je lep dan ." [− Tokens: 5 − Token-Labels: "Danes <Rgp> je <Va-r3s-n> lep <Agpmsnn> dan <Ncmsn> . <Z>"] The following PoS tags are found: Span [1]: "Danes" [− Labels: Rgp (1.0)] Span [2]: "je" [− Labels: Va-r3s-n (1.0)] Span [3]: "lep" [− Labels: Agpmsnn (0.9999)] Span [4]: "dan" [− Labels: Ncmsn (1.0)] Span [5]: "." [− Labels: Z (1.0)] ``` --- ### Training: Script to train this model The following standard Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import UD_SLOVENIAN from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = UD_SLOVENIAN() # 2. what tag do we want to predict? tag_type = 'pos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize embeddings embedding_types = [ WordEmbeddings('sl'), FlairEmbeddings('sl-forward'), FlairEmbeddings('sl-backward'), ] embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger: SequenceTagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer: ModelTrainer = ModelTrainer(tagger, corpus) # 7. start training trainer.train('resources/taggers/pos-slovene', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
Icelandic-lt/electra-small-igc-is
Icelandic-lt
2022-01-05T14:56:02Z
5
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "is", "dataset:igc", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-05-27T11:38:14Z
--- language: - is license: cc-by-4.0 datasets: - igc --- # Icelandic ELECTRA-Small This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105. # Acknowledgments This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
jonfd/electra-small-igc-is
jonfd
2022-01-05T14:56:02Z
47
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "is", "dataset:igc", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - is license: cc-by-4.0 datasets: - igc --- # Icelandic ELECTRA-Small This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105. # Acknowledgments This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
jonfd/electra-base-igc-is
jonfd
2022-01-05T14:54:23Z
10
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "is", "dataset:igc", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - is license: cc-by-4.0 datasets: - igc --- # Icelandic ELECTRA-Base This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105. # Acknowledgments This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
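None of the three Icelandic ELECTRA cards above include example code. Since the checkpoints are pretraining-only (no task head), a minimal sketch of extracting contextual embeddings with `transformers` is shown below, using the base-size model; the Icelandic sentence is just a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "jonfd/electra-base-igc-is"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # loads the ELECTRA encoder without a task head

inputs = tokenizer("Þetta er dæmi um íslenska setningu.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token: (batch, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```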
kurone/cp_tags_prediction
kurone
2022-01-05T13:32:49Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
This model can predict which categories a specific competitive programming problem falls into.
huggingtweets/sporeball
huggingtweets
2022-01-05T08:02:01Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/sporeball/1641369716297/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1365405536401776642/Z17NbuYy_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">lux</div> <div style="text-align: center; font-size: 14px;">@sporeball</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from lux. | Data | lux | | --- | --- | | Tweets downloaded | 1150 | | Retweets | 171 | | Short tweets | 120 | | Tweets kept | 859 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2w9y6gn1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sporeball's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tg3n5a5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tg3n5a5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sporeball') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Prasadi/wav2vec2-base-timit-demo-colab-1
Prasadi
2022-01-05T06:18:01Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab-1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3857 - Wer: 0.3874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4285 | 2.01 | 500 | 1.4732 | 0.9905 | | 0.7457 | 4.02 | 1000 | 0.5278 | 0.4960 | | 0.3463 | 6.02 | 1500 | 0.4245 | 0.4155 | | 0.2034 | 8.03 | 2000 | 0.3857 | 0.3874 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
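The wav2vec2 card above lists hyperparameters and WER but no inference example. A hedged sketch using the ASR pipeline follows; the audio path is a placeholder and the file is assumed to be 16 kHz mono speech.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Prasadi/wav2vec2-base-timit-demo-colab-1",
)

# Placeholder path; decoding audio files requires ffmpeg to be installed.
print(asr("sample.wav"))
```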
rdpatilds/con-nlu
rdpatilds
2022-01-05T05:31:42Z
5
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: con-nlu results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # con-nlu This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
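The con-nlu card above was exported from a Keras callback with no optimizer or results recorded, so it is unclear how far training progressed. For completeness, a tentative sketch of loading the TensorFlow checkpoint is shown below; the example sentence, and the assumption that the saved classification head is usable, are both hypothetical.

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "rdpatilds/con-nlu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical utterance; label semantics are not documented in the card.
inputs = tokenizer("book a table for two tonight", return_tensors="tf")
logits = model(**inputs).logits
print(logits)
```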
abdelkader/distilbert-base-uncased-finetuned-emotion
abdelkader
2022-01-04T23:18:05Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9215604730468001 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2162 - Accuracy: 0.9215 - F1: 0.9216 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8007 | 1.0 | 250 | 0.3082 | 0.907 | 0.9045 | | 0.2438 | 2.0 | 500 | 0.2162 | 0.9215 | 0.9216 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
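As with the other auto-generated cards, the emotion classifier above ships without usage code. Below is a minimal sketch using explicit model and tokenizer classes; note that the label names printed depend on whether `id2label` was saved in the config, which the card does not state.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "abdelkader/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't wait to see you again!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

label_id = int(probs.argmax())
print(model.config.id2label[label_id], round(float(probs[label_id]), 4))
```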
mbateman/bert-finetuned-ner
mbateman
2022-01-04T20:30:26Z
6
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9333553828344634 - name: Recall type: recall value: 0.9498485358465163 - name: F1 type: f1 value: 0.9415297355909584 - name: Accuracy type: accuracy value: 0.9868281627126626 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0622 - Precision: 0.9334 - Recall: 0.9498 - F1: 0.9415 - Accuracy: 0.9868 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0881 | 1.0 | 1756 | 0.0683 | 0.9136 | 0.9322 | 0.9228 | 0.9826 | | 0.0383 | 2.0 | 3512 | 0.0641 | 0.9277 | 0.9456 | 0.9366 | 0.9854 | | 0.0229 | 3.0 | 5268 | 0.0622 | 0.9334 | 0.9498 | 0.9415 | 0.9868 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.1
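The NER card above reports CoNLL-2003 metrics but no inference example. A short, illustrative pipeline call is sketched below; `aggregation_strategy="simple"` merges subword predictions into whole entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mbateman/bert-finetuned-ner",
    aggregation_strategy="simple",  # group subword tokens into entities
)

print(ner("Hugging Face is based in New York City."))
```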
huawei-noah/JABER
huawei-noah
2022-01-04T20:19:57Z
1
0
null
[ "pytorch", "arxiv:2112.04329", "region:us" ]
null
2022-03-02T23:29:05Z
# Overview <p align="center"> <img src="https://avatars.githubusercontent.com/u/12619994?s=200&v=4" width="150"> </p> <!-- -------------------------------------------------------------------------------- --> JABER (Junior Arabic BERt) is a 12-layer Arabic pretrained Language Model. JABER obtained rank one on [ALUE leaderboard](https://www.alue.org/leaderboard) at `01/09/2021`. This model is **only compatible** with the code in [this github repo](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/JABER-PyTorch) (not supported by the [Transformers](https://github.com/huggingface/transformers) library) ## Citation Please cite the following [paper](https://arxiv.org/abs/2112.04329) when using our code and model: ``` bibtex @misc{ghaddar2021jaber, title={JABER: Junior Arabic BERt}, author={Abbas Ghaddar and Yimeng Wu and Ahmad Rashid and Khalil Bibi and Mehdi Rezagholizadeh and Chao Xing and Yasheng Wang and Duan Xinyu and Zhefeng Wang and Baoxing Huai and Xin Jiang and Qun Liu and Philippe Langlais}, year={2021}, eprint={2112.04329}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ncoop57/athena
ncoop57
2022-01-04T19:24:11Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "longformer", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # ncoop57/athena This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ncoop57/athena') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ncoop57/athena) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 50 with parameters: ``` {'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: LongformerModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
huggingtweets/darth
huggingtweets
2022-01-04T19:21:54Z
106
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/darth/1641324110436/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1459410046169731073/gRiEO9Yd_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">darth™</div> <div style="text-align: center; font-size: 14px;">@darth</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from darth™. | Data | darth™ | | --- | --- | | Tweets downloaded | 3189 | | Retweets | 1278 | | Short tweets | 677 | | Tweets kept | 1234 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3t5g6hcx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @darth's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/c56rnej9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/c56rnej9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/darth') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Khanh/distilbert-base-multilingual-cased-finetuned-viquad
Khanh
2022-01-04T19:19:15Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-multilingual-cased-finetuned-viquad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-viquad This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 65 | 4.0975 | | No log | 2.0 | 130 | 3.9315 | | No log | 3.0 | 195 | 3.6742 | | No log | 4.0 | 260 | 3.4878 | | No log | 5.0 | 325 | 3.4241 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
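The ViQuAD card above gives only the validation loss. Since the pipeline tag is question-answering, a minimal usage sketch follows; the Vietnamese question/context pair is an invented example, not taken from the (unnamed) training data.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Khanh/distilbert-base-multilingual-cased-finetuned-viquad",
)

result = qa(
    question="Ai là tác giả của Truyện Kiều?",
    context="Truyện Kiều là một tác phẩm nổi tiếng của nhà thơ Nguyễn Du.",
)
print(result["answer"], result["score"])
```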
ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v2
ericRosello
2022-01-04T18:06:41Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2104 ## Model description Most base model weights were frozen leaving only to finetune the last layer (qa outputs) and 3 last layers of the encoder. ## Training and evaluation data Achieved EM: 73.519394512772, F1: 82.71779517079237 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.3937 | 1.0 | 5533 | 1.2915 | | 1.1522 | 2.0 | 11066 | 1.2227 | | 1.0055 | 3.0 | 16599 | 1.2104 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
yosemite/autonlp-imdb-sentiment-analysis-english-470512388
yosemite
2022-01-04T17:34:50Z
18
2
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:yosemite/autonlp-data-imdb-sentiment-analysis-english", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - yosemite/autonlp-data-imdb-sentiment-analysis-english co2_eq_emissions: 256.38650494338367 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 470512388 - CO2 Emissions (in grams): 256.38650494338367 ## Validation Metrics - Loss: 0.18712733685970306 - Accuracy: 0.9388 - Precision: 0.9300274402195218 - Recall: 0.949 - AUC: 0.98323192 - F1: 0.9394179370421698 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/yosemite/autonlp-imdb-sentiment-analysis-english-470512388 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("yosemite/autonlp-imdb-sentiment-analysis-english-470512388", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("yosemite/autonlp-imdb-sentiment-analysis-english-470512388", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
nvidia/megatron-bert-uncased-345m
nvidia
2022-01-04T15:16:39Z
0
7
null
[ "arxiv:1909.08053", "region:us" ]
null
2022-03-02T23:29:05Z
<!--- # ############################################################################################## # # Copyright (c) 2021-, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # ############################################################################################## --> [Megatron](https://arxiv.org/pdf/1909.08053.pdf) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model was trained from a bidirectional transformer in the style of BERT with text sourced from Wikipedia, RealNews, OpenWebText, and CC-Stories. This model contains 345 million parameters. It is made up of 24 layers, 16 attention heads with a hidden size of 1024. Find more information at [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM) # How to run Megatron BERT using Transformers ## Prerequisites In that guide, we run all the commands from a folder called `$MYDIR` and defined as (in `bash`): ``` export MYDIR=$HOME ``` Feel free to change the location at your convenience. To run some of the commands below, you'll have to clone `Transformers`. ``` git clone https://github.com/huggingface/transformers.git $MYDIR/transformers ``` ## Get the checkpoint from the NVIDIA GPU Cloud You must create a directory called `nvidia/megatron-bert-uncased-345m`. ``` mkdir -p $MYDIR/nvidia/megatron-bert-uncased-345m ``` You can download the checkpoint from the [NVIDIA GPU Cloud (NGC)](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m). For that you have to [sign up](https://ngc.nvidia.com/signup) for and setup the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1). Alternatively, you can directly download the checkpoint using: ``` wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O $MYDIR/nvidia/megatron-bert-uncased-345m/checkpoint.zip ``` ## Converting the checkpoint In order to be loaded into `Transformers`, the checkpoint has to be converted. You should run the following commands for that purpose. Those commands will create `config.json` and `pytorch_model.bin` in `$MYDIR/nvidia/megatron-bert-{cased,uncased}-345m`. You can move those files to different directories if needed. 
``` python3 $MYDIR/transformers/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py $MYDIR/nvidia/megatron-bert-uncased-345m/checkpoint.zip ``` As explained in [PR #14956](https://github.com/huggingface/transformers/pull/14956), if when running this conversion script and you're getting an exception: ``` ModuleNotFoundError: No module named 'megatron.model.enums' ``` you need to tell python where to find the clone of Megatron-LM, e.g.: ``` cd /tmp git clone https://github.com/NVIDIA/Megatron-LM PYTHONPATH=/tmp/Megatron-LM python src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py ... ``` Or, if you already have it cloned elsewhere, simply adjust the path to the existing path. If the training was done using a Megatron-LM fork, e.g. [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/) then you may need to have that one in your path, i.e., /path/to/Megatron-DeepSpeed. ## Masked LM The following code shows how to use the Megatron BERT checkpoint and the Transformers API to perform a `Masked LM` task. ``` import os import torch from transformers import BertTokenizer, MegatronBertForMaskedLM # The tokenizer. Megatron was trained with standard tokenizer(s). tokenizer = BertTokenizer.from_pretrained('nvidia/megatron-bert-uncased-345m') # The path to the config/checkpoint (see the conversion step above). directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-uncased-345m') # Load the model from $MYDIR/nvidia/megatron-bert-uncased-345m. model = MegatronBertForMaskedLM.from_pretrained(directory) # Copy to the device and use FP16. assert torch.cuda.is_available() device = torch.device("cuda") model.to(device) model.eval() model.half() # Create inputs (from the BERT example page). input = tokenizer("The capital of France is [MASK]", return_tensors="pt").to(device) label = tokenizer("The capital of France is Paris", return_tensors="pt")["input_ids"].to(device) # Run the model. with torch.no_grad(): output = model(**input, labels=label) print(output) ``` ## Next sentence prediction The following code shows how to use the Megatron BERT checkpoint and the Transformers API to perform next sentence prediction. ``` import os import torch from transformers import BertTokenizer, MegatronBertForNextSentencePrediction # The tokenizer. Megatron was trained with standard tokenizer(s). tokenizer = BertTokenizer.from_pretrained('nvidia/megatron-bert-uncased-345m') # The path to the config/checkpoint (see the conversion step above). directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-uncased-345m') # Load the model from $MYDIR/nvidia/megatron-bert-uncased-345m. model = MegatronBertForNextSentencePrediction.from_pretrained(directory) # Copy to the device and use FP16. assert torch.cuda.is_available() device = torch.device("cuda") model.to(device) model.eval() model.half() # Create inputs (from the BERT example page). input = tokenizer('In Italy, pizza served in formal settings is presented unsliced.', 'The sky is blue due to the shorter wavelength of blue light.', return_tensors='pt').to(device) label = torch.LongTensor([1]).to(device) # Run the model. with torch.no_grad(): output = model(**input, labels=label) print(output) ``` # Original code The original code for Megatron can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
nvidia/megatron-bert-cased-345m
nvidia
2022-01-04T15:15:44Z
0
4
null
[ "arxiv:1909.08053", "region:us" ]
null
2022-03-02T23:29:05Z
<!--- # ############################################################################################## # # Copyright (c) 2021-, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # ############################################################################################## --> [Megatron](https://arxiv.org/pdf/1909.08053.pdf) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model was trained from a bidirectional transformer in the style of BERT with text sourced from Wikipedia, RealNews, OpenWebText, and CC-Stories. This model contains 345 million parameters. It is made up of 24 layers, 16 attention heads with a hidden size of 1024. Find more information at [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM) # How to run Megatron BERT using Transformers ## Prerequisites In that guide, we run all the commands from a folder called `$MYDIR` and defined as (in `bash`): ``` export MYDIR=$HOME ``` Feel free to change the location at your convenience. To run some of the commands below, you'll have to clone `Transformers`. ``` git clone https://github.com/huggingface/transformers.git $MYDIR/transformers ``` ## Get the checkpoint from the NVIDIA GPU Cloud You must create a directory called `nvidia/megatron-bert-cased-345m`. ``` mkdir -p $MYDIR/nvidia/megatron-bert-cased-345m ``` You can download the checkpoint from the [NVIDIA GPU Cloud (NGC)](https://ngc.nvidia.com/catalog/models/nvidia:megatron_bert_345m). For that you have to [sign up](https://ngc.nvidia.com/signup) for and setup the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1). Alternatively, you can directly download the checkpoint using: ``` wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O $MYDIR/nvidia/megatron-bert-cased-345m/checkpoint.zip ``` ## Converting the checkpoint In order to be loaded into `Transformers`, the checkpoint has to be converted. You should run the following commands for that purpose. Those commands will create `config.json` and `pytorch_model.bin` in `$MYDIR/nvidia/megatron-bert-cased-345m`. You can move those files to different directories if needed. ``` python3 $MYDIR/transformers/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py $MYDIR/nvidia/megatron-bert-cased-345m/checkpoint.zip ``` As explained in [PR #14956](https://github.com/huggingface/transformers/pull/14956), if when running this conversion script and you're getting an exception: ``` ModuleNotFoundError: No module named 'megatron.model.enums' ``` you need to tell python where to find the clone of Megatron-LM, e.g.: ``` cd /tmp git clone https://github.com/NVIDIA/Megatron-LM PYTHONPATH=/tmp/Megatron-LM python src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py ... 
``` Or, if you already have it cloned elsewhere, simply adjust the path to the existing path. If the training was done using a Megatron-LM fork, e.g. [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/) then you may need to have that one in your path, i.e., /path/to/Megatron-DeepSpeed. ## Masked LM The following code shows how to use the Megatron BERT checkpoint and the Transformers API to perform a `Masked LM` task. ``` import os import torch from transformers import BertTokenizer, MegatronBertForMaskedLM # The tokenizer. Megatron was trained with standard tokenizer(s). tokenizer = BertTokenizer.from_pretrained('nvidia/megatron-bert-cased-345m') # The path to the config/checkpoint (see the conversion step above). directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-cased-345m') # Load the model from $MYDIR/nvidia/megatron-bert-cased-345m. model = MegatronBertForMaskedLM.from_pretrained(directory) # Copy to the device and use FP16. assert torch.cuda.is_available() device = torch.device("cuda") model.to(device) model.eval() model.half() # Create inputs (from the BERT example page). input = tokenizer("The capital of France is [MASK]", return_tensors="pt").to(device) label = tokenizer("The capital of France is Paris", return_tensors="pt")["input_ids"].to(device) # Run the model. with torch.no_grad(): output = model(**input, labels=label) print(output) ``` ## Next sentence prediction The following code shows how to use the Megatron BERT checkpoint and the Transformers API to perform next sentence prediction. ``` import os import torch from transformers import BertTokenizer, MegatronBertForNextSentencePrediction # The tokenizer. Megatron was trained with standard tokenizer(s). tokenizer = BertTokenizer.from_pretrained('nvidia/megatron-bert-cased-345m') # The path to the config/checkpoint (see the conversion step above). directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-bert-cased-345m') # Load the model from $MYDIR/nvidia/megatron-bert-cased-345m. model = MegatronBertForNextSentencePrediction.from_pretrained(directory) # Copy to the device and use FP16. assert torch.cuda.is_available() device = torch.device("cuda") model.to(device) model.eval() model.half() # Create inputs (from the BERT example page). input = tokenizer('In Italy, pizza served in formal settings is presented unsliced.', 'The sky is blue due to the shorter wavelength of blue light.', return_tensors='pt').to(device) label = torch.LongTensor([1]).to(device) # Run the model. with torch.no_grad(): output = model(**input, labels=label) print(output) ``` # Original code The original code for Megatron can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
scasutt/Prototype_training
scasutt
2022-01-04T14:59:34Z
13
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Prototype_training results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Prototype_training This model is a fine-tuned version of [scasutt/Prototype_training](https://huggingface.co/scasutt/Prototype_training) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3719 - Wer: 0.4626 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3853 | 1.47 | 100 | 0.3719 | 0.4626 | | 0.3867 | 2.94 | 200 | 0.3719 | 0.4626 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Khanh/bert-base-multilingual-cased-finetuned-squad
Khanh
2022-01-04T14:51:33Z
54
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-multilingual-cased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1782 | 1.0 | 579 | 0.5258 | | 0.4938 | 2.0 | 1158 | 0.4639 | | 0.32 | 3.0 | 1737 | 0.4919 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
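For the multilingual SQuAD-style model above, the card again stops at the loss curve. The sketch below takes the lower-level route through explicit start/end logits rather than the pipeline; the question/context pair is made up.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "Khanh/bert-base-multilingual-cased-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Where is the Eiffel Tower located?"
context = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode the span between them.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```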
sshasnain/wav2vec2-xls-r-timit-trainer
sshasnain
2022-01-04T14:49:41Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-timit-trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-timit-trainer This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1064 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5537 | 4.03 | 500 | 0.6078 | 1.0 | | 0.5444 | 8.06 | 1000 | 0.4990 | 0.9994 | | 0.3744 | 12.1 | 1500 | 0.5530 | 1.0 | | 0.2863 | 16.13 | 2000 | 0.6401 | 1.0 | | 0.2357 | 20.16 | 2500 | 0.6485 | 1.0 | | 0.1933 | 24.19 | 3000 | 0.7448 | 0.9994 | | 0.162 | 28.22 | 3500 | 0.7502 | 1.0 | | 0.1325 | 32.26 | 4000 | 0.7801 | 1.0 | | 0.1169 | 36.29 | 4500 | 0.8334 | 1.0 | | 0.1031 | 40.32 | 5000 | 0.8269 | 1.0 | | 0.0913 | 44.35 | 5500 | 0.8432 | 1.0 | | 0.0793 | 48.39 | 6000 | 0.8738 | 1.0 | | 0.0694 | 52.42 | 6500 | 0.8897 | 1.0 | | 0.0613 | 56.45 | 7000 | 0.8966 | 1.0 | | 0.0548 | 60.48 | 7500 | 0.9398 | 1.0 | | 0.0444 | 64.51 | 8000 | 0.9548 | 1.0 | | 0.0386 | 68.55 | 8500 | 0.9647 | 1.0 | | 0.0359 | 72.58 | 9000 | 0.9901 | 1.0 | | 0.0299 | 76.61 | 9500 | 1.0151 | 1.0 | | 0.0259 | 80.64 | 10000 | 1.0526 | 1.0 | | 0.022 | 84.67 | 10500 | 1.0754 | 1.0 | | 0.0189 | 88.71 | 11000 | 1.0688 | 1.0 | | 0.0161 | 92.74 | 11500 | 1.0914 | 1.0 | | 0.0138 | 96.77 | 12000 | 1.1064 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
Bhuvana/t5-base-spellchecker
Bhuvana
2022-01-04T12:46:55Z
192
13
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- widget: - text: "christmas is celbrated on decembr 25 evry ear" --- # Spell checker using T5 base transformer A simple spell checker built using T5-Base transformer. To use this model ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Bhuvana/t5-base-spellchecker") model = AutoModelForSeq2SeqLM.from_pretrained("Bhuvana/t5-base-spellchecker") def correct(inputs): input_ids = tokenizer.encode(inputs,return_tensors='pt') sample_output = model.generate( input_ids, do_sample=True, max_length=50, top_p=0.99, num_return_sequences=1 ) res = tokenizer.decode(sample_output[0], skip_special_tokens=True) return res text = "christmas is celbrated on decembr 25 evry ear" print(correct(text)) ``` This should print the corrected statement ``` christmas is celebrated on december 25 every year ``` You can also type the text under the Hosted inference API and get predictions online.
junnyu/roformer_chinese_char_base
junnyu
2022-01-04T11:45:40Z
5
0
paddlenlp
[ "paddlenlp", "pytorch", "tf", "jax", "paddlepaddle", "roformer", "tf2.0", "zh", "arxiv:2104.09864", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: zh tags: - roformer - pytorch - tf2.0 widget: - text: "今天[MASK]很好,我想去公园玩!" --- ## 介绍 ### tf版本 https://github.com/ZhuiyiTechnology/roformer ### pytorch版本+tf2.0版本 https://github.com/JunnYu/RoFormer_pytorch ## pytorch使用 ```python import torch from transformers import RoFormerForMaskedLM, RoFormerTokenizer text = "今天[MASK]很好,我[MASK]去公园玩。" tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_base") pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_base") pt_inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): pt_outputs = pt_model(**pt_inputs).logits[0] pt_outputs_sentence = "pytorch: " for i, id in enumerate(tokenizer.encode(text)): if id == tokenizer.mask_token_id: tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1]) pt_outputs_sentence += "[" + "||".join(tokens) + "]" else: pt_outputs_sentence += "".join( tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)) print(pt_outputs_sentence) # pytorch: 今天[天||气||都||风||人]很好,我[想||要||就||也||还]去公园玩。 ``` ## tensorflow2.0使用 ```python import tensorflow as tf from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM text = "今天[MASK]很好,我[MASK]去公园玩。" tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_base") tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_base") tf_inputs = tokenizer(text, return_tensors="tf") tf_outputs = tf_model(**tf_inputs, training=False).logits[0] tf_outputs_sentence = "tf2.0: " for i, id in enumerate(tokenizer.encode(text)): if id == tokenizer.mask_token_id: tokens = tokenizer.convert_ids_to_tokens( tf.math.top_k(tf_outputs[i], k=5)[1]) tf_outputs_sentence += "[" + "||".join(tokens) + "]" else: tf_outputs_sentence += "".join( tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)) print(tf_outputs_sentence) # tf2.0 今天[天||气||都||风||人]很好,我[想||要||就||也||还]去公园玩。 ``` ## 引用 Bibtex: ```tex @misc{su2021roformer, title={RoFormer: Enhanced Transformer with Rotary Position Embedding}, author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu}, year={2021}, eprint={2104.09864}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
junnyu/roformer_chinese_char_small
junnyu
2022-01-04T11:45:10Z
8
0
transformers
[ "transformers", "pytorch", "tf", "jax", "roformer", "fill-mask", "tf2.0", "zh", "arxiv:2104.09864", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: zh tags: - roformer - pytorch - tf2.0 widget: - text: "今天[MASK]很好,我想去公园玩!" --- ## 介绍 ### tf版本 https://github.com/ZhuiyiTechnology/roformer ### pytorch版本+tf2.0版本 https://github.com/JunnYu/RoFormer_pytorch ## pytorch使用 ```python import torch from transformers import RoFormerForMaskedLM, RoFormerTokenizer text = "今天[MASK]很好,我[MASK]去公园玩。" tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_small") pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_small") pt_inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): pt_outputs = pt_model(**pt_inputs).logits[0] pt_outputs_sentence = "pytorch: " for i, id in enumerate(tokenizer.encode(text)): if id == tokenizer.mask_token_id: tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1]) pt_outputs_sentence += "[" + "||".join(tokens) + "]" else: pt_outputs_sentence += "".join( tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)) print(pt_outputs_sentence) # pytorch: 今天[也||都||又||还||我]很好,我[就||想||去||也||又]去公园玩。 ``` ## tensorflow2.0使用 ```python import tensorflow as tf from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM text = "今天[MASK]很好,我[MASK]去公园玩。" tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_small") tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_small") tf_inputs = tokenizer(text, return_tensors="tf") tf_outputs = tf_model(**tf_inputs, training=False).logits[0] tf_outputs_sentence = "tf2.0: " for i, id in enumerate(tokenizer.encode(text)): if id == tokenizer.mask_token_id: tokens = tokenizer.convert_ids_to_tokens( tf.math.top_k(tf_outputs[i], k=5)[1]) tf_outputs_sentence += "[" + "||".join(tokens) + "]" else: tf_outputs_sentence += "".join( tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)) print(tf_outputs_sentence) # tf2.0: 今天[也||都||又||还||我]很好,我[就||想||去||也||又]去公园玩。 ``` ## 引用 Bibtex: ```tex @misc{su2021roformer, title={RoFormer: Enhanced Transformer with Rotary Position Embedding}, author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu}, year={2021}, eprint={2104.09864}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
pierreguillou/bert-base-cased-squad-v1.1-portuguese
pierreguillou
2022-01-04T09:57:53Z
2,742
35
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "question-answering", "bert-base", "pt", "dataset:brWaC", "dataset:squad", "dataset:squad_v1_pt", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: pt license: mit tags: - question-answering - bert - bert-base - pytorch datasets: - brWaC - squad - squad_v1_pt metrics: - squad widget: - text: "Quando começou a pandemia de Covid-19 no mundo?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano." - text: "Onde foi descoberta a Covid-19?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano." --- # Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1 ![Exemple of what can do the Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1](https://miro.medium.com/max/2000/1*te5MmdesAHCmg4KmK8zD3g.png) ## Introduction The model was trained on the dataset SQUAD v1.1 in portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) on Google Colab. The language model used is the [BERTimbau Base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (aka "bert-base-portuguese-cased") from [Neuralmind.ai](https://neuralmind.ai/): BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. ## Informations on the method used All the informations are in the blog post : [NLP | Modelo de Question Answering em qualquer idioma baseado no BERT base (estudo de caso em português)](https://medium.com/@pierre_guillou/nlp-modelo-de-question-answering-em-qualquer-idioma-baseado-no-bert-base-estudo-de-caso-em-12093d385e78) ## Notebooks in Google Colab & GitHub - Google Colab: [colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb](https://colab.research.google.com/drive/18ueLdi_V321Gz37x4gHq8mb4XZSGWfZx?usp=sharing) - GitHub: [colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/colab_question_answering_BERT_base_cased_squad_v11_pt.ipynb) ## Performance The results obtained are the following: ``` f1 = 82.50 exact match = 70.49 ``` ## How to use the model... with Pipeline ```python import transformers from transformers import pipeline # source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19 context = r""" A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória aguda causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). A doença foi identificada pela primeira vez em Wuhan, na província de Hubei, República Popular da China, em 1 de dezembro de 2019, mas o primeiro caso foi reportado em 31 de dezembro do mesmo ano. 
Acredita-se que o vírus tenha uma origem zoonótica, porque os primeiros casos confirmados tinham principalmente ligações ao Mercado Atacadista de Frutos do Mar de Huanan, que também vendia animais vivos. Em 11 de março de 2020, a Organização Mundial da Saúde declarou o surto uma pandemia. Até 8 de fevereiro de 2021, pelo menos 105 743 102 casos da doença foram confirmados em pelo menos 191 países e territórios, com cerca de 2 308 943 mortes e 58 851 440 pessoas curadas. """ model_name = 'pierreguillou/bert-base-cased-squad-v1.1-portuguese' nlp = pipeline("question-answering", model=model_name) question = "Quando começou a pandemia de Covid-19 no mundo?" result = nlp(question=question, context=context) print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}") # Answer: '1 de dezembro de 2019', score: 0.713, start: 328, end: 349 ``` ## How to use the model... with the Auto classes ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-base-cased-squad-v1.1-portuguese") model = AutoModelForQuestionAnswering.from_pretrained("pierreguillou/bert-base-cased-squad-v1.1-portuguese") ``` Or just clone the model repo: ```python git lfs install git clone https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese # if you want to clone without large files – just their pointers # prepend your git clone with the following env var: GIT_LFS_SKIP_SMUDGE=1 ``` ## Limitations and bias The training data used for this model come from Portuguese SQUAD. It could contain a lot of unfiltered content, which is far from neutral, and biases. ## Author Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advices of many organizations ([link to the list](https://medium.com/@pierre_guillou/nlp-modelo-de-question-answering-em-qualquer-idioma-baseado-no-bert-base-estudo-de-caso-em-12093d385e78#c572)). In particular: [Hugging Face](https://huggingface.co/), [Neuralmind.ai](https://neuralmind.ai/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/), [Google Colab](https://colab.research.google.com/) and [AI Lab](https://ailab.unb.br/). ## Citation If you use our work, please cite: ```bibtex @inproceedings{pierreguillou2021bertbasecasedsquadv11portuguese, title={Portuguese BERT base cased QA (Question Answering), finetuned on SQUAD v1.1}, author={Pierre Guillou}, year={2021} } ```
pierreguillou/bert-large-cased-squad-v1.1-portuguese
pierreguillou
2022-01-04T09:57:00Z
1,067
45
transformers
[ "transformers", "pytorch", "tf", "bert", "question-answering", "bert-large", "pt", "dataset:brWaC", "dataset:squad", "dataset:squad_v1_pt", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: pt license: mit tags: - question-answering - bert - bert-large - pytorch datasets: - brWaC - squad - squad_v1_pt metrics: - squad widget: - text: "Quando começou a pandemia de Covid-19 no mundo?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China." - text: "Onde foi descoberta a Covid-19?" context: "A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China." --- # Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1 ![Exemple of what can do the Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1](https://miro.medium.com/max/5256/1*QxyeAjT2V1OfE2B6nEcs3w.png) ## Introduction The model was trained on the dataset SQUAD v1.1 in portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/). The language model used is the [BERTimbau Large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) (aka "bert-large-portuguese-cased") from [Neuralmind.ai](https://neuralmind.ai/): BERTimbau is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. ## Informations on the method used All the informations are in the blog post : [NLP | Como treinar um modelo de Question Answering em qualquer linguagem baseado no BERT large, melhorando o desempenho do modelo utilizando o BERT base? (estudo de caso em português)](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96) ## Notebook in GitHub [question_answering_BERT_large_cased_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb) ([nbviewer version](https://nbviewer.jupyter.org/github/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb)) ## Performance The results obtained are the following: ``` f1 = 84.43 (against 82.50 for the base model) exact match = 72.68 (against 70.49 for the base model) ``` ## How to use the model... with Pipeline ```python import transformers from transformers import pipeline # source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19 context = r""" A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China. Em 20 de janeiro de 2020, a Organização Mundial da Saúde (OMS) classificou o surto como Emergência de Saúde Pública de Âmbito Internacional e, em 11 de março de 2020, como pandemia. 
Em 18 de junho de 2021, 177 349 274 casos foram confirmados em 192 países e territórios, com 3 840 181 mortes atribuídas à doença, tornando-se uma das pandemias mais mortais da história. Os sintomas de COVID-19 são altamente variáveis, variando de nenhum a doenças com risco de morte. O vírus se espalha principalmente pelo ar quando as pessoas estão perto umas das outras. Ele deixa uma pessoa infectada quando ela respira, tosse, espirra ou fala e entra em outra pessoa pela boca, nariz ou olhos. Ele também pode se espalhar através de superfícies contaminadas. As pessoas permanecem contagiosas por até duas semanas e podem espalhar o vírus mesmo se forem assintomáticas. """ model_name = 'pierreguillou/bert-large-cased-squad-v1.1-portuguese' nlp = pipeline("question-answering", model=model_name) question = "Quando começou a pandemia de Covid-19 no mundo?" result = nlp(question=question, context=context) print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}") # Answer: 'dezembro de 2019', score: 0.5087, start: 290, end: 306 ``` ## How to use the model... with the Auto classes ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese") model = AutoModelForQuestionAnswering.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese") ``` Or just clone the model repo: ```python git lfs install git clone https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese # if you want to clone without large files – just their pointers # prepend your git clone with the following env var: GIT_LFS_SKIP_SMUDGE=1 ``` ## Limitations and bias The training data used for this model come from Portuguese SQUAD. It could contain a lot of unfiltered content, which is far from neutral, and biases. ## Author Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advices of many organizations ([link to the list](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96#c2f5)). In particular: [Hugging Face](https://huggingface.co/), [Neuralmind.ai](https://neuralmind.ai/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) and [AI Lab](https://ailab.unb.br/). ## Citation If you use our work, please cite: ```bibtex @inproceedings{pierreguillou2021bertlargecasedsquadv11portuguese, title={Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1}, author={Pierre Guillou}, year={2021} } ```
philschmid/gbert-base-germaner
philschmid
2022-01-04T08:55:58Z
9
3
transformers
[ "transformers", "tf", "tensorboard", "bert", "token-classification", "de", "dataset:germaner", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - de license: mit widget: - text: | Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. Derzeit arbeitet er als Machine Learning Engineer und Tech Lead bei Hugging Face, um künstliche Intelligenz durch Open Source und Open Science zu demokratisieren. datasets: - germaner metrics: - precision - recall - f1 - accuracy model-index: - name: gbert-base-germaner results: - task: name: Token Classification type: token-classification dataset: name: germaner type: germaner args: default metrics: - name: precision type: precision value: 0.8520523797532108 - name: recall type: recall value: 0.8754204398447607 - name: f1 type: f1 value: 0.8635783563042368 - name: accuracy type: accuracy value: 0.976147969774973 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gbert-base-germaner This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the germaner dataset. It achieves the following results on the evaluation set: - precision: 0.8521 - recall: 0.8754 - f1: 0.8636 - accuracy: 0.9761 If you want to learn how to fine-tune BERT yourself using Keras and Tensorflow check out this blog post: https://www.philschmid.de/huggingface-transformers-keras-tf ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - num_train_epochs: 5 - train_batch_size: 16 - eval_batch_size: 32 - learning_rate: 2e-05 - weight_decay_rate: 0.01 - num_warmup_steps: 0 - fp16: True ### Framework versions - Transformers 4.14.1 - Datasets 1.16.1 - Tokenizers 0.10.3
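## How to use (inference sketch)

The card stops at the training setup, so here is a minimal inference sketch. It is an addition to the original card and assumes the repository ships TensorFlow weights (the framework tags suggest a Keras/TF training run, as in the linked blog post); the example sentence is shortened from the widget above.

```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

model_id = "philschmid/gbert-base-germaner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges word pieces into whole entity spans
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

text = ("Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. Derzeit arbeitet er "
        "als Machine Learning Engineer und Tech Lead bei Hugging Face.")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```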
pierreguillou/bert-base-cased-pt-lenerbr
pierreguillou
2022-01-04T08:51:23Z
76
6
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "pt", "dataset:pierreguillou/lener_br_finetuning_language_model", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - pt tags: - generated_from_trainer datasets: - pierreguillou/lener_br_finetuning_language_model model-index: - name: checkpoints results: - task: name: Fill Mask type: fill-mask dataset: name: pierreguillou/lener_br_finetuning_language_model type: pierreguillou/lener_br_finetuning_language_model metrics: - name: Loss type: loss value: 1.352389 widget: - text: "Com efeito, se tal fosse possível, o Poder [MASK] – que não dispõe de função legislativa – passaria a desempenhar atribuição que lhe é institucionalmente estranha (a de legislador positivo), usurpando, desse modo, no contexto de um sistema de poderes essencialmente limitados, competência que não lhe pertence, com evidente transgressão ao princípio constitucional da separação de poderes." --- ## (BERT base) Language modeling in the legal domain in Portuguese (LeNER-Br) **bert-base-cased-pt-lenerbr** is a Language Model in the legal domain in Portuguese that was finetuned on 20/12/2021 in Google Colab from the model [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) by using a MASK objective. You can check as well the [version large of this model](https://huggingface.co/pierreguillou/bert-large-cased-pt-lenerbr). ## Blog post This language model is used to get a NER model on the Portuguese judicial domain. You can check the fine-tuned NER model at [pierreguillou/ner-bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/ner-bert-base-cased-pt-lenerbr). All informations and links are in this blog post: [NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021) ## Widget & APP You can test this model into the widget of this page. ## Using the model for inference in production ```` # install pytorch: check https://pytorch.org/ # !pip install transformers from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-base-cased-pt-lenerbr") model = AutoModelForMaskedLM.from_pretrained("pierreguillou/bert-base-cased-pt-lenerbr") ```` ## Training procedure ## Notebook The notebook of finetuning ([Finetuning_language_model_BERtimbau_LeNER_Br.ipynb](https://github.com/piegu/language-models/blob/master/Finetuning_language_model_BERtimbau_LeNER_Br.ipynb)) is in github. ### Training results ```` Num examples = 3227 Num Epochs = 5 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 2020 Step Training Loss Validation Loss 100 1.988700 1.616412 200 1.724900 1.561100 300 1.713400 1.499991 400 1.687400 1.451414 500 1.579700 1.433665 600 1.556900 1.407338 700 1.591400 1.421942 800 1.546000 1.406395 900 1.510100 1.352389 1000 1.507100 1.394799 1100 1.462200 1.36809373471 ````
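### Inference example (sketch)

To complement the loading snippet above with an actual prediction, here is a small fill-mask sketch. It is an addition to the original card; the sentence is shortened from the widget example.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pierreguillou/bert-base-cased-pt-lenerbr")

text = ("Com efeito, se tal fosse possível, o Poder [MASK] – que não dispõe de função "
        "legislativa – passaria a desempenhar atribuição que lhe é institucionalmente estranha.")
for prediction in fill_mask(text, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
```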
addy88/programming-lang-identifier
addy88
2022-01-04T04:22:07Z
8
6
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
This model is a fine-tuned version of CodeBERT (a RoBERTa architecture) on CodeSearchNet for programming-language identification.

### Quick start:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("addy88/programming-lang-identifier")
model = AutoModelForSequenceClassification.from_pretrained("addy88/programming-lang-identifier")

# CODE_TO_IDENTIFY is the source-code string you want to classify
inputs = tokenizer(CODE_TO_IDENTIFY, return_tensors="pt", truncation=True)
logits = model(**inputs).logits
language_idx = logits.argmax(-1).item()  # index of the predicted language label
```
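If you want a language name rather than a raw index, a `text-classification` pipeline is a convenient wrapper. The snippet below is an addition to the original card and assumes the repo's `config.json` maps class indices to meaningful label names (if it only contains generic `LABEL_i` entries, the index-to-language mapping has to come from the training setup).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="addy88/programming-lang-identifier")

snippet = "def add(a, b):\n    return a + b"
print(classifier(snippet))  # e.g. [{'label': ..., 'score': ...}] – label names depend on the repo config
```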
Ayham/albert_gpt2_Full_summarization_cnndm
Ayham
2022-01-03T23:42:44Z
24
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: albert_gpt2_Full_summarization_cnndm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert_gpt2_Full_summarization_cnndm This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
hogger32/distilbert-base-uncased-finetuned-squad
hogger32
2022-01-03T15:39:48Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.7004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.316 | 1.0 | 2363 | 2.0234 | | 2.0437 | 2.0 | 4726 | 1.7881 | | 1.9058 | 3.0 | 7089 | 1.7004 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
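### Usage example (sketch)

Since the card lists only the training setup, here is a minimal usage sketch added for illustration. Because the model was tuned on SQuAD v2, which contains unanswerable questions, `handle_impossible_answer=True` lets the pipeline return an empty answer when nothing in the context fits.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="hogger32/distilbert-base-uncased-finetuned-squad")

context = ("SQuAD v2 combines the questions of SQuAD 1.1 with over 50,000 unanswerable "
           "questions written adversarially by crowdworkers.")
result = qa(question="What does SQuAD v2 add on top of SQuAD 1.1?",
            context=context,
            handle_impossible_answer=True)
print(result["answer"], round(result["score"], 4))
```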
deepdml/wav2vec2-base-timit-demo-colab
deepdml
2022-01-03T15:04:23Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4798 - Wer: 0.3474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5229 | 4.0 | 500 | 1.6557 | 1.0422 | | 0.6618 | 8.0 | 1000 | 0.4420 | 0.4469 | | 0.2211 | 12.0 | 1500 | 0.4705 | 0.4002 | | 0.1281 | 16.0 | 2000 | 0.4347 | 0.3688 | | 0.0868 | 20.0 | 2500 | 0.4653 | 0.3590 | | 0.062 | 24.0 | 3000 | 0.4747 | 0.3519 | | 0.0472 | 28.0 | 3500 | 0.4798 | 0.3474 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
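### Transcription example (sketch)

The card does not show how to run inference, so here is a short transcription sketch. It is an addition to the original card, assumes the repo includes the fine-tuned processor/tokenizer, and uses `sample.wav` as a placeholder for any 16 kHz mono recording.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "deepdml/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# load audio and resample to the 16 kHz rate the model was trained on
speech, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16_000:
    speech = torchaudio.functional.resample(speech, sample_rate, 16_000)

inputs = processor(speech[0].numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```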
vsalamand/fr_pipeline
vsalamand
2022-01-03T12:54:23Z
2
0
spacy
[ "spacy", "token-classification", "fr", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - fr model-index: - name: fr_pipeline results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9011406844 - name: NER Recall type: recall value: 0.92578125 - name: NER F Score type: f_score value: 0.9132947977 --- | Feature | Description | | --- | --- | | **Name** | `fr_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `FOOD PRODUCT`, `INGREDIENT`, `MEASURE`, `QUANTITY` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 91.33 | | `ENTS_P` | 90.11 | | `ENTS_R` | 92.58 | | `TOK2VEC_LOSS` | 8670.94 | | `NER_LOSS` | 4165.31 |
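### Usage (sketch)

The tables above describe the pipeline but not how to install it. The sketch below is an addition to this card and assumes the repo follows the usual `spacy-huggingface-hub` layout, where a packaged wheel sits in the repo root; the exact wheel file name may differ.

```python
# pip install https://huggingface.co/vsalamand/fr_pipeline/resolve/main/fr_pipeline-any-py3-none-any.whl
import spacy

nlp = spacy.load("fr_pipeline")
doc = nlp("Ajoutez 200 g de farine et une cuillère à soupe de sucre en poudre.")
for ent in doc.ents:
    # expected labels: FOOD PRODUCT, INGREDIENT, MEASURE, QUANTITY
    print(ent.text, ent.label_)
```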
hiiamsid/sentence_similarity_hindi
hiiamsid
2022-01-03T11:25:33Z
236
6
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "hi", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity language: - hi tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # hiiamsid/sentence_similarity_hindi This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('hiiamsid/sentence_similarity_hindi') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results ``` cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman 0.825825032,0.8227195932,0.8127990959,0.8214681478,0.8111641963,0.8194870279,0.8096042841,0.8061808483 ``` For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 341 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 137, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> - Model: [setu4993/LaBSE] (https://huggingface.co/setu4993/LaBSE) - Sentence Transformers [Semantic Textual Similarity] (https://www.sbert.net/examples/training/sts/README.html)
vdivya/wav2vec2-base-timit-demo-colab
vdivya
2022-01-03T09:51:04Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4630 - Wer: 0.3399 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4454 | 4.0 | 500 | 1.2920 | 0.9381 | | 0.5869 | 8.0 | 1000 | 0.4634 | 0.4297 | | 0.2216 | 12.0 | 1500 | 0.4481 | 0.3778 | | 0.1283 | 16.0 | 2000 | 0.4651 | 0.3741 | | 0.0872 | 20.0 | 2500 | 0.4762 | 0.3548 | | 0.0635 | 24.0 | 3000 | 0.4495 | 0.3513 | | 0.0482 | 28.0 | 3500 | 0.4630 | 0.3399 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
huggingtweets/chheplo
huggingtweets
2022-01-03T05:23:33Z
105
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/chheplo/1641187409438/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1477561163961438208/7HnhxOo__400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Pratik Desai</div> <div style="text-align: center; font-size: 14px;">@chheplo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Pratik Desai. | Data | Pratik Desai | | --- | --- | | Tweets downloaded | 3248 | | Retweets | 362 | | Short tweets | 139 | | Tweets kept | 2747 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4tv1dtfa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chheplo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p7d97s36) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p7d97s36/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/chheplo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lgris/bp-mls100-xlsr
lgris
2022-01-02T23:54:18Z
108
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "pt", "portuguese-speech-corpus", "PyTorch", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "dataset:tedx", "dataset:sid", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: pt datasets: - common_voice - mls - cetuc - lapsbm - voxforge - tedx - sid metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch license: apache-2.0 --- # mls100-xlsr: Wav2vec 2.0 with MLS Dataset This is a the demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [Multilingual Librispeech in Portuguese (MLS)](http://www.openslr.org/94/) dataset. In this notebook the model is tested against other available Brazilian Portuguese datasets. | Dataset | Train | Valid | Test | |--------------------------------|-------:|------:|------:| | CETUC | | -- | 5.4h | | Common Voice | | -- | 9.5h | | LaPS BM | | -- | 0.1h | | MLS | 161h | -- | 3.7h | | Multilingual TEDx (Portuguese) | | -- | 1.8h | | SID | | -- | 1.0h | | VoxForge | | -- | 0.1h | | Total | 161h | -- | 21.6h | #### Summary | | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG | |----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------| | mls100 (demonstration below) | 0.192 | 0.260 | 0.162 | 0.163 | 0.268 | 0.492 | 0.268 | 0.258 | | mls100 + 4-gram (demonstration below) | 0.087 | 0.173 | 0.077 | 0.126 | 0.245 | 0.415 | 0.218 | 0.191 | ## Demonstration ```python MODEL_NAME = "lgris/mls100-xlsr" ``` ### Imports and dependencies ```python %%capture !pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html !pip install datasets !pip install jiwer !pip install transformers !pip install soundfile !pip install pyctcdecode !pip install https://github.com/kpu/kenlm/archive/master.zip ``` ```python import jiwer import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) from pyctcdecode import build_ctcdecoder import torch import re import sys ``` ### Helpers ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = 16_000 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") batch["target"] = batch["sentence"] return batch ``` ```python def calc_metrics(truths, hypos): wers = [] mers = [] wils = [] for t, h in zip(truths, hypos): try: wers.append(jiwer.wer(t, h)) mers.append(jiwer.mer(t, h)) wils.append(jiwer.wil(t, h)) except: # Empty string? 
pass wer = sum(wers)/len(wers) mer = sum(mers)/len(mers) wil = sum(wils)/len(wils) return wer, mer, wil ``` ```python def load_data(dataset): data_files = {'test': f'{dataset}/test.csv'} dataset = load_dataset('csv', data_files=data_files)["test"] return dataset.map(map_to_array) ``` ### Model ```python class STT: def __init__(self, model_name, device='cuda' if torch.cuda.is_available() else 'cpu', lm=None): self.model_name = model_name self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) self.processor = Wav2Vec2Processor.from_pretrained(model_name) self.vocab_dict = self.processor.tokenizer.get_vocab() self.sorted_dict = { k.lower(): v for k, v in sorted(self.vocab_dict.items(), key=lambda item: item[1]) } self.device = device self.lm = lm if self.lm: self.lm_decoder = build_ctcdecoder( list(self.sorted_dict.keys()), self.lm ) def batch_predict(self, batch): features = self.processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(self.device) attention_mask = features.attention_mask.to(self.device) with torch.no_grad(): logits = self.model(input_values, attention_mask=attention_mask).logits if self.lm: logits = logits.cpu().numpy() batch["predicted"] = [] for sample_logits in logits: batch["predicted"].append(self.lm_decoder.decode(sample_logits)) else: pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = self.processor.batch_decode(pred_ids) return batch ``` ### Download datasets ```python %%capture !gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI !mkdir bp_dataset !unzip bp_dataset -d bp_dataset/ ``` ```python %cd bp_dataset/ ``` /content/bp_dataset ### Tests ```python stt = STT(MODEL_NAME) ``` #### CETUC ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.192586382955233 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.2604333640312866 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.16259469696969692 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.16343014413283674 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.2682880375992515 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.49252836581485837 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.2686972402597403 ### Tests with LM ```python !rm -rf ~/.cache %cd /content/ # !gdown --id '1d13Onxy9ubmJZORZ8FO2vnsnl36QMiUc' 
# trained with wikipedia; stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa') # !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp # stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa') %cd bp_dataset/ ``` /content/bp_dataset #### CETUC ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.0878818926974661 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.173303354010221 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.07691919191919189 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.12624377042839321 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.24545473435776916 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.4156272215612955 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.21832386363636366
coldfir3/xlm-roberta-base-finetuned-panx-en
coldfir3
2022-01-02T19:20:00Z
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.7075365579302588 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3925 - F1: 0.7075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1493 | 1.0 | 50 | 0.5884 | 0.4748 | | 0.5135 | 2.0 | 100 | 0.4088 | 0.6623 | | 0.3558 | 3.0 | 150 | 0.3925 | 0.7075 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
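### Inference example (sketch)

The card reports metrics only, so here is a small inference sketch added for illustration. It assumes the checkpoint's `id2label` mapping carries the PAN-X entity tags; if the training script left generic `LABEL_i` names, the printed labels will reflect that.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "coldfir3/xlm-roberta-base-finetuned-panx-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Jeff Dean works for Google in Mountain View."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id.item()])
```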
durgaamma2005/indic-transformers-te-distilbert
durgaamma2005
2022-01-02T17:56:41Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:wikiann", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: indic-transformers-te-distilbert results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: te metrics: - name: Precision type: precision value: 0.5657225853304285 - name: Recall type: recall value: 0.6486261448792673 - name: F1 type: f1 value: 0.604344453064391 - name: Accuracy type: accuracy value: 0.9049186160277506 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # indic-transformers-te-distilbert This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2940 - Precision: 0.5657 - Recall: 0.6486 - F1: 0.6043 - Accuracy: 0.9049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 125 | 0.3629 | 0.4855 | 0.5287 | 0.5062 | 0.8826 | | No log | 2.0 | 250 | 0.3032 | 0.5446 | 0.6303 | 0.5843 | 0.9002 | | No log | 3.0 | 375 | 0.2940 | 0.5657 | 0.6486 | 0.6043 | 0.9049 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
juierror/wav2vec2-large-xls-r-thai-test
juierror
2022-01-02T14:18:08Z
64
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-thai-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-thai-test This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.7728 - eval_wer: 0.9490 - eval_runtime: 678.2819 - eval_samples_per_second: 3.226 - eval_steps_per_second: 0.404 - epoch: 2.56 - step: 600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
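### Inference example (sketch)

For completeness, here is a minimal inference sketch; it is an addition to the original card and `thai_sample.wav` is a placeholder for your own audio file. Note that the evaluation WER reported above is roughly 0.95 and training stopped mid-run, so transcriptions from this checkpoint should be treated as experimental.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="juierror/wav2vec2-large-xls-r-thai-test")

# any audio file readable by ffmpeg; the pipeline handles resampling to 16 kHz
print(asr("thai_sample.wav")["text"])
```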
NbAiLab/roberta_des_ada_128
NbAiLab
2022-01-02T13:20:47Z
5
0
transformers
[ "transformers", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
Just for performing some experiments. Do not use.
AlekseyKulnevich/Pegasus-HeaderGeneration
AlekseyKulnevich
2022-01-02T12:36:45Z
110
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
**Using HuggingFace Transformers for the header generation task**

```python
from transformers import AutoModelForSeq2SeqLM, PegasusTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-HeaderGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

headers = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=10)
headers = tokenizer.batch_decode(headers, skip_special_tokens=True)
```

**Decoder configuration examples:**

[**The input text can be seen here**](https://www.bbc.com/news/science-environment-59775105)

```python
headers = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=20)
tokenizer.batch_decode(headers, skip_special_tokens=True)
```

output:
1. *the impact of climate change on tropical cyclones*
2. *the impact of human induced climate change on tropical cyclones*
3. *the impact of climate change on tropical cyclone formation in the midlatitudes*
4. *how climate change will expand the range of tropical cyclones?*
5. *the impact of climate change on tropical cyclones in the midlatitudes*
6. *global warming will expand the range of tropical cyclones*
7. *climate change will expand the range of tropical cyclones*
8. *the impact of climate change on tropical cyclone formation*
9. *the impact of human induced climate change on tropical cyclone formation*
10. *tropical cyclones in the mid-latitudes*
11. *climate change will expand the range of tropical cyclones in the middle latitudes*
12. *global warming will expand the range of tropical cyclones, a new study says*
13. *the impacts of climate change on tropical cyclones*
14. *the impact of global warming on tropical cyclones*
15. *climate change will expand the range of tropical cyclones, a new study says*
16. *global warming will expand the range of tropical cyclones in the middle latitudes*
17. *the effects of climate change on tropical cyclones*
18. *how climate change will expand the range of tropical cyclones*
19. *climate change will expand the range of tropical cyclones over the equator*
20. *the impact of human induced climate change on tropical cyclones.*

You can also play with the following parameters of the `generate` method: `top_k`, `top_p`.

[**The meaning of these text-generation parameters is explained here**](https://huggingface.co/blog/how-to-generate)
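As a concrete illustration of those sampling parameters, here is a short sketch added to the original card. It reuses `input_ids`/`input_mask` from the code above, and the values for `top_k`/`top_p` are only illustrative.

```python
# nucleus / top-k sampling instead of beam search
headers = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         do_sample=True,
                         top_k=50,
                         top_p=0.95,
                         no_repeat_ngram_size=2,
                         num_return_sequences=5)
print(tokenizer.batch_decode(headers, skip_special_tokens=True))
```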
AlekseyKulnevich/Pegasus-QuestionGeneration
AlekseyKulnevich
2022-01-02T12:24:37Z
29
1
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
**Using HuggingFace Transformers for the question generation task**

```python
from transformers import AutoModelForSeq2SeqLM, PegasusTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-QuestionGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']

questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           num_beams=32,
                           no_repeat_ngram_size=2,
                           early_stopping=True,
                           num_return_sequences=10)
questions = tokenizer.batch_decode(questions, skip_special_tokens=True)
```

**Decoder configuration examples:**

[**The input text can be seen here**](https://www.bbc.com/news/science-environment-59775105)

```python
questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           num_beams=32,
                           no_repeat_ngram_size=2,
                           early_stopping=True,
                           num_return_sequences=10)
tokenizer.batch_decode(questions, skip_special_tokens=True)
```

output:
1. *What is the impact of human induced climate change on tropical cyclones?*
2. *What is the impact of climate change on tropical cyclones?*
3. *What is the impact of human induced climate change on tropical cyclone formation?*
4. *How many tropical cyclones will occur in the mid-latitudes?*
5. *What is the impact of climate change on the formation of tropical cyclones?*
6. *Is it possible for a tropical cyclone to form in the middle latitudes?*
7. *How many tropical cyclones will be formed in the mid-latitudes?*
8. *How many tropical cyclones will there be in the mid-latitudes?*
9. *How many tropical cyclones will form in the mid-latitudes?*
10. *What is the impact of global warming on tropical cyclones?*
11. *How long does it take for a tropical cyclone to form?*
12. *What are the impacts of climate change on tropical cyclones?*
13. *What are the effects of climate change on tropical cyclones?*
14. *How many tropical cyclones will be able to form in the middle latitudes?*
15. *What is the impact of climate change on tropical cyclone formation?*
16. *What is the effect of climate change on tropical cyclones?*
17. *How long does it take for a tropical cyclone to form in the middle latitude?*
18. *How many tropical cyclones will occur in the middle latitudes?*
19. *How many tropical cyclones are likely to form in the midlatitudes?*
20. *How many tropical cyclones are likely to form in the middle latitudes?*
21. *How many tropical cyclones are expected to form in the midlatitudes?*
22. *How many tropical cyclones will be formed in the middle latitudes?*
23. *How many tropical cyclones will there be in the middle latitudes?*
24. *How long will it take for a tropical cyclone to form in the middle latitude?*
25. *What is the impact of global warming on tropical cyclone formation?*
26. *How many tropical cyclones will form in the middle latitudes?*
27. *How many tropical cyclones can we expect to form in the middle latitudes?*
28. *Is it possible for a tropical cyclone to form in the middle latitude?*
29. *What is the effect of climate change on tropical cyclone formation?*
30. *What are the effects of climate change on tropical cyclone formation?*

You can also play with the following parameters of the `generate` method: `top_k`, `top_p`.

[**The meaning of these text-generation parameters is explained here**](https://huggingface.co/blog/how-to-generate)
addy88/perceiver_imdb
addy88
2022-01-02T11:20:07Z
5
0
transformers
[ "transformers", "pytorch", "perceiver", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
### How to use

Here is how to use this model in PyTorch:

```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("addy88/perceiver_imdb")
model = PerceiverForMaskedLM.from_pretrained("addy88/perceiver_imdb")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

text = "This is an incomplete sentence where some words are missing."

# prepare input
encoding = tokenizer(text, padding="max_length", return_tensors="pt")

# mask " missing.". Note that the model performs much better if the masked span starts with a space.
encoding.input_ids[0, 52:61] = tokenizer.mask_token_id
inputs, input_mask = encoding.input_ids.to(device), encoding.attention_mask.to(device)

# forward pass
outputs = model(inputs=inputs, attention_mask=input_mask)
logits = outputs.logits
masked_tokens_predictions = logits[0, 51:61].argmax(dim=-1)
print(tokenizer.decode(masked_tokens_predictions))
# should print " missing."
```
addy88/gpt-j-8bit
addy88
2022-01-02T06:34:27Z
5
2
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "arxiv:2106.09685", "arxiv:2110.02861", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
This model is an 8-bit version of EleutherAI/gpt-j-6B, converted with Facebook's bitsandbytes library. The original GPT-J takes 22+ GB of memory for the float32 parameters alone, before you account for gradients and optimizer state, so the weights were converted to 8-bit to make fine-tuning on a single GPU practical.

Here's how to run it: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1KNf5siQdM7ILQM-pHsP6gNVPKl1SJdU1)

__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.

Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:

- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication (a toy illustration appears at the end of this card)
- gradient checkpointing stores only one activation per layer, using dramatically less memory at the cost of ~30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)

In other words, all of the large weight matrices are frozen in 8-bit, and you only train small adapters and, optionally, 1d tensors (layernorm scales, biases).

![img](https://i.imgur.com/n4XXo1x.png)

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://colab.research.google.com/drive/1FxGeYQyE7cx9VNCBC4gUyRVZGORW7c6g) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but that is not statistically significant.

Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.

__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.

### How should I fine-tune the model?

We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf). On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size. As a result, the larger the batch size you can fit, the more efficiently you will train.

### Can I use this technique with other models?

The model was converted using [this notebook](https://colab.research.google.com/drive/1rwxh0XRdVi8VEbTx97l9xXr4JbRhZaq5#scrollTo=CX3VHn-J1Zer). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
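Below is a toy, self-contained sketch of the storage-only 8-bit idea described above: weights are kept in int8 with per-row scales and de-quantized just in time for the matrix multiplication. It is only an illustration — the actual conversion uses bitsandbytes' block-wise dynamic (nonlinear) quantization, not this simple absmax scheme.

```python
# Toy illustration (NOT the actual bitsandbytes code): absmax 8-bit storage with
# just-in-time de-quantization before the matrix multiplication.
import torch
import torch.nn.functional as F

def quantize_8bit(weight: torch.Tensor):
    # one scale per output row, so each row maps onto the int8 range [-127, 127]
    scale = weight.abs().max(dim=1, keepdim=True).values / 127.0
    q = torch.clamp((weight / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def linear_8bit(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor, bias=None):
    w = q.to(x.dtype) * scale  # de-quantize just-in-time; the matmul stays in float
    return F.linear(x, w, bias)

w = torch.randn(8, 16)       # pretend this is a frozen transformer weight
q, s = quantize_8bit(w)      # this is what gets stored: int8 values plus small scales
x = torch.randn(2, 16)
print((linear_8bit(x, q, s) - F.linear(x, w)).abs().max())  # small quantization error
```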
addy88/gptj8
addy88
2022-01-02T06:33:57Z
14
1
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "arxiv:2106.09685", "arxiv:2110.02861", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
This model is an 8-bit version of EleutherAI/gpt-j-6B, converted with Facebook's bitsandbytes library. The original GPT-J takes 22+ GB of memory for the float32 parameters alone, before you account for gradients and optimizer state, so the weights were converted to 8-bit to make fine-tuning on a single GPU practical.

Here's how to run it: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1KNf5siQdM7ILQM-pHsP6gNVPKl1SJdU1)

__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.

Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:

- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- gradient checkpointing stores only one activation per layer, using dramatically less memory at the cost of ~30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)

In other words, all of the large weight matrices are frozen in 8-bit, and you only train small adapters and, optionally, 1d tensors (layernorm scales, biases).

![img](https://i.imgur.com/n4XXo1x.png)

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://colab.research.google.com/drive/1FxGeYQyE7cx9VNCBC4gUyRVZGORW7c6g) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but that is not statistically significant.

Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.

__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.

### How should I fine-tune the model?

We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf). On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size. As a result, the larger the batch size you can fit, the more efficiently you will train.

### Can I use this technique with other models?

The model was converted using [this notebook](https://colab.research.google.com/drive/1rwxh0XRdVi8VEbTx97l9xXr4JbRhZaq5#scrollTo=CX3VHn-J1Zer). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
ykliu1892/opus-mt-zh-de-tuned-Tatoeba-small
ykliu1892
2022-01-02T04:09:53Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: opus-mt-zh-de-tuned-Tatoeba-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-zh-de-tuned-Tatoeba-small This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-de](https://huggingface.co/Helsinki-NLP/opus-mt-zh-de) on a refined dataset of Tatoeba German - Chinese corpus https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/data/README.md. It achieves the following results on the evaluation set: - Loss: 2.2703 - Bleu: 16.504 - Gen Len: 16.6531 ## Model description More information needed ## Intended uses & limitations Prefix used during fine-tuning: "将中文翻译成德语". This prefix is also recommended in prediction. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:| | 2.7229 | 0.24 | 16000 | 2.5605 | 14.1956 | 16.2206 | | 2.5988 | 0.49 | 32000 | 2.4447 | 14.8619 | 16.2726 | | 2.515 | 0.73 | 48000 | 2.3817 | 15.3212 | 16.2823 | | 2.4683 | 0.97 | 64000 | 2.3367 | 15.9043 | 16.7138 | | 2.3873 | 1.22 | 80000 | 2.3115 | 16.1037 | 16.6369 | | 2.3792 | 1.46 | 96000 | 2.2919 | 16.2957 | 16.6304 | | 2.3626 | 1.7 | 112000 | 2.2790 | 16.2995 | 16.6235 | | 2.3353 | 1.95 | 128000 | 2.2703 | 16.504 | 16.6531 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
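For quick experimentation, a minimal inference sketch is given below; it assumes the checkpoint loads through the standard seq2seq interfaces and simply prepends the prefix recommended above. The generation settings are illustrative, not tuned values from the training run.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "ykliu1892/opus-mt-zh-de-tuned-Tatoeba-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# prepend the recommended prefix before the Chinese source sentence
text = "将中文翻译成德语: 今天天气很好。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```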
s3h/arabert-gec-v2-2
s3h
2022-01-01T18:50:19Z
3
0
transformers
[ "transformers", "t5", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_keras_callback model-index: - name: s3h/arabic-t5-small-finetuned-gec results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # s3h/arabic-t5-small-finetuned-gec This model is a fine-tuned version of [flax-community/arabic-t5-small](https://huggingface.co/flax-community/arabic-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0930 - Validation Loss: 0.9132 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 573, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0930 | 0.9132 | 0 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
s3h/arabic-t5-small-finetuned-gec
s3h
2022-01-01T18:36:08Z
9
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_keras_callback model-index: - name: s3h/arabic-t5-small-finetuned-gec results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # s3h/arabic-t5-small-finetuned-gec This model is a fine-tuned version of [flax-community/arabic-t5-small](https://huggingface.co/flax-community/arabic-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0930 - Validation Loss: 0.9132 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 573, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0930 | 0.9132 | 0 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
s3h/arabert-gec-v2-3
s3h
2022-01-01T16:12:05Z
4
0
transformers
[ "transformers", "tf", "bert", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- tags: - generated_from_keras_callback model-index: - name: s3h/arabert-gec-v2-2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # s3h/arabert-gec-v2-2 This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 8.3883 - Validation Loss: 8.2485 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 573, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 8.3883 | 8.2485 | 0 | ### Framework versions - Transformers 4.14.1 - TensorFlow 2.6.2 - Datasets 1.17.0 - Tokenizers 0.10.3
mattchurgin/distilbert-sst2
mattchurgin
2021-12-31T23:08:41Z
21
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue model-index: - name: distilbert-sst2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - eval_loss: 0.4182 - eval_accuracy: 0.8911 - eval_runtime: 1.8021 - eval_samples_per_second: 483.882 - eval_steps_per_second: 60.485 - epoch: 0.8 - step: 6700 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
mattchurgin/distilbert-mrpc
mattchurgin
2021-12-31T22:53:22Z
16
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8480392156862745 - name: F1 type: f1 value: 0.8934707903780068 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6783 - Accuracy: 0.8480 - F1: 0.8935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5916 | 0.22 | 100 | 0.5676 | 0.7157 | 0.8034 | | 0.5229 | 0.44 | 200 | 0.4534 | 0.7770 | 0.8212 | | 0.5055 | 0.65 | 300 | 0.4037 | 0.8137 | 0.8762 | | 0.4597 | 0.87 | 400 | 0.3706 | 0.8407 | 0.8893 | | 0.4 | 1.09 | 500 | 0.4590 | 0.8113 | 0.8566 | | 0.3498 | 1.31 | 600 | 0.4196 | 0.8554 | 0.8974 | | 0.2916 | 1.53 | 700 | 0.4606 | 0.8554 | 0.8933 | | 0.3309 | 1.74 | 800 | 0.5162 | 0.8578 | 0.9027 | | 0.3788 | 1.96 | 900 | 0.3911 | 0.8529 | 0.8980 | | 0.2059 | 2.18 | 1000 | 0.5842 | 0.8554 | 0.8995 | | 0.1595 | 2.4 | 1100 | 0.5701 | 0.8578 | 0.8975 | | 0.1205 | 2.61 | 1200 | 0.6905 | 0.8407 | 0.8889 | | 0.174 | 2.83 | 1300 | 0.6783 | 0.8480 | 0.8935 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
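Since the card above is auto-generated, here is a minimal, hedged usage sketch for paraphrase detection with this checkpoint. The label order follows the usual GLUE/MRPC convention (index 1 = paraphrase); this is an assumption, as the saved config may only expose LABEL_0/LABEL_1.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "mattchurgin/distilbert-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# MRPC is a sentence-pair task, so both sentences go into the tokenizer together
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Record profits were reported by the company this quarter.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed order: [not paraphrase, paraphrase]
```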
avichr/heBERT_sentiment_analysis
avichr
2021-12-31T16:08:22Z
17,326
26
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "arxiv:1810.04805", "arxiv:2102.01909", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition HeBERT is a Hebrew pre-trained language model. It is based on Google's BERT architecture and it is BERT-Base config [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). <br> HeBert was trained on three datasets: 1. A Hebrew version of OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences. 2. A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences 3. Emotion UGC data was collected for the purpose of this study. (described below) We evaluated the model on emotion recognition and sentiment analysis, for downstream tasks. ### Emotion UGC Data Description Our User-Generated Content (UGC) is comments written on articles collected from 3 major news sites, between January 2020 to August 2020, Total data size of ~150 MB of data, including over 7 million words and 350K sentences. 4000 sentences annotated by crowd members (3-10 annotators per sentence) for 8 emotions (anger, disgust, expectation, fear, happy, sadness, surprise, and trust) and overall sentiment/polarity <br> In order to validate the annotation, we search for an agreement between raters to emotion in each sentence using Krippendorff's alpha [(krippendorff, 1970)](https://journals.sagepub.com/doi/pdf/10.1177/001316447003000105). We left sentences that got alpha > 0.7. Note that while we found a general agreement between raters about emotions like happiness, trust, and disgust, there are few emotions with general disagreement about them, apparently given the complexity of finding them in the text (e.g. expectation and surprise). ### Performance #### sentiment analysis | | precision | recall | f1-score | |--------------|-----------|--------|----------| | natural | 0.83 | 0.56 | 0.67 | | positive | 0.96 | 0.92 | 0.94 | | negative | 0.97 | 0.99 | 0.98 | | accuracy | | | 0.97 | | macro avg | 0.92 | 0.82 | 0.86 | | weighted avg | 0.96 | 0.97 | 0.96 | ## How to use ### For masked-LM model (can be fine-tunned to any down-stream task) ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT") model = AutoModel.from_pretrained("avichr/heBERT") from transformers import pipeline fill_mask = pipeline( "fill-mask", model="avichr/heBERT", tokenizer="avichr/heBERT" ) fill_mask("הקורונה לקחה את [MASK] ולנו לא נשאר דבר.") ``` ### For sentiment classification model (polarity ONLY): ``` from transformers import AutoTokenizer, AutoModel, pipeline tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis") #same as 'avichr/heBERT' tokenizer model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis") # how to use? 
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores = True
)

>>> sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
[[{'label': 'natural', 'score': 0.9978172183036804},
  {'label': 'positive', 'score': 0.0014792329166084528},
  {'label': 'negative', 'score': 0.0007035882445052266}]]

>>> sentiment_analysis('קפה זה טעים')
[[{'label': 'natural', 'score': 0.00047328314394690096},
  {'label': 'possitive', 'score': 0.9994067549705505},
  {'label': 'negetive', 'score': 0.00011996887042187154}]]

>>> sentiment_analysis('אני לא אוהב את העולם')
[[{'label': 'natural', 'score': 9.214012970915064e-05},
  {'label': 'possitive', 'score': 8.876807987689972e-05},
  {'label': 'negetive', 'score': 0.9998190999031067}]]
```

Our model is also available on AWS! for more information visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda)

## Stay tuned!

We are still working on our model and will edit this page as we progress.<br>
Note that we have released only sentiment analysis (polarity) at this point, emotion detection will be released later on.<br>
our git: https://github.com/avichaychriqui/HeBERT

## If you used this model please cite us as :

Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909.

```
@article{chriqui2021hebert,
  title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
  author={Chriqui, Avihay and Yahav, Inbal},
  journal={arXiv preprint arXiv:2102.01909},
  year={2021}
}
```
model-attribution-challenge/gpt-neo-125M
model-attribution-challenge
2021-12-31T13:46:51Z
6
1
transformers
[ "transformers", "pytorch", "jax", "rust", "gpt_neo", "text-generation", "text generation", "causal-lm", "en", "arxiv:2101.00027", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-07-26T13:34:26Z
--- language: - en tags: - text generation - pytorch - causal-lm license: apache-2.0 datasets: - The Pile --- # GPT-Neo 125M ## Model Description GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model. ## Training data GPT-Neo 125M was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model. ## Training procedure This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as a masked autoregressive language model, using cross-entropy loss. ## Intended Use and Limitations This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M') >>> generator("EleutherAI has", do_sample=True, min_length=50) [{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Eval results TBD ### Down-Stream Applications TBD ### BibTeX entry and citation info To cite this model, use ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } @article{gao2020pile, title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling}, author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others}, journal={arXiv preprint arXiv:2101.00027}, year={2020} } ```
nwl/DialoGPT-small-enhypen
nwl
2021-12-31T13:38:51Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - conversational ---
airKlizz/mt5-base-wikinewssum-english-100
airKlizz
2021-12-31T12:02:27Z
14
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-wikinewssum-english-100 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-wikinewssum-english-100 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.6225 - Rouge1: 3.909 - Rouge2: 0.9312 - Rougel: 3.3835 - Rougelsum: 3.7786 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 0.96 | 12 | 14.4949 | 2.7398 | 0.7181 | 2.491 | 2.6561 | | No log | 1.96 | 24 | 10.5056 | 4.4428 | 1.4293 | 3.8469 | 4.2869 | | No log | 2.96 | 36 | 8.9856 | 4.1179 | 1.229 | 3.5726 | 3.9693 | | No log | 3.96 | 48 | 7.7950 | 3.9217 | 1.1339 | 3.4256 | 3.7905 | | No log | 4.96 | 60 | 7.0734 | 3.8004 | 1.0326 | 3.3246 | 3.6766 | | No log | 5.96 | 72 | 6.7897 | 3.6351 | 0.9162 | 3.1839 | 3.5149 | | No log | 6.96 | 84 | 6.6610 | 3.7486 | 0.8829 | 3.2583 | 3.6193 | | No log | 7.96 | 96 | 6.6225 | 3.909 | 0.9312 | 3.3835 | 3.7786 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.1 - Datasets 1.16.1 - Tokenizers 0.10.3
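Since the card does not include inference code, here is a minimal sketch assuming the checkpoint works with the standard summarization pipeline; the length settings are illustrative only.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="airKlizz/mt5-base-wikinewssum-english-100")

article = "..."  # paste a news article here
print(summarizer(article, max_length=128, min_length=16, do_sample=False))
```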
Muennighoff/SBERT-base-nli-stsb-v2
Muennighoff
2021-12-31T07:59:14Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- This model is used in "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning".
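A minimal usage sketch, assuming the checkpoint loads with the sentence-transformers library as its tags suggest; the example sentences are illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Muennighoff/SBERT-base-nli-stsb-v2")
embeddings = model.encode(
    ["A man is eating food.", "A man is eating a piece of bread."],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two sentences
```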
NahedAbdelgaber/distilbert-base-uncased-finetuned-evaluating-student-writing
NahedAbdelgaber
2021-12-31T06:28:07Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-evaluating-student-writing results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-evaluating-student-writing This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3485 | 1.0 | 878 | 2.0959 | | 2.1407 | 2.0 | 1756 | 2.0162 | | 2.0843 | 3.0 | 2634 | 1.9846 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
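Since the card is auto-generated and gives no usage example, here is a minimal fill-mask sketch; the prompt sentence is an illustrative assumption, not taken from the training data.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="NahedAbdelgaber/distilbert-base-uncased-finetuned-evaluating-student-writing",
)
print(fill_mask("The student wrote a very [MASK] essay."))
```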
huggingtweets/hey_ash21
huggingtweets
2021-12-31T04:19:10Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/hey_ash21/1640924344980/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1364393973331021830/i7JjvUhX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ash 🫀</div> <div style="text-align: center; font-size: 14px;">@hey_ash21</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ash 🫀. | Data | ash 🫀 | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 193 | | Short tweets | 132 | | Tweets kept | 2917 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tujmcza/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hey_ash21's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pwdhn6q) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pwdhn6q/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hey_ash21') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
federicopascual/finetuning-sentiment-analysis-model-3000-samples
federicopascual
2021-12-30T20:32:34Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-analysis-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.88125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-analysis-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3130 - Accuracy: 0.8733 - F1: 0.8812 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
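For completeness, a minimal inference sketch is shown below; note that the returned label names depend on the saved config and may appear as LABEL_0/LABEL_1 if no id2label mapping was stored.

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="federicopascual/finetuning-sentiment-analysis-model-3000-samples",
)
print(classifier("This movie was a pleasant surprise from start to finish."))
```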
federicopascual/finetune-sentiment-analysis-model-3000-samples
federicopascual
2021-12-30T19:29:48Z
25
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetune-sentiment-analysis-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8866666666666667 - name: F1 type: f1 value: 0.8944099378881988 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune-sentiment-analysis-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4558 - Accuracy: 0.8867 - F1: 0.8944 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
ysslang/autonlp-test-459011902
ysslang
2021-12-30T17:05:31Z
104
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "zh", "dataset:ysslang/autonlp-data-test", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: zh widget: - text: "I love AutoNLP 🤗" datasets: - ysslang/autonlp-data-test co2_eq_emissions: 10.9230691350863 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 459011902 - CO2 Emissions (in grams): 10.9230691350863 ## Validation Metrics - Loss: 0.7189690470695496 - Accuracy: 0.7453263867606497 - Macro F1: 0.630810193227066 - Micro F1: 0.7453263867606497 - Weighted F1: 0.7399327942874923 - Macro Precision: 0.656237447101913 - Micro Precision: 0.7453263867606497 - Weighted Precision: 0.7410161412822164 - Macro Recall: 0.6340140718425453 - Micro Recall: 0.7453263867606497 - Weighted Recall: 0.7453263867606497 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ysslang/autonlp-test-459011902 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ysslang/autonlp-test-459011902", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("ysslang/autonlp-test-459011902", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
scasutt/Prototype_training_large_model
scasutt
2021-12-30T14:40:39Z
161
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Prototype_training_large_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Prototype_training_large_model This model is a fine-tuned version of [scasutt/Prototype_training_large_model](https://huggingface.co/scasutt/Prototype_training_large_model) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2585 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3.0545 | 1.47 | 100 | 3.2604 | 1.0 | | 3.0413 | 2.93 | 200 | 3.2585 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
pinecone/bert-qqp-cross-encoder
pinecone
2021-12-30T12:11:52Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# Quora-QP Cross Encoder Demo model for use as part of Augmented SBERT chapters of the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp).
pinecone/bert-medqp-cross-encoder
pinecone
2021-12-30T12:11:30Z
7
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Med-QP Cross Encoder Demo model for use as part of Augmented SBERT chapters of the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp).
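A hypothetical usage sketch, assuming the checkpoint is compatible with sentence-transformers' CrossEncoder wrapper (it is a BERT sequence-classification model); the example questions are made up.

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("pinecone/bert-medqp-cross-encoder")
scores = model.predict([
    ("What causes high blood pressure?", "Why does hypertension develop?"),
    ("What causes high blood pressure?", "How do I treat a sprained ankle?"),
])
print(scores)  # higher score = more likely a matching question pair
```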
pinecone/bert-stsb-cross-encoder
pinecone
2021-12-30T12:11:03Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# STSb Cross Encoder Demo model for use as part of Augmented SBERT chapters of the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp).
NahedAbdelgaber/distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing
NahedAbdelgaber
2021-12-30T06:58:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3408 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5869 | 1.0 | 157 | 2.3949 | | 2.4142 | 2.0 | 314 | 2.3551 | | 2.3792 | 3.0 | 471 | 2.2840 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
lgris/distilxlsr_bp_4-12
lgris
2021-12-30T00:38:04Z
161
0
transformers
[ "transformers", "pytorch", "wav2vec2", "feature-extraction", "speech", "pt", "arxiv:2110.01900", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: pt tags: - speech license: apache-2.0 --- # DistilXLSR-53 for BP [DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900) Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee **Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese Datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small to perform such task (the performance might not be so good as the [original work](https://arxiv.org/abs/2110.01900)). **Abstract** Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech. # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
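A minimal feature-extraction sketch under two assumptions: that the checkpoint loads as a standard Wav2Vec2Model, and that the feature extractor of the original XLSR-53 model can be reused (the distilled repo may not ship its own preprocessor config). The audio below is a placeholder; real input must be 16 kHz mono.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# assumption: reuse the upstream XLSR-53 feature extractor for preprocessing
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2Model.from_pretrained("lgris/distilxlsr_bp_4-12")

speech = torch.zeros(16000).numpy()  # 1 second of silence as a stand-in for real audio
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```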
lgris/distilxlsr_bp_8-12
lgris
2021-12-30T00:37:53Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "feature-extraction", "speech", "pt", "arxiv:2110.01900", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: pt tags: - speech license: apache-2.0 --- # DistilXLSR-53 for BP [DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900) Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee **Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese Datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small to perform such task (the performance might not be so good as the [original work](https://arxiv.org/abs/2110.01900)). **Abstract** Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech. # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
lgris/distilxlsr_bp_12-16
lgris
2021-12-30T00:37:12Z
161
1
transformers
[ "transformers", "pytorch", "wav2vec2", "feature-extraction", "speech", "pt", "arxiv:2110.01900", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: pt tags: - speech license: apache-2.0 --- # DistilXLSR-53 for BP [DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900) Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee **Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese Datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small to perform such task (the performance might not be so good as the [original work](https://arxiv.org/abs/2110.01900)). **Abstract** Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech. # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
danicodes/autonlp-legal-text-summary-457311749
danicodes
2021-12-29T22:18:48Z
7
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "unk", "dataset:danicodes/autonlp-data-legal-text-summary", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- danicodes/autonlp-data-legal-text-summary
co2_eq_emissions: 10.148805588432941
---

# Model Trained Using AutoNLP

- Problem type: Summarization
- Model ID: 457311749
- CO2 Emissions (in grams): 10.148805588432941

## Validation Metrics

- Loss: 1.647747278213501
- Rouge1: 32.4854
- Rouge2: 19.8974
- RougeL: 30.0602
- RougeLsum: 29.9377
- Gen Len: 46.6556

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/danicodes/autonlp-legal-text-summary-457311749
```
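A Python equivalent of the cURL call is sketched below; it assumes the standard seq2seq API works for this Pegasus checkpoint, and the generation settings are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "danicodes/autonlp-legal-text-summary-457311749"
# add use_auth_token=True to both calls if the repository is private
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

text = "..."  # legal text to summarize
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```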
BigSalmon/InformalToFormalLincoln17
BigSalmon
2021-12-29T21:25:31Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Informal to Formal: ``` from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln17") model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln17") ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time) ``` ``` https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time) ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ````
pierreguillou/ner-bert-base-cased-pt-lenerbr
pierreguillou
2021-12-29T19:32:39Z
108,865
15
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "pt", "dataset:lener_br", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - pt tags: - generated_from_trainer datasets: - lener_br metrics: - precision - recall - f1 - accuracy model-index: - name: checkpoints results: - task: name: Token Classification type: token-classification dataset: name: lener_br type: lener_br metrics: - name: F1 type: f1 value: 0.8926146010186757 - name: Precision type: precision value: 0.8810222036028488 - name: Recall type: recall value: 0.9045161290322581 - name: Accuracy type: accuracy value: 0.9759397808828684 - name: Loss type: loss value: 0.18803243339061737 widget: - text: "Ao Instituto Médico Legal da jurisdição do acidente ou da residência cumpre fornecer, no prazo de 90 dias, laudo à vítima (art. 5, § 5, Lei n. 6.194/74 de 19 de dezembro de 1974), função técnica que pode ser suprida por prova pericial realizada por ordem do juízo da causa, ou por prova técnica realizada no âmbito administrativo que se mostre coerente com os demais elementos de prova constante dos autos." - text: "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial." - text: "Dispõe sobre o estágio de estudantes; altera a redação do art. 428 da Consolidação das Leis do Trabalho – CLT, aprovada pelo Decreto-Lei no 5.452, de 1o de maio de 1943, e a Lei no 9.394, de 20 de dezembro de 1996; revoga as Leis nos 6.494, de 7 de dezembro de 1977, e 8.859, de 23 de março de 1994, o parágrafo único do art. 82 da Lei no 9.394, de 20 de dezembro de 1996, e o art. 6o da Medida Provisória no 2.164-41, de 24 de agosto de 2001; e dá outras providências." --- ## (BERT base) NER model in the legal domain in Portuguese (LeNER-Br) **ner-bert-base-portuguese-cased-lenerbr** is a NER model (token classification) in the legal domain in Portuguese that was finetuned on 20/12/2021 in Google Colab from the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) on the dataset [LeNER_br](https://huggingface.co/datasets/lener_br) by using a NER objective. Due to the small size of BERTimbau base and finetuning dataset, the model overfitted before to reach the end of training. Here are the overall final metrics on the validation dataset (*note: see the paragraph "Validation metrics by Named Entity" to get detailed metrics*): - **f1**: 0.8926146010186757 - **precision**: 0.8810222036028488 - **recall**: 0.9045161290322581 - **accuracy**: 0.9759397808828684 - **loss**: 0.18803243339061737 Check as well the [large version of this model](https://huggingface.co/pierreguillou/ner-bert-large-cased-pt-lenerbr) with a f1 of 0.908. **Note**: the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) is a language model that was created through the finetuning of the model [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) by using a MASK objective. This first specialization of the language model before finetuning on the NER task improved a bit the model quality. 
To prove it, here are the results of the NER model finetuned from the model [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (a non-specialized language model): - **f1**: 0.8716487228203504 - **precision**: 0.8559286898839138 - **recall**: 0.8879569892473118 - **accuracy**: 0.9755893153732458 - **loss**: 0.1133928969502449 ## Blog post [NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021) ## Widget & App You can test this model into the widget of this page. Use as well the [NER App](https://huggingface.co/spaces/pierreguillou/ner-bert-pt-lenerbr) that allows comparing the 2 BERT models (base and large) fitted in the NER task with the legal LeNER-Br dataset. ## Using the model for inference in production ```` # install pytorch: check https://pytorch.org/ # !pip install transformers from transformers import AutoModelForTokenClassification, AutoTokenizer import torch # parameters model_name = "pierreguillou/ner-bert-base-cased-pt-lenerbr" model = AutoModelForTokenClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) input_text = "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial." # tokenization inputs = tokenizer(input_text, max_length=512, truncation=True, return_tensors="pt") tokens = inputs.tokens() # get predictions outputs = model(**inputs).logits predictions = torch.argmax(outputs, dim=2) # print predictions for token, prediction in zip(tokens, predictions[0].numpy()): print((token, model.config.id2label[prediction])) ```` You can use pipeline, too. However, it seems to have an issue regarding to the max_length of the input sequence. ```` !pip install transformers import transformers from transformers import pipeline model_name = "pierreguillou/ner-bert-base-cased-pt-lenerbr" ner = pipeline( "ner", model=model_name ) ner(input_text) ```` ## Training procedure ### Notebook The notebook of finetuning ([HuggingFace_Notebook_token_classification_NER_LeNER_Br.ipynb](https://github.com/piegu/language-models/blob/master/HuggingFace_Notebook_token_classification_NER_LeNER_Br.ipynb)) is in github. ### Hyperparameters #### batch, learning rate... - per_device_batch_size = 2 - gradient_accumulation_steps = 2 - learning_rate = 2e-5 - num_train_epochs = 10 - weight_decay = 0.01 - optimizer = AdamW - betas = (0.9,0.999) - epsilon = 1e-08 - lr_scheduler_type = linear - seed = 7 #### save model & load best model - save_total_limit = 2 - logging_steps = 300 - eval_steps = logging_steps - evaluation_strategy = 'steps' - logging_strategy = 'steps' - save_strategy = 'steps' - save_steps = logging_steps - load_best_model_at_end = True - fp16 = True #### get best model through a metric - metric_for_best_model = 'eval_f1' - greater_is_better = True ### Training results ```` Num examples = 7828 Num Epochs = 10 Instantaneous batch size per device = 2 Total train batch size (w. 
parallel, distributed & accumulation) = 4 Gradient Accumulation steps = 2 Total optimization steps = 19570 Step Training Loss Validation Loss Precision Recall F1 Accuracy 300 0.127600 0.178613 0.722909 0.741720 0.732194 0.948802 600 0.088200 0.136965 0.733636 0.867742 0.795074 0.963079 900 0.078000 0.128858 0.791912 0.838065 0.814335 0.965243 1200 0.077800 0.126345 0.815400 0.865376 0.839645 0.967849 1500 0.074100 0.148207 0.779274 0.895914 0.833533 0.960184 1800 0.059500 0.116634 0.830829 0.868172 0.849090 0.969342 2100 0.044500 0.208459 0.887150 0.816559 0.850392 0.960535 2400 0.029400 0.136352 0.867821 0.851398 0.859531 0.970271 2700 0.025000 0.165837 0.814881 0.878495 0.845493 0.961235 3000 0.038400 0.120629 0.811719 0.893763 0.850768 0.971506 3300 0.026200 0.175094 0.823435 0.882581 0.851983 0.962957 3600 0.025600 0.178438 0.881095 0.886022 0.883551 0.963689 3900 0.041000 0.134648 0.789035 0.916129 0.847846 0.967681 4200 0.026700 0.130178 0.821275 0.903226 0.860303 0.972313 4500 0.018500 0.139294 0.844016 0.875054 0.859255 0.971140 4800 0.020800 0.197811 0.892504 0.873118 0.882705 0.965883 5100 0.019300 0.161239 0.848746 0.888172 0.868012 0.967849 5400 0.024000 0.139131 0.837507 0.913333 0.873778 0.970591 5700 0.018400 0.157223 0.899754 0.864731 0.881895 0.970210 6000 0.023500 0.137022 0.883018 0.873333 0.878149 0.973243 6300 0.009300 0.181448 0.840490 0.900860 0.869628 0.968290 6600 0.019200 0.173125 0.821316 0.896559 0.857290 0.966736 6900 0.016100 0.143160 0.789938 0.904946 0.843540 0.968245 7200 0.017000 0.145755 0.823274 0.897634 0.858848 0.969037 7500 0.012100 0.159342 0.825694 0.883226 0.853491 0.967468 7800 0.013800 0.194886 0.861237 0.859570 0.860403 0.964771 8100 0.008000 0.140271 0.829914 0.896129 0.861752 0.971567 8400 0.010300 0.143318 0.826844 0.908817 0.865895 0.973578 8700 0.015000 0.143392 0.847336 0.889247 0.867786 0.973365 9000 0.006000 0.143512 0.847795 0.905591 0.875741 0.972892 9300 0.011800 0.138747 0.827133 0.894194 0.859357 0.971673 9600 0.008500 0.159490 0.837030 0.909032 0.871546 0.970028 9900 0.010700 0.159249 0.846692 0.910968 0.877655 0.970546 10200 0.008100 0.170069 0.848288 0.900645 0.873683 0.969113 10500 0.004800 0.183795 0.860317 0.899355 0.879403 0.969570 10800 0.010700 0.157024 0.837838 0.906667 0.870894 0.971094 11100 0.003800 0.164286 0.845312 0.880215 0.862410 0.970744 11400 0.009700 0.204025 0.884294 0.887527 0.885907 0.968854 11700 0.008900 0.162819 0.829415 0.887742 0.857588 0.970530 12000 0.006400 0.164296 0.852666 0.901075 0.876202 0.971414 12300 0.007100 0.143367 0.852959 0.895699 0.873807 0.973669 12600 0.015800 0.153383 0.859224 0.900430 0.879345 0.972679 12900 0.006600 0.173447 0.869954 0.899140 0.884306 0.970927 13200 0.006800 0.163234 0.856849 0.897204 0.876563 0.971795 13500 0.003200 0.167164 0.850867 0.907957 0.878485 0.971231 13800 0.003600 0.148950 0.867801 0.910538 0.888656 0.976961 14100 0.003500 0.155691 0.847621 0.907957 0.876752 0.974127 14400 0.003300 0.157672 0.846553 0.911183 0.877680 0.974584 14700 0.002500 0.169965 0.847804 0.917634 0.881338 0.973045 15000 0.003400 0.177099 0.842199 0.912473 0.875929 0.971155 15300 0.006000 0.164151 0.848928 0.911183 0.878954 0.973258 15600 0.002400 0.174305 0.847437 0.906667 0.876052 0.971765 15900 0.004100 0.174561 0.852929 0.907957 0.879583 0.972907 16200 0.002600 0.172626 0.843263 0.907097 0.874016 0.972100 16500 0.002100 0.185302 0.841108 0.907312 0.872957 0.970485 16800 0.002900 0.175638 0.840557 0.909247 0.873554 0.971704 17100 0.001600 0.178750 0.857056 0.906452 0.881062 0.971765 
17400 0.003900 0.188910 0.853619 0.907957 0.879950 0.970835 17700 0.002700 0.180822 0.864699 0.907097 0.885390 0.972283 18000 0.001300 0.179974 0.868150 0.906237 0.886785 0.973060 18300 0.000800 0.188032 0.881022 0.904516 0.892615 0.972572 18600 0.002700 0.183266 0.868601 0.901290 0.884644 0.972298 18900 0.001600 0.180301 0.862041 0.903011 0.882050 0.972344 19200 0.002300 0.183432 0.855370 0.904301 0.879155 0.971109 19500 0.001800 0.183381 0.854501 0.904301 0.878696 0.971186 ```` ### Validation metrics by Named Entity ```` Num examples = 1177 {'JURISPRUDENCIA': {'f1': 0.7016574585635359, 'number': 657, 'precision': 0.6422250316055625, 'recall': 0.7732115677321156}, 'LEGISLACAO': {'f1': 0.8839681133746677, 'number': 571, 'precision': 0.8942652329749103, 'recall': 0.8739054290718039}, 'LOCAL': {'f1': 0.8253968253968254, 'number': 194, 'precision': 0.7368421052631579, 'recall': 0.9381443298969072}, 'ORGANIZACAO': {'f1': 0.8934049079754601, 'number': 1340, 'precision': 0.918769716088328, 'recall': 0.8694029850746269}, 'PESSOA': {'f1': 0.982653539615565, 'number': 1072, 'precision': 0.9877474081055608, 'recall': 0.9776119402985075}, 'TEMPO': {'f1': 0.9657657657657657, 'number': 816, 'precision': 0.9469964664310954, 'recall': 0.9852941176470589}, 'overall_accuracy': 0.9725722644643211, 'overall_f1': 0.8926146010186757, 'overall_precision': 0.8810222036028488, 'overall_recall': 0.9045161290322581} ````
airKlizz/mt5-base-wikinewssum-english
airKlizz
2021-12-29T19:10:05Z
46
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-wikinewssum-english results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-wikinewssum-english This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3040 - Rouge1: 8.9565 - Rouge2: 3.6563 - Rougel: 7.1346 - Rougelsum: 8.3802 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 1010 | 2.4360 | 8.7287 | 3.5817 | 7.0093 | 8.1879 | | No log | 2.0 | 2020 | 2.3922 | 8.7227 | 3.5385 | 6.96 | 8.1887 | | No log | 3.0 | 3030 | 2.3422 | 8.8565 | 3.5772 | 7.0203 | 8.2957 | | No log | 4.0 | 4040 | 2.3288 | 8.89 | 3.645 | 7.0602 | 8.3314 | | 3.1253 | 5.0 | 5050 | 2.3209 | 8.868 | 3.6109 | 7.0537 | 8.299 | | 3.1253 | 6.0 | 6060 | 2.3127 | 8.9488 | 3.6615 | 7.1044 | 8.3785 | | 3.1253 | 7.0 | 7070 | 2.3056 | 8.9366 | 3.6507 | 7.1338 | 8.3615 | | 3.1253 | 8.0 | 8080 | 2.3040 | 8.9565 | 3.6563 | 7.1346 | 8.3802 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.1 - Datasets 1.16.1 - Tokenizers 0.10.3
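A minimal inference sketch with the summarization pipeline (the article text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="airKlizz/mt5-base-wikinewssum-english")

article = "Replace this with the news article text to summarize."
print(summarizer(article, max_length=128, min_length=16)[0]["summary_text"])
```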
patrickvonplaten/wav2vec2-2-bart-base
patrickvonplaten
2021-12-29T15:53:10Z
373
4
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "librispeech_asr", "generated_from_trainer", "asr_seq2esq", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - automatic-speech-recognition - librispeech_asr - generated_from_trainer - asr_seq2esq model-index: - name: wav2vec2-2-bart-base results: [] widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac - example_title: Common Voice sample src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3 --- To rerun this experiment, please clone this directory and run: ```bash python create_model.py ``` followed by ```bash ./run_librispeech.sh ``` <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-2-bart-base This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) and [bart-base](https://huggingface.co/facebook/bart-base) on the librispeech_asr - clean dataset. It achieves the following results on the evaluation set: - Loss: 0.405 - Wer: 0.0728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results See Training Metrics Tab. ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.16.2.dev0 - Tokenizers 0.10.3
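For inference, the checkpoint can be run through the ASR pipeline, which handles speech encoder-decoder models; this is a sketch that assumes a recent `transformers` version and `ffmpeg` available for audio decoding:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-2-bart-base")
print(asr("sample1.flac")["text"])  # placeholder path to a 16kHz speech file, e.g. a LibriSpeech sample
```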
rexxar96/autonlp-sentiment-analysis-456211724
rexxar96
2021-12-29T14:47:09Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "unk", "dataset:rexxar96/autonlp-data-sentiment-analysis", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - rexxar96/autonlp-data-sentiment-analysis co2_eq_emissions: 22.28263989637389 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 456211724 - CO2 Emissions (in grams): 22.28263989637389 ## Validation Metrics - Loss: 0.23710417747497559 - Accuracy: 0.9119100357812234 - Precision: 0.8882611424984307 - Recall: 0.9461718488799733 - AUC: 0.974790366001874 - F1: 0.9163024121741946 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/rexxar96/autonlp-sentiment-analysis-456211724 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("rexxar96/autonlp-sentiment-analysis-456211724", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("rexxar96/autonlp-sentiment-analysis-456211724", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
csukuangfj/test-data-for-optimized-transducer
csukuangfj
2021-12-29T09:31:30Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
See https://colab.research.google.com/drive/14MozS-9jWD3XQ0o-dZ-meqnblgHs70P2?usp=sharing
nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2
nlpconnect
2021-12-29T04:54:26Z
41
0
transformers
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "generated_from_keras_callback", "dpr", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- tags: - generated_from_keras_callback - dpr license: apache-2.0 model-index: - name: dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2 This model (google/bert_uncased_L-2_H-128_A-2) was trained from scratch on the data.retriever.nq-adv-hn-train training data (facebookresearch/DPR). It achieves the following results on the evaluation sets: ## Evaluation data evaluation dataset: facebook-dpr-dev-dataset from the official DPR GitHub |model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100| |---|---|---|---|---|---|---|---|---| |nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-dev dataset|6445|199795|60.53%|68.28%|76.07%|80.98%|91.45%| |nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-dev dataset|6445|199795|65.43%|71.99%|79.03%|83.24%|92.11%| |*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-dev dataset|6445|199795|40.94%|49.27%|59.05%|66.00%|82.00%| evaluation dataset: UKPLab/beir test data, but we have used only the first 200,000 passages. |model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100| |---|---|---|---|---|---|---|---|---| |nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-test dataset|3452|200001|49.68%|59.06%|69.40%|75.75%|89.28%| |nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-test dataset|3452|200001|51.62%|61.09%|70.10%|76.07%|88.70%| |*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-test dataset|3452|200001|32.93%|43.74%|56.95%|66.30%|83.92%| Note: * means we have evaluated on the same eval dataset. ### Usage (HuggingFace Transformers) ```python import numpy as np from transformers import TFAutoModel, AutoTokenizer passage_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2") query_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-12_H-128_A-2") p_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2") q_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-12_H-128_A-2") def get_title_text_combined(passage_dicts): res = [] for p in passage_dicts: res.append(tuple((p['title'], p['text']))) return res processed_passages = get_title_text_combined(passage_dicts) def extracted_passage_embeddings(processed_passages, model_config): passage_inputs = p_tokenizer.batch_encode_plus( processed_passages, add_special_tokens=True, truncation=True, padding="max_length", max_length=model_config.passage_max_seq_len, return_token_type_ids=True ) passage_embeddings = passage_encoder.predict([np.array(passage_inputs['input_ids']), np.array(passage_inputs['attention_mask']), np.array(passage_inputs['token_type_ids'])], batch_size=512, verbose=1) return passage_embeddings passage_embeddings = extracted_passage_embeddings(processed_passages, model_config) def extracted_query_embeddings(queries, model_config): query_inputs = q_tokenizer.batch_encode_plus( queries, add_special_tokens=True, truncation=True, padding="max_length", max_length=model_config.query_max_seq_len, return_token_type_ids=True ) query_embeddings = query_encoder.predict([np.array(query_inputs['input_ids']), np.array(query_inputs['attention_mask']), np.array(query_inputs['token_type_ids'])], batch_size=512, verbose=1) return query_embeddings query_embeddings = extracted_query_embeddings(queries, model_config) ``` ### Training 
hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Tokenizers 0.10.3
huggingtweets/sh44sti
huggingtweets
2021-12-28T23:36:17Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/sh44sti/1640734573813/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1202199127544737793/v_wbcf_Z_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Shasti</div> <div style="text-align: center; font-size: 14px;">@sh44sti</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Shasti. | Data | Shasti | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 32 | | Short tweets | 1087 | | Tweets kept | 2130 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/178u93b4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sh44sti's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2u8a1x7b) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2u8a1x7b/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sh44sti') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
mrm8488/deberta-v3-small-goemotions
mrm8488
2021-12-28T23:12:12Z
13
1
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: deberta-v3-small-goemotions results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-small-goemotions This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5638 - F1: 0.4241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.614 | 1.0 | 3082 | 1.5577 | 0.3663 | | 1.4338 | 2.0 | 6164 | 1.5580 | 0.4084 | | 1.2936 | 3.0 | 9246 | 1.5006 | 0.4179 | | 1.1531 | 4.0 | 12328 | 1.5348 | 0.4276 | | 1.0536 | 5.0 | 15410 | 1.5638 | 0.4241 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
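A minimal inference sketch with the text-classification pipeline (the example sentence is illustrative; by default the pipeline returns the single top-scoring label):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mrm8488/deberta-v3-small-goemotions")
print(classifier("I can't believe how well this turned out, thank you!"))
```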
espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
espnet
2021-12-28T18:57:57Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:slue-voxceleb", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - slue-voxceleb license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best` This model was trained by Siddhant using slue-voxceleb recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 17758ad804fd7c4b6f88ef5601f475a241dc4605 pip install -e . cd egs2/slue-voxceleb/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Tue Dec 28 12:28:28 EST 2021` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.3a2` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `6bf3c2a4f138d35331634d2e879bbc5c32a5266e` - Commit date: `Mon Dec 22 15:41:32 EST 2021` ## Using Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with intent - ASR config: [conf/train_asr.yaml](conf/tuning/train_asr_conformer.yaml) - token_type: word |dataset|Snt|Intent Classification Accuracy (%)|Intent Classification Macro F1 (%)| |---|---|---|---| |inference_asr_model_valid.acc.ave_10best/devel|955|80.2|29.7| ### Detailed Classification Report |dataset|Label|Snt|Prec|Recall|F1| |---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_10best/devel|Neutral|784|85|93|89| |inference_asr_model_valid.acc.ave_10best/devel|Positive|167|40|24|30| |inference_asr_model_valid.acc.ave_10best/devel|Negative|3|0|0|0| |inference_asr_model_valid.acc.ave_10best/devel|Mixed|1|0|0|0| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_raw_en_word ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_word/train/speech_shape - exp/asr_stats_raw_en_word/train/text_shape.word valid_shape_file: - exp/asr_stats_raw_en_word/valid/speech_shape - exp/asr_stats_raw_en_word/valid/text_shape.word batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - 
dump/raw/train/wav.scp - speech - sound - - dump/raw/train/text - text - text valid_data_path_and_name_and_type: - - dump/raw/devel/wav.scp - speech - sound - - dump/raw/devel/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0002 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - ▁i - s - ▁and - '''' - ▁the - ▁a - ▁to - ▁it - Neutral - ▁you - ▁that - ▁of - t - ing - ▁in - ▁was - ed - ▁uh - ▁know - e - m - ▁he - y - er - ▁so - ▁we - re - a - o - d - ▁um - i - ▁s - c - ▁like - n - ▁is - ▁be - ▁f - ▁but - ▁c - Positive - en - l - ve - ▁just - ▁m - st - ▁they - le - an - ▁on - ▁p - u - ▁my - ar - p - ▁this - ▁for - ▁b - ▁think - in - ▁with - g - or - ▁h - r - ly - w - ▁me - ▁d - ▁e - ▁have - ▁she - it - ▁t - ▁what - b - ▁st - al - es - ▁there - ▁really - ic - ▁g - ▁as - ▁w - ▁l - ▁do - ll - v - ▁all - at - 'on' - as - ▁about - h - ▁not - ▁re - ▁o - ▁at - k - ▁don - ▁had - ▁when - ou - ent - is - ra - ▁who - ri - ▁go - se - f - ▁out - ▁get - ▁an - ▁people - nd - ▁kind - ▁very - ce - ▁because - ▁are - ion - ▁some - et - ▁can - ge - ▁or - me - ▁up - ▁n - ▁if - ▁no - ▁one - ▁were - ct - ▁mean - ad - ▁time - ▁ch - ▁then - ro - ▁ex - ▁mo - ▁her - ▁every - ▁would - ▁co - ▁work - ir - ▁sh - ay - ▁se - ol - ver - ▁su - ▁got - ▁k - th - ▁love - ▁from - ld - ation - ▁him - ▁said - ▁how - ▁well - ▁lot - ▁show - ch - ard - ie - ▁pro - ▁de - ▁gonna - ▁bo - ▁say - ▁see - ▁li - one - ▁his - ther - ▁been - ur - ▁any - ▁great - ▁ - ▁yeah - pe - ▁which - ▁come - ▁them - ot - ▁play - ab - ite - ▁way - ally - id - gh - ▁r - ▁sc - our - x - mp - ers - ong - ate - ▁your - ss - ast - ▁did - ▁sort - ▁am - am - and - ▁make - ant - ▁thing - ▁ha - ▁te - ▁has - ess - ▁v - ▁something - ▁back - ▁where - ▁things - red - ▁al - ut - el - ight - ment - un - ive - ▁th - ▁le - il - ▁j - op - ▁more - ▁ro - ill - ▁fi - ies - ▁much - ck - ▁ne - ▁wh - ▁always - ▁act - ine - pp - z - ▁now - ▁con - thing - ▁us - body - ▁want - ▁other - ort - ice - ▁doing - ▁sa - ▁feel - ow - ▁int - ne - ▁these - ▁could - ▁good - ▁cause - Negative - ▁actually - ▁wr - ▁little - ain - ▁being - ▁look - ▁into - ere - ul - ▁our - ▁guy - ▁first - ud - ▁by - ▁fun - ▁qu - ▁didn - us - ity - ▁jo - od - ▁u - ▁part - ▁off - ▁pre - ▁right - ▁film - ▁start - ok - ▁two - ving - ▁never - pt - um - te - ▁movie - ▁going - ff - nder - ke - ▁ag - ▁en - ▁try - ful - im - ays - ▁life - ▁different - ach - are - ▁di - ist - ▁oh - au - ▁po - nt - ▁com - all - ▁lo - om - ▁real - ▁y - ame - ▁went - ry - ber - ▁even - ci - ▁ho - ▁years - ▁their - ▁happen - ure - self - per - ▁pl - ▁those - ble - 'no' - ▁day - ▁take - ▁does - ien - ▁br - be - wn - ▁thought - ▁fe - ght - ▁tr - ▁story - ty - ▁down - ous - ish - ▁wom - ▁wanna - ▁put - ▁through - ide - ▁ab - ▁new - ▁also - ▁big - ▁call - ▁around - ▁character - ▁read - iz - ▁came - act - ily - ath - ag - ree - ▁per - ▁will - ▁mu - ▁talk - ▁over - ▁friend - atch - ▁bl - ade - ▁world - ▁many - ▁sp - sic - ▁cl - ▁bit - ▁man - ace - ▁person - ft - ip - ▁than - ▁wanted - ▁may - ven - ick - ious - ▁mar - ▁before - ▁rel - j - ting - ▁set - sh - ep - ▁un - ue - ▁aw - ▁find - ▁kid - tain - ▁such - ter - ▁end - ▁tw - ind - aking - ▁after - ▁fam - ars - ig - ore - ▁bec - ak - art - reat - ust - rou - ack - ▁ye - ould - ime - itt - ▁gu - qu - ose - fe - ▁wor - lf - alk - ▁charact - ▁mov - out - ich - ▁happ - ▁thou - ith - <mixed> - rom - ake - ▁diff - ▁char - na - round - ory - ink - ually - ▁gon - ▁pe - right - ody 
- ah - rie - riend - now - so - ause - ▁fil - ▁pers - fore - very - ▁differe - rough - q - ▁fir - anna - ways - ':' - '&' - fter - <sos/eos> transcript_token_list: null init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 postdecoder: null postdecoder_conf: {} required: - output_dir - token_list version: 0.10.3a2 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
maher13/English_ASR
maher13
2021-12-28T17:20:22Z
23
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: English_ASR results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # English_ASR This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4971 - Wer: 0.3397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3432 | 4.0 | 500 | 1.1711 | 0.7767 | | 0.5691 | 8.0 | 1000 | 0.4613 | 0.4357 | | 0.2182 | 12.0 | 1500 | 0.4715 | 0.3853 | | 0.1267 | 16.0 | 2000 | 0.4307 | 0.3607 | | 0.0846 | 20.0 | 2500 | 0.4971 | 0.3537 | | 0.0608 | 24.0 | 3000 | 0.4712 | 0.3419 | | 0.0457 | 28.0 | 3500 | 0.4971 | 0.3397 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.13.3 - Tokenizers 0.10.3
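A minimal transcription sketch (the audio path is a placeholder and should point to 16kHz mono speech):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="maher13/English_ASR")
print(asr("speech.wav")["text"])  # placeholder path; replace with your own recording
```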
facebook/wav2vec2-large-lv60
facebook
2021-12-28T12:45:09Z
10,076
8
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "pretraining", "speech", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # Wav2Vec2-Large-LV60 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The large model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
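Before any fine-tuning, the pretrained encoder can already be used to extract frame-level representations; a minimal sketch (the zero waveform stands in for real 16kHz mono speech):

```python
import numpy as np
from transformers import AutoFeatureExtractor, Wav2Vec2Model

extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-lv60")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-lv60")

speech = np.zeros(16000, dtype=np.float32)  # 1 second of silence; replace with real 16kHz audio
inputs = extractor(speech, sampling_rate=16000, return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state  # (batch, frames, 1024)
```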
facebook/wav2vec2-base
facebook
2021-12-28T12:44:31Z
816,836
93
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # Wav2Vec2-Base [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
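A minimal sketch of the fine-tuning setup described above, i.e. attaching a randomly initialised CTC head to the pretrained encoder; the `vocab_size` value is an assumption and must match the tokenizer built from your labeled text data:

```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",
    vocab_size=32,               # assumption: must equal the size of your character vocabulary
    ctc_loss_reduction="mean",
)
model.freeze_feature_extractor()  # keep the convolutional feature encoder frozen during fine-tuning
# then train on labeled (audio, transcription) pairs, e.g. with the Trainer as in the linked blog post
```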
huggingtweets/computerdefeat2
huggingtweets
2021-12-28T07:00:15Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/computerdefeat2/1640674811300/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1457496446886981633/eBIe-Bef_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Merry Krismas🎄</div> <div style="text-align: center; font-size: 14px;">@computerdefeat2</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Merry Krismas🎄. | Data | Merry Krismas🎄 | | --- | --- | | Tweets downloaded | 3238 | | Retweets | 827 | | Short tweets | 671 | | Tweets kept | 1740 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ut558an/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @computerdefeat2's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36nzt8vn) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36nzt8vn/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/computerdefeat2') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/sunnekochan
huggingtweets
2021-12-28T06:52:45Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/sunnekochan/1640674359998/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1475670958170157064/ykhcM2Wb_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Sun 🌻</div> <div style="text-align: center; font-size: 14px;">@sunnekochan</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Sun 🌻. | Data | Sun 🌻 | | --- | --- | | Tweets downloaded | 3243 | | Retweets | 706 | | Short tweets | 637 | | Tweets kept | 1900 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11t8eba2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sunnekochan's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lhat7qg6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lhat7qg6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sunnekochan') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ngrossman81
huggingtweets
2021-12-28T04:15:54Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/ngrossman81/1640664926929/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/805525876808892417/nSCRZS58_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nicholas Grossman</div> <div style="text-align: center; font-size: 14px;">@ngrossman81</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nicholas Grossman. | Data | Nicholas Grossman | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 272 | | Short tweets | 113 | | Tweets kept | 2864 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3gkanovn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ngrossman81's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18u9hhz0) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18u9hhz0/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ngrossman81') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
caioamb/bert-base-uncased-finetuned-md
caioamb
2021-12-28T01:22:50Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-md results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-md This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3329 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2415 | 1.0 | 1044 | 0.2084 | | 0.1244 | 2.0 | 2088 | 0.2903 | | 0.0427 | 3.0 | 3132 | 0.3329 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
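A minimal inference sketch with the text-classification pipeline (the example sentence is illustrative; the label names depend on the fine-tuning dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="caioamb/bert-base-uncased-finetuned-md")
print(classifier("Example sentence to classify."))
```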
MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
MMG
2021-12-27T17:33:12Z
24
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "es", "dataset:sqac", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:04Z
---
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
  results: []
language:
- es
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac

This model is a fine-tuned version of [ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es](https://huggingface.co/ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- Exact match: 65.55793991416309
- F1: 82.72322701572416

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Framework versions

- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
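
A minimal usage sketch with the question-answering pipeline. The Spanish question/context pair below is illustrative and is not taken from the SQAC dataset:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac",
)

# Illustrative Spanish example (not drawn from SQAC).
result = qa(
    question="¿Dónde vivo?",
    context="Me llamo Sara y vivo en Lugo.",
)
print(result["answer"], result["score"])
```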
flexudy/cheapity3
flexudy
2021-12-27T13:06:27Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Cheapity3 🐷

GPT-like T5 model trained to generate text in multiple languages.

## Motivation

- GPT models are expensive to run.
- GPT models are monolingual.

## Solution

- Maybe, Small Models aren't Terrible (*SMarT*)
- Plus, they are cheaper to run.

I fine-tuned T5 on multiple languages (🇬🇧 English, 🇩🇪 German, 🇫🇷 French) and multiple academic text snippets from various domains such as tech, law, finance and science to generate text, just like GPT models do.

## Usage - [NLPlayStore](https://github.com/flexudy/NLPlayStore) 👈

```python
from store.service_management import ServiceManager

service = ServiceManager().get_service("cheapity3")
service.install()
service = service.launch()

input_text = "The mechanical engineering field requires ... "

generated_texts = service.play(input_text, 15)  # A list of generated texts
```

## Usage - Hugging Face Transformers 🤗

- Provide some text, e.g. `"Italy, officially the Italian Republic is a country consisting of"`
- Tell Cheapity3 how many words you want to generate, e.g. `15` -- 😃 Yes, you can control the length.
- Cheapity3 reads your text and generates a continuation containing approximately 15 words.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("flexudy/cheapity3")
model = AutoModelWithLMHead.from_pretrained("flexudy/cheapity3")

input_text = """The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.
{ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ }"""  # 15 words

inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512)

input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]

outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    max_length=128,
    do_sample=True,
    early_stopping=True,
    num_return_sequences=4,
    repetition_penalty=2.5
)

for i in range(4):
    print(tokenizer.decode(outputs[i], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```

**INPUT: The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.**

```
> Cheapity3 continues with beam search:
... The field of mechanical engineering is a broad field that includes many core areas of engineering.

> Cheapity3 continues with sampling and top_k=50:
... Developing the knowledge base for these core areas will enable engineers to build their capabilities rapidly and efficiently. ...
... The field of mechanics offers a variety and broad range for applications throughout the engineering/technological fields. ...
... Mechanics generally is not understood by students. While they can be employed in the field, mechanical engineering ...
... Introduction to mechanical engineering and core fields including chemical products, materials science, structural analysis, and geomatics ...
```

## Pretty decent, right?

Hence, whenever you feel like GPT-3 is too expensive, Cheapity3 comes to the rescue 🤗.

## Model Training FYI

- T5-base model
- Trained on ONLY 1M sentences from English, French and German text
- Mostly text from Wikipedia, arxiv and QA datasets
- Learning rate: 0.00003
- 2 epochs
- Max input: 512 tokens
- Max output: 128 tokens
SEISHIN/distilbert-base-uncased-finetuned-ner
SEISHIN
2021-12-27T07:53:05Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9289272666888077
    - name: Recall
      type: recall
      value: 0.9386956035350711
    - name: F1
      type: f1
      value: 0.933785889160917
    - name: Accuracy
      type: accuracy
      value: 0.9842565968195466
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0605
- Precision: 0.9289
- Recall: 0.9387
- F1: 0.9338
- Accuracy: 0.9843

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2388        | 1.0   | 878  | 0.0671          | 0.9162    | 0.9211 | 0.9187 | 0.9813   |
| 0.0504        | 2.0   | 1756 | 0.0602          | 0.9225    | 0.9366 | 0.9295 | 0.9834   |
| 0.0299        | 3.0   | 2634 | 0.0605          | 0.9289    | 0.9387 | 0.9338 | 0.9843   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
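
A minimal usage sketch with the token-classification pipeline. The aggregation strategy and example sentence are illustrative choices, not part of the training setup above:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SEISHIN/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)

# Illustrative sentence; the model was fine-tuned on CoNLL-2003 entity types (PER, ORG, LOC, MISC).
print(ner("Hugging Face is based in New York City."))
```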
xkang/distilbert-base-uncased-finetuned-imdb
xkang
2021-12-27T07:30:09Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4717

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7096        | 1.0   | 157  | 2.4920          |
| 2.5741        | 2.0   | 314  | 2.4237          |
| 2.5386        | 3.0   | 471  | 2.4355          |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.0
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
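
A minimal usage sketch with the fill-mask pipeline. The movie-review style prompt is illustrative:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xkang/distilbert-base-uncased-finetuned-imdb")

# Illustrative prompt in the style of the IMDB reviews the model was adapted to.
for prediction in fill_mask("This is a great [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```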
lakahaga/novel_reading_tts
lakahaga
2021-12-26T17:45:00Z
0
4
espnet
[ "espnet", "audio", "text-to-speech", "ko", "dataset:novelspeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
text-to-speech
2022-03-02T23:29:05Z
--- tags: - espnet - audio - text-to-speech language: ko datasets: - novelspeech license: cc-by-4.0 --- ## ESPnet2 TTS model ### `lakahaga/novel_reading_tts` This model was trained by lakahaga using novelspeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 9827dfe37f69e8e55f902dc4e340de5108596311 pip install -e . cd egs2/novelspeech/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model lakahaga/novel_reading_tts ``` ## TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_conformer_fastspeech2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_tacotron_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 34177 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 1000 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 10 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 1000 batch_size: 20 valid_batch_size: null batch_bins: 25600000 valid_batch_bins: null train_shape_file: - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/text_shape.phn - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/speech_shape valid_shape_file: - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/text_shape.phn - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr_no_dev/text - text - text - - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/tr_no_dev/durations - durations - text_int - - dump/raw/tr_no_dev/wav.scp - speech - sound - - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/pitch.scp - pitch - npy - - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/energy.scp - energy - npy - - dump/raw/tr_no_dev/utt2sid - sids - text_int valid_data_path_and_name_and_type: - - dump/raw/dev/text - text - text - - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/dev/durations - durations - text_int - - dump/raw/dev/wav.scp - speech - sound - - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/pitch.scp - pitch - 
npy - - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/energy.scp - energy - npy - - dump/raw/dev/utt2sid - sids - text_int allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: model_size: 384 warmup_steps: 4000 token_list: - <blank> - <unk> - '=' - _ - A - Y - N - O - E - U - L - G - S - D - M - J - H - B - ZERO - TWO - C - . - Q - ',' - P - T - SEVEN - X - W - THREE - ONE - NINE - K - EIGHT - '@' - '!' - Z - '?' - F - SIX - FOUR - '#' - $ - + - '%' - FIVE - '~' - AND - '*' - '...' - '' - ^ - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: tacotron g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/feats_stats.npz tts: fastspeech2 tts_conf: adim: 384 aheads: 2 elayers: 4 eunits: 1536 dlayers: 4 dunits: 1536 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 postnet_layers: 5 postnet_filts: 5 postnet_chans: 256 use_masking: true encoder_normalize_before: true decoder_normalize_before: true reduction_factor: 1 encoder_type: conformer decoder_type: conformer conformer_pos_enc_layer_type: rel_pos conformer_self_attn_layer_type: rel_selfattn conformer_activation_type: swish use_macaron_style_in_conformer: true use_cnn_in_conformer: true conformer_enc_kernel_size: 7 conformer_dec_kernel_size: 31 init_type: xavier_uniform transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false pitch_extract: dio pitch_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 reduction_factor: 1 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/pitch_stats.npz energy_extract: energy energy_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null reduction_factor: 1 energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/energy_stats.npz required: - output_dir - token_list version: 0.10.5a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of 
Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
wilsontam/gpt2-dstc9
wilsontam
2021-12-26T14:02:23Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "dstc9", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: "en"
tags:
- dstc9
widget:
- text: "Yes, I'm going to be in Chinatown, San Francisco and am looking"
- text: "Can you find me one that is in the"
---

This GPT2 model is trained using DSTC9 data for dialogue modeling purposes.

Data link: https://github.com/alexa/alexa-with-dstc9-track1-dataset

Credit: Jia-Chen Jason Gu, Wilson Tam
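
A minimal usage sketch with the text-generation pipeline, using one of the widget prompts above; the sampling settings are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="wilsontam/gpt2-dstc9")

# Continue a dialogue-style prompt; sampling parameters here are illustrative choices.
outputs = generator(
    "Yes, I'm going to be in Chinatown, San Francisco and am looking",
    max_length=50,
    do_sample=True,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
```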
wilsontam/bert-base-uncased-dstc9
wilsontam
2021-12-26T14:00:21Z
4
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "dstc10", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: "en"
tags:
- dstc10
widget:
- text: "Can you accommodate large [MASK] ?"
---

# Goal

This BERT model is trained using DSTC9 training + validation data for dialogue modeling purposes.

Data link: https://github.com/alexa/alexa-with-dstc9-track1-dataset

Credit: Shuhan Yuan, Wilson Tam
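
A minimal usage sketch with the fill-mask pipeline, reusing the widget prompt from this card:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="wilsontam/bert-base-uncased-dstc9")

# The prompt below is the widget example given in this card.
for prediction in fill_mask("Can you accommodate large [MASK] ?"):
    print(prediction["token_str"], round(prediction["score"], 3))
```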
mohammadtari/arxivinterface
mohammadtari
2021-12-26T02:18:42Z
4
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
tags:
- generated_from_keras_callback
model-index:
- name: t5_small_summarization_model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# t5_small_summarization_model

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
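
A minimal usage sketch, assuming the repository contains TensorFlow weights and a compatible T5 tokenizer (neither is confirmed by the card above), with the summarization pipeline:

```python
from transformers import pipeline

# framework="tf" because the repository was exported from Keras/TensorFlow.
summarizer = pipeline(
    "summarization",
    model="mohammadtari/arxivinterface",
    framework="tf",
)

# Illustrative abstract-style input; the training data is not documented in this card.
text = "We present a simple method for summarizing scientific abstracts ..."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```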
Ayham/xlmroberta_large_gpt2_summarization_cnndm
Ayham
2021-12-26T00:06:35Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: xlmroberta_large_gpt2_summarization_cnndm
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlmroberta_large_gpt2_summarization_cnndm

This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
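
A minimal usage sketch, assuming the repository stores a standard `EncoderDecoderModel` (XLM-RoBERTa encoder with a GPT-2 decoder, as the model name suggests) together with a usable tokenizer; the card does not confirm either, so the snippet is illustrative:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "Ayham/xlmroberta_large_gpt2_summarization_cnndm"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes a tokenizer was pushed with the weights
model = EncoderDecoderModel.from_pretrained(model_id)

article = "A CNN/DailyMail-style news article goes here ..."  # illustrative input
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=128,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```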
Andry/1111
Andry
2021-12-25T20:04:09Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
C:\Users\andry\Desktop\Выжигание 24-12-2021.jpg
vanadhi/bert-base-uncased-fiqa-flm-sq-flit
vanadhi
2021-12-25T18:44:16Z
13
1
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-fiqa-flm-sq-flit
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-fiqa-flm-sq-flit

This model is a fine-tuned version of bert-base-uncased on a custom dataset created for question answering in the financial domain.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. The model was further processed as below for the specific downstream QA task:

1. Pretrained for domain adaptation with a masked language modeling (MLM) objective on the FIQA challenge opinion-based QA data, available here: https://drive.google.com/file/d/1BlWaV-qVPfpGyJoWQJU9bXQgWCATgxEP/view
2. Pretrained with the MLM objective on a custom generated dataset for banking and finance.
3. Fine-tuned on the SQuAD v2 dataset for QA task adaptation.
4. Fine-tuned on a custom labeled dataset in SQuAD format for domain and task adaptation.

## Intended uses & limitations

The model is intended to be used for a custom question answering system in the BFSI domain.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2.0

### Training results

### Framework versions

- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
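
A minimal usage sketch with the question-answering pipeline. The finance-flavored question/context pair is illustrative and is not drawn from the custom training data:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="vanadhi/bert-base-uncased-fiqa-flm-sq-flit")

# Illustrative BFSI-style example; not taken from the (private) training dataset.
result = qa(
    question="What is the interest rate on the savings account?",
    context="The bank offers a savings account with an interest rate of 2.5 percent per annum.",
)
print(result["answer"], result["score"])
```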