Each record below lists these fields in order:

| Column | Type | Range / values |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-05 06:27:31 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 468 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-05 06:26:36 |
| card | string | length 11 to 1.01M |
huggingtweets/nathanmarz
huggingtweets
2022-01-15T19:05:04Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/nathanmarz/1642273500624/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1068577679367127041/w7GXbl_e_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nathan Marz</div> <div style="text-align: center; font-size: 14px;">@nathanmarz</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nathan Marz. | Data | Nathan Marz | | --- | --- | | Tweets downloaded | 3188 | | Retweets | 459 | | Short tweets | 239 | | Tweets kept | 2490 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1zmjgvn2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nathanmarz's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rr35qq7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rr35qq7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/nathanmarz') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
pediberto/autonlp-testing-504313966
pediberto
2022-01-15T15:02:13Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "unk", "dataset:pediberto/autonlp-data-testing", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - pediberto/autonlp-data-testing co2_eq_emissions: 12.994518654810642 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 504313966 - CO2 Emissions (in grams): 12.994518654810642 ## Validation Metrics - Loss: 0.19673296809196472 - Accuracy: 0.9398032027783138 - Precision: 0.9133115705476967 - Recall: 0.9718255499807025 - AUC: 0.985316873222122 - F1: 0.9416604338070308 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/pediberto/autonlp-testing-504313966 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("pediberto/autonlp-testing-504313966", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("pediberto/autonlp-testing-504313966", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Ifromspace/GRIEFSOFT
Ifromspace
2022-01-15T13:06:43Z
9
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "PyTorch", "Transformers", "4ulan", "ru", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: - ru tags: - PyTorch - Transformers - 4ulan --- **Fork of https://huggingface.co/sberbank-ai/rugpt3large_based_on_gpt2** A fun toy for the Discord server))00)) ROADMAP: - Collect a small dataset from "popadantsy" (accidental-traveler fantasy) novels. <------------------------- Currently here. - Fine-tune on it. - Drop it into the Discord server. https://discord.gg/HpeadKH
Huertas97/es_roberta_base_bne_leetspeak_ner
Huertas97
2022-01-15T11:55:46Z
4
1
spacy
[ "spacy", "token-classification", "es", "license:apache-2.0", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - spacy - token-classification language: - es license: apache-2.0 widget: - text: "La C0v!d es un 3ng@ño de los G0b!3rno$" example_title: "Word camouflage detection" model-index: - name: es_roberta_base_bne_leetspeak_ner results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.8979055626 - name: NER Recall type: recall value: 0.9393701406 - name: NER F Score type: f_score value: 0.9181699547 --- | Feature | Description | | --- | --- | | **Name** | `es_roberta_base_bne_leetspeak_ner` | | **Version** | `0.0.0` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model a transformer-based masked language model for the Spanish language pre-trained with a total of 570GB of clean and deduplicated text compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) <br> [LeetSpeak-NER](https://huggingface.co/spaces/Huertas97/LeetSpeak-NER) app where this model is in production for countering information disorders| | **License** | Apache 2.0 | | **Author** | [Álvaro Huertas García](https://www.linkedin.com/in/alvaro-huertas-garcia/) at [AI+DA](http://aida.etsisi.upm.es/) | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `INV_CAMO`, `LEETSPEAK`, `MIX`, `PUNCT_CAMO` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 91.82 | | `ENTS_P` | 89.79 | | `ENTS_R` | 93.94 | | `TRANSFORMER_LOSS` | 166484.92 | | `NER_LOSS` | 318457.35 |
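The card above gives the pipeline components and metrics but no loading snippet. A minimal usage sketch, assuming the packaged pipeline has been pip-installed locally (for example from the wheel published in this repository):

```python
import spacy

# Load the installed pipeline package; the name matches the model card.
nlp = spacy.load("es_roberta_base_bne_leetspeak_ner")

doc = nlp("La C0v!d es un 3ng@ño de los G0b!3rno$")

# Print each camouflaged span with its label
# (INV_CAMO, LEETSPEAK, MIX or PUNCT_CAMO).
for ent in doc.ents:
    print(ent.text, ent.label_)
```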
husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3
husnu
2022-01-15T07:25:53Z
25
2
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3 This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-cased](https://huggingface.co/dbmdz/bert-base-turkish-128k-cased) on the turkish squad dataset. It achieves the following results on the evaluation set: - Loss: 1.4724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3911 | 1.0 | 1281 | 1.4900 | | 0.9058 | 2.0 | 2562 | 1.3471 | | 0.6747 | 3.0 | 3843 | 1.4724 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
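The card records training details only; below is a minimal inference sketch with the generic `transformers` question-answering pipeline. The Turkish question/context pair is illustrative, not taken from the training data.

```python
from transformers import pipeline

# Question-answering pipeline backed by the fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3",
)

result = qa(
    question="Boğaziçi Köprüsü hangi şehirdedir?",
    context="Boğaziçi Köprüsü, İstanbul'da Asya ve Avrupa yakalarını birbirine bağlayan bir asma köprüdür.",
)
print(result["answer"], result["score"])
```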
huggingtweets/autosport-formulaoneworld-speedcafe
huggingtweets
2022-01-15T03:24:30Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/autosport-formulaoneworld-speedcafe/1642217065882/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1192531689060200448/S9KoiehJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1294927107605356544/CVXTlp9y_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1468895545007775746/NIWzzmye_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Speedcafe.com & Formula One World & Autosport</div> <div style="text-align: center; font-size: 14px;">@autosport-formulaoneworld-speedcafe</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Speedcafe.com & Formula One World & Autosport. | Data | Speedcafe.com | Formula One World | Autosport | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3247 | 3250 | | Retweets | 0 | 2778 | 52 | | Short tweets | 3 | 178 | 15 | | Tweets kept | 3247 | 291 | 3183 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kcn72bl0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @autosport-formulaoneworld-speedcafe's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fq703qs) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fq703qs/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/autosport-formulaoneworld-speedcafe') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/f1
huggingtweets
2022-01-15T02:57:32Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/f1/1642215447713/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1385670642327040001/Z5LaCXJI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Formula 1</div> <div style="text-align: center; font-size: 14px;">@f1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Formula 1. | Data | Formula 1 | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 157 | | Short tweets | 35 | | Tweets kept | 3058 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tsp2kk9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @f1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vu2nlz5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vu2nlz5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/f1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
minimaxir/ai-generated-pokemon-rudalle
minimaxir
2022-01-15T01:41:47Z
0
15
null
[ "pytorch", "rudalle", "pokemon", "image-generation", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: - en tags: - rudalle - pokemon - image-generation license: mit --- # ai-generated-pokemon-rudalle ![](example.png) A finetuned [ruDALL-E](https://github.com/sberbank-ai/ru-dalle) on Pokémon using the finetuning example Colab Notebook [linked in that repo](https://colab.research.google.com/drive/1Tb7J4PvvegWOybPfUubl5O7m5I24CBg5?usp=sharing). This model was used to create Pokémon that resulted in AI-Generated Pokémon that went viral ([10k+ retweets](https://twitter.com/minimaxir/status/1470913487085785089) on Twitter + [30k+ upvotes](https://www.reddit.com/r/pokemon/comments/rgmyxp/i_trained_an_ai_on_all_the_official_pokemon/) on Reddit) The model used above was trained for 12 epochs (4.5 hours on a P100), at a max learning rate of `1e-5`. ## Demo You can play with this model using [this Colab Notebook](https://colab.research.google.com/drive/1A3t2gQofQGeXo5z1BAr1zqYaqVg3czKd?usp=sharing). ## License MIT
husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6
husnu
2022-01-14T20:57:15Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.3828 | 1.0 | 1845 | 1.7946 | | 1.5827 | 2.0 | 3690 | 1.4123 | | 1.404 | 3.0 | 5535 | 1.3142 | | 1.346 | 4.0 | 7380 | 1.2819 | | 1.2871 | 5.0 | 9225 | 1.2630 | | 1.2538 | 6.0 | 11070 | 1.2578 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
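As with the other auto-generated cards in this series, no usage snippet is given; here is a hedged sketch using the plain `AutoModelForQuestionAnswering` classes instead of the pipeline helper (the SQuAD-style example text is illustrative).

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Where do water droplets collide with ice crystals to form precipitation?"
context = (
    "Precipitation forms as smaller droplets coalesce via collision with other "
    "rain drops or ice crystals within a cloud."
)

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the span between the most likely start and end token positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```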
erwanlc/t5-coktails_recipe-small
erwanlc
2022-01-14T14:32:10Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-coktails_recipe-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-coktails_recipe-small This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
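The card does not document the dataset or the expected input format, so the following is only a rough usage sketch with the standard text2text-generation pipeline; the prompt format is a guess.

```python
from transformers import pipeline

# T5 text2text pipeline over the fine-tuned checkpoint; the input format below
# (a plain ingredient list) is an assumption, not documented in the card.
generator = pipeline("text2text-generation", model="erwanlc/t5-coktails_recipe-small")

print(generator("ingredients: gin, lime juice, sugar syrup", max_length=128))
```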
vachonni/wav2vec2-large-xls-r-300m-da-colab
vachonni
2022-01-14T12:14:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-da-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-da-colab This model is a fine-tuned version of [Alvenir/wav2vec2-base-da](https://huggingface.co/Alvenir/wav2vec2-base-da) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
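A minimal transcription sketch with the `transformers` ASR pipeline, assuming the repository ships a full processor and that the input is 16 kHz mono Danish speech; the file name is a placeholder.

```python
from transformers import pipeline

# Automatic-speech-recognition pipeline over the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="vachonni/wav2vec2-large-xls-r-300m-da-colab",
)

# Placeholder path to a 16 kHz mono Danish recording.
print(asr("danish_sample_16khz.wav")["text"])
```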
lewtun/distilbert-base-uncased-finetuned-emotion-test-01
lewtun
2022-01-14T10:29:26Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion-test-01 results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.39 - name: F1 type: f1 value: 0.21884892086330932 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-test-01 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 1.7510 - Accuracy: 0.39 - F1: 0.2188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 2 | 1.7634 | 0.39 | 0.2188 | | No log | 2.0 | 4 | 1.7510 | 0.39 | 0.2188 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
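A minimal usage sketch with the standard text-classification pipeline; the label names come from the checkpoint's config (the emotion dataset uses sadness, joy, love, anger, fear and surprise).

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="lewtun/distilbert-base-uncased-finetuned-emotion-test-01",
)

print(classifier("I am thrilled that this finally works!"))
```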
LACAI/DialoGPT-large-PFG
LACAI
2022-01-14T05:18:30Z
3
1
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
Base model: [microsoft/DialoGPT-large](https://huggingface.co/microsoft/DialoGPT-large) Fine-tuned for dialogue response generation on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019). Three additional special tokens were added during the fine-tuning process: - <|pad|> padding token - <|user|> speaker control token to prompt user responses - <|system|> speaker control token to prompt system responses The following dialogues were excluded: - Those with donation amounts outside the task range of [$0, $2]. - Those where a donation of 0 was made at the end of the task but a non-zero amount was pledged in the dialogue. - Those with more than 800 words. Stats: - Training set: 519 dialogues - Validation set: 58 dialogues - ~20 utterances per dialogue
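A hedged generation sketch that prompts a system (persuader) response with the speaker control tokens listed above; the exact prompt formatting expected by the checkpoint is an assumption based on this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LACAI/DialoGPT-large-PFG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed prompt format: a user turn, then the system control token to ask the
# model for the next system response.
prompt = "<|user|>Hi, can you tell me more about this charity?<|system|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output_ids = model.generate(
    input_ids,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.pad_token_id,  # the card says a <|pad|> token was added
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```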
jiobiala24/wav2vec2-base-checkpoint-3
jiobiala24
2022-01-14T02:59:48Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-3 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-2](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-2) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.7007 - Wer: 0.5514 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.358 | 14.8 | 400 | 1.4841 | 0.5338 | | 0.1296 | 29.62 | 800 | 1.7007 | 0.5514 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
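A minimal transcription sketch using the explicit `Wav2Vec2Processor`/`Wav2Vec2ForCTC` classes; the audio path is a placeholder and the input is assumed to be 16 kHz mono English speech.

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "jiobiala24/wav2vec2-base-checkpoint-3"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder path to a 16 kHz mono recording.
speech, sampling_rate = sf.read("english_sample_16khz.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```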
husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-3
husnu
2022-01-14T00:17:31Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-3 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.6088 | 1.0 | 5533 | 1.4429 | | 1.3928 | 2.0 | 11066 | 1.3183 | | 1.3059 | 3.0 | 16599 | 1.2864 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Suzana/new-york-tokyo-london
Suzana
2022-01-13T17:53:58Z
70
5
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: new-york-tokyo-london results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9104477763175964 --- # new-york-tokyo-london Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### London ![London](images/London.jpg) #### New York ![New York](images/New_York.jpg) #### Tokyo ![Tokyo](images/Tokyo.jpg)
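A minimal usage sketch with the standard image-classification pipeline; the image path is a placeholder.

```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned ViT checkpoint; it returns
# scores for the three city classes (London, New York, Tokyo).
classifier = pipeline("image-classification", model="Suzana/new-york-tokyo-london")

print(classifier("some_city_photo.jpg"))
```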
flax-community/pino-bigbird-roberta-base
flax-community
2022-01-13T15:29:26Z
34
2
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "big_bird", "fill-mask", "nl", "dataset:mC4", "dataset:Dutch_news", "arxiv:2007.14062", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: nl datasets: - mC4 - Dutch_news --- # Pino (Dutch BigBird) base model Created by [Dat Nguyen](https://www.linkedin.com/in/dat-nguyen-49a641138/) & [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) during the [Hugging Face community week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) (Not finished yet) BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. It is a pretrained model on Dutch language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird). ## Model description BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BigBirdModel # by default its in `block_sparse` mode with num_random_blocks=3, block_size=64 model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base") # you can change `attention_type` to full attention like this: model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base", attention_type="original_full") # you can change `block_size` & `num_random_blocks` like this: model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base", block_size=16, num_random_blocks=2) ``` ## Training Data This model is pre-trained on four publicly available datasets: **mC4**, and scraped **Dutch news** from NRC en Nu.nl. It uses the the fast universal Byte-level BPE (BBPE) in contrast to the sentence piece tokenizer and vocabulary as RoBERTa (which is in turn borrowed from GPT2). ## Training Procedure The data is cleaned as follows: Remove texts containing HTML codes / javascript codes / loremipsum / policies Remove lines without end mark. Remove too short texts, words Remove too long texts, words Remove bad words ## BibTeX entry and citation info ```tex @misc{zaheer2021big, title={Big Bird: Transformers for Longer Sequences}, author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed}, year={2021}, eprint={2007.14062}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
keras-io/time-series-anomaly-detection-autoencoder
keras-io
2022-01-13T14:52:51Z
14
13
tf-keras
[ "tf-keras", "autoencoder", "time series", "anomaly detection", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - autoencoder - time series - anomaly detection license: - cc0-1.0 --- ## Keras Implementation of time series anomaly detection using an Autoencoder ⌛ This repo contains the model and the notebook for [this time series anomaly detection example from Keras](https://keras.io/examples/timeseries/timeseries_anomaly_detection/). Full credits to: [Pavithra Vijay](https://github.com/pavithrasv) ## Background Information This notebook demonstrates how you can use a reconstruction convolutional autoencoder model to detect anomalies in timeseries data.
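A minimal loading sketch with `huggingface_hub.from_pretrained_keras`; the windowing details below follow the linked Keras example (288-step windows with a single feature) and are assumptions about this particular checkpoint.

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Download and rebuild the Keras autoencoder from the Hub.
model = from_pretrained_keras("keras-io/time-series-anomaly-detection-autoencoder")
model.summary()

# Assumed inference recipe, following the linked notebook:
# windows: array of shape (n_windows, 288, 1) built from the normalized series.
windows = np.zeros((4, 288, 1), dtype="float32")  # dummy input for illustration
reconstruction = model.predict(windows)
errors = np.mean(np.abs(reconstruction - windows), axis=(1, 2))
print(errors)  # compare against the MAE threshold chosen on the training data
```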
keras-io/simple-mnist-convnet
keras-io
2022-01-13T14:52:44Z
2
0
tf-keras
[ "tf-keras", "lstm", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - lstm license: - cc0-1.0 --- ## Keras Implementation of Convolutional Neural Networks for MNIST 1️⃣2️⃣3️⃣ This repo contains the model and the notebook [on Simple MNIST convnet](https://keras.io/examples/vision/mnist_convnet/). Full credits to: [François Chollet](https://github.com/fchollet)
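The same `from_pretrained_keras` loader applies here; the sketch below runs the convnet on a dummy input shaped like an MNIST digit (real inputs are 28x28 grayscale images scaled to [0, 1]).

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Rebuild the MNIST convnet from the Hub.
model = from_pretrained_keras("keras-io/simple-mnist-convnet")

# Dummy batch of one blank 28x28 grayscale digit.
dummy_digit = np.zeros((1, 28, 28, 1), dtype="float32")
probs = model.predict(dummy_digit)
print(probs.argmax(axis=-1))  # predicted digit class per input
```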
espnet/vectominist_seame_asr_conformer_bpe5626
espnet
2022-01-13T14:49:52Z
1
1
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "zh", "multilingual", "dataset:seame", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: - en - zh - multilingual datasets: - seame license: cc-by-4.0 --- ## ESPnet2 ASR pretrained model ### `https://zenodo.org/record/5845307/files/asr_conformer_ar_valid.acc.ave.zip?download=1` ♻️ Imported from https://zenodo.org/record/5845307/files/asr_conformer_ar_valid.acc.ave.zip?download=1 This model was trained by vectominist using seame/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
nielsr/tapex-large-finetuned-sqa
nielsr
2022-01-13T14:41:16Z
7
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "tapex", "table-question-answering", "en", "dataset:msr_sqa", "arxiv:2107.07653", "license:apache-2.0", "autotrain_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapex - table-question-answering license: apache-2.0 datasets: - msr_sqa inference: false --- TAPEX-large model fine-tuned on SQA. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining). To load it and run inference, you can do the following: ``` from transformers import BartTokenizer, BartForConditionalGeneration import pandas as pd tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-sqa") model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large-finetuned-sqa") # create table data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) # turn into dict table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]} # turn into format TAPEX expects # define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py linearizer = IndexedRowTableLinearize() linear_table = linearizer.process_table(table_dict) # add question question = "how many movies does George Clooney have?" joint_input = question + " " + linear_table # encode encoding = tokenizer(joint_input, return_tensors="pt") # forward pass outputs = model.generate(**encoding) # decode tokenizer.batch_decode(outputs, skip_special_tokens=True) ```
anirudh21/xlnet-base-cased-finetuned-wnli
anirudh21
2022-01-13T13:52:38Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: xlnet-base-cased-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.5633802816901409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-finetuned-wnli This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6874 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.7209 | 0.5352 | | No log | 2.0 | 80 | 0.6874 | 0.5634 | | No log | 3.0 | 120 | 0.6908 | 0.5634 | | No log | 4.0 | 160 | 0.6987 | 0.4930 | | No log | 5.0 | 200 | 0.6952 | 0.5634 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/hazuma
huggingtweets
2022-01-13T09:23:08Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/hazuma/1642065783369/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1322114245467598850/pz_yTcye_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">東浩紀 Hiroki Azuma</div> <div style="text-align: center; font-size: 14px;">@hazuma</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 東浩紀 Hiroki Azuma. | Data | 東浩紀 Hiroki Azuma | | --- | --- | | Tweets downloaded | 3230 | | Retweets | 1492 | | Short tweets | 1560 | | Tweets kept | 178 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ig7ewkg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hazuma's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1uix46e5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1uix46e5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hazuma') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/tsuda
huggingtweets
2022-01-13T08:46:49Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/tsuda/1642063525628/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1433345543963508738/qEUwKlFD_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">津田大介</div> <div style="text-align: center; font-size: 14px;">@tsuda</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 津田大介. | Data | 津田大介 | | --- | --- | | Tweets downloaded | 3244 | | Retweets | 2873 | | Short tweets | 227 | | Tweets kept | 144 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/o0sc3rb4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tsuda's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qjnl0op) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qjnl0op/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tsuda') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/h_ototake-hirox246-ochyai
huggingtweets
2022-01-13T07:45:50Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/h_ototake-hirox246-ochyai/1642059945521/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1072419376668782597/hhmhNVER_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1481142443068198912/NCrXoLUB_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ひろゆき, Hiroyuki Nishimura & 落合陽一 Yoichi OCHIAI & 乙武 洋匡</div> <div style="text-align: center; font-size: 14px;">@h_ototake-hirox246-ochyai</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ひろゆき, Hiroyuki Nishimura & 落合陽一 Yoichi OCHIAI & 乙武 洋匡. | Data | ひろゆき, Hiroyuki Nishimura | 落合陽一 Yoichi OCHIAI | 乙武 洋匡 | | --- | --- | --- | --- | | Tweets downloaded | 3248 | 3240 | 3238 | | Retweets | 281 | 2238 | 1259 | | Short tweets | 1980 | 574 | 1437 | | Tweets kept | 987 | 428 | 542 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k39l31f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @h_ototake-hirox246-ochyai's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1d9okxed) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1d9okxed/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/h_ototake-hirox246-ochyai') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/elonmusk-hirox246-hitoshinagai1
huggingtweets
2022-01-13T07:16:46Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1474910968157249536/FS8-70Ie_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1015469378777706496/WqKzDTb3_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & ひろゆき, Hiroyuki Nishimura & 永井均</div> <div style="text-align: center; font-size: 14px;">@elonmusk-hirox246-hitoshinagai1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & ひろゆき, Hiroyuki Nishimura & 永井均. | Data | Elon Musk | ひろゆき, Hiroyuki Nishimura | 永井均 | | --- | --- | --- | --- | | Tweets downloaded | 2022 | 3248 | 3245 | | Retweets | 95 | 281 | 53 | | Short tweets | 598 | 1980 | 3056 | | Tweets kept | 1329 | 987 | 136 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dzgeuwp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-hirox246-hitoshinagai1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12mhdct8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12mhdct8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/elonmusk-hirox246-hitoshinagai1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ju-bezdek/slovakbert-conll2003-sk-ner
ju-bezdek
2022-01-12T20:37:34Z
9
1
transformers
[ "transformers", "pytorch", "generated_from_trainer", "dataset:ju-bezdek/conll2003-SK-NER", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - ju-bezdek/conll2003-SK-NER metrics: - precision - recall - f1 - accuracy model-index: - name: outputs results: - task: name: Token Classification type: token-classification dataset: name: ju-bezdek/conll2003-SK-NER type: ju-bezdek/conll2003-SK-NER args: conll2003-SK-NER metrics: - name: Precision type: precision value: 0.8189727994593682 - name: Recall type: recall value: 0.8389581169955002 - name: F1 type: f1 value: 0.8288450029922203 - name: Accuracy type: accuracy value: 0.9526157920337243 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [gerulata/slovakbert](https://huggingface.co/gerulata/slovakbert) on the [ju-bezdek/conll2003-SK-NER](https://huggingface.co/datasets/ju-bezdek/conll2003-SK-NER) dataset. It achieves the following results on the evaluation (validation) set: - Loss: 0.1752 - Precision: 0.8190 - Recall: 0.8390 - F1: 0.8288 - Accuracy: 0.9526 ## Model description More information needed ## Code example ```python: from transformers import pipeline, AutoModel, AutoTokenizer from spacy import displacy import os model_path="ju-bezdek/slovakbert-conll2003-sk-ner" aggregation_strategy="max" ner_pipeline = pipeline(task='ner', model=model_path, aggregation_strategy=aggregation_strategy) input_sentence= "Ruský premiér Viktor Černomyrdin v piatok povedal, že prezident Boris Jeľcin , ktorý je na dovolenke mimo Moskvy , podporil mierový plán šéfa bezpečnosti Alexandra Lebedu pre Čečensko, uviedla tlačová agentúra Interfax" ner_ents = ner_pipeline(input_sentence) print(ner_ents) ent_group_labels = [ner_pipeline.model.config.id2label[i][2:] for i in ner_pipeline.model.config.id2label if i>0] options = {"ents":ent_group_labels} dicplacy_ents = [{"start":ent["start"], "end":ent["end"], "label":ent["entity_group"]} for ent in ner_ents] displacy.render({"text":input_sentence, "ents":dicplacy_ents}, style="ent", options=options, jupyter=True, manual=True) ``` ### Result: <div> <span class="tex2jax_ignore"><div class="entities" style="line-height: 2.5; direction: ltr"> <mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Ruský <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">MISC</span> </mark> premiér <mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Viktor Černomyrdin <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span> </mark> v piatok povedal, že prezident <mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Boris Jeľcin, <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span> </mark> , ktorý je na dovolenke mimo <mark class="entity" style="background: #ff9561; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Moskvy <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">LOC</span> </mark> , podporil mierový plán šéfa bezpečnosti <mark 
class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Alexandra Lebedu <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span> </mark> pre <mark class="entity" style="background: #ff9561; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Čečensko, <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">LOC</span> </mark> uviedla tlačová agentúra <mark class="entity" style="background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;"> Interfax <span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">ORG</span> </mark> </div></span> </div> ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3237 | 1.0 | 878 | 0.2541 | 0.7125 | 0.8059 | 0.7563 | 0.9283 | | 0.1663 | 2.0 | 1756 | 0.2370 | 0.7775 | 0.8090 | 0.7929 | 0.9394 | | 0.1251 | 3.0 | 2634 | 0.2289 | 0.7732 | 0.8029 | 0.7878 | 0.9385 | | 0.0984 | 4.0 | 3512 | 0.2818 | 0.7294 | 0.8189 | 0.7715 | 0.9294 | | 0.0808 | 5.0 | 4390 | 0.3138 | 0.7615 | 0.7900 | 0.7755 | 0.9326 | | 0.0578 | 6.0 | 5268 | 0.3072 | 0.7548 | 0.8222 | 0.7871 | 0.9370 | | 0.0481 | 7.0 | 6146 | 0.2778 | 0.7897 | 0.8156 | 0.8025 | 0.9408 | | 0.0414 | 8.0 | 7024 | 0.3336 | 0.7695 | 0.8201 | 0.7940 | 0.9389 | | 0.0268 | 9.0 | 7902 | 0.3294 | 0.7868 | 0.8140 | 0.8002 | 0.9409 | | 0.0204 | 10.0 | 8780 | 0.3693 | 0.7657 | 0.8239 | 0.7938 | 0.9376 | | 0.016 | 11.0 | 9658 | 0.3816 | 0.7932 | 0.8242 | 0.8084 | 0.9425 | | 0.0108 | 12.0 | 10536 | 0.3607 | 0.7929 | 0.8256 | 0.8089 | 0.9431 | | 0.0078 | 13.0 | 11414 | 0.3980 | 0.7915 | 0.8240 | 0.8074 | 0.9423 | | 0.0062 | 14.0 | 12292 | 0.4096 | 0.7995 | 0.8247 | 0.8119 | 0.9436 | | 0.0035 | 15.0 | 13170 | 0.4177 | 0.8006 | 0.8251 | 0.8127 | 0.9438 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
vinaydngowda/Robertabase_Ana4
vinaydngowda
2022-01-12T20:12:16Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:vinaydngowda/autonlp-data-case-classify-xlnet", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - vinaydngowda/autonlp-data-case-classify-xlnet co2_eq_emissions: 19.964760910364927 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 496213536 - CO2 Emissions (in grams): 19.964760910364927 ## Validation Metrics - Loss: 0.7149562835693359 - Accuracy: 0.8092592592592592 - Macro F1: 0.8085189591849891 - Micro F1: 0.8092592592592593 - Weighted F1: 0.8085189591849888 - Macro Precision: 0.8137745564384112 - Micro Precision: 0.8092592592592592 - Weighted Precision: 0.8137745564384112 - Macro Recall: 0.8092592592592592 - Micro Recall: 0.8092592592592592 - Weighted Recall: 0.8092592592592592 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/vinaydngowda/autonlp-case-classify-xlnet-496213536 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("vinaydngowda/autonlp-case-classify-xlnet-496213536", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("vinaydngowda/autonlp-case-classify-xlnet-496213536", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Katiejdarby/test1
Katiejdarby
2022-01-12T18:36:31Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
this is a test. How do you write a paper?
Jainil30/wav2vec2-base-csa-10-rev3
Jainil30
2022-01-12T14:55:33Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-csa-10-rev3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-csa-10-rev3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5869 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 18.7934 | 25.0 | 200 | 3.5869 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
huggingtweets/prof_preobr
huggingtweets
2022-01-12T10:06:59Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/853613144832446464/VrGXs0NZ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Проф. Преображенский</div> <div style="text-align: center; font-size: 14px;">@prof_preobr</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Проф. Преображенский. | Data | Проф. Преображенский | | --- | --- | | Tweets downloaded | 3224 | | Retweets | 567 | | Short tweets | 61 | | Tweets kept | 2596 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/12xdr90k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prof_preobr's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vqtap5s) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vqtap5s/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/prof_preobr') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
GusNicho/roberta-base-finetuned
GusNicho
2022-01-12T08:31:17Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-base-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.4057 - eval_runtime: 3.7087 - eval_samples_per_second: 167.712 - eval_steps_per_second: 2.696 - epoch: 2.11 - step: 2053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
gpssohi/distilbart-qgen-3-3
gpssohi
2022-01-12T08:29:26Z
14
3
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "question-generation", "summarization", "en", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: en tags: - question-generation - summarization license: apache-2.0 datasets: - squad --- # Introduction This model checkpoint is obtained by first fine-tuning the sshleifer/distilbart-cnn-6-6 summarization checkpoint on the SQuAD dataset. After this, the 6-6 fine-tuned model is distilled down to a 3-3 model which gives us the final checkpoint. [GitHub Link for training scripts.](https://github.com/darth-c0d3r/bart-question-generation) # Usage The input format is as follows: `[answer] <s> [passage]`. The model will predict the question that corresponds to the answer from the passage. # Plot ![Distillation Run](distill_run_21.png) # Dataset The goal of Question Generation is to generate a valid and fluent question according to a given passage and the target answer. Hence, the input to the model will be a passage context and an answer, and the output / target will be the question for the given answer. Question Generation can be used in many scenarios, such as automatic tutoring systems, improving the performance of Question Answering models and enabling chat-bots to lead a conversation. The final dataset is created by taking the union of the following Question Answering Datasets. The dataset must have the following three columns: context, question, answer. ## [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowd-workers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. We use the SQuAD 1.1 variant which does not have unanswerable questions. So, every question will have a corresponding answer and vice-versa. ### Preprocessing The first step is to remove questions which don't have answers. After that, we split the train set into Train and Eval sets and treat the dev set as the test set. ### Stats **Original Dataset** | Split | Num Docs | Num Contexts | Ques w/ Ans | Ques w/o Ans | Num Unique Ans | | ----- | -------- | ------------ | ----------- | ------------ | -------------- | | Train | 442 | 19035 | 86821 | 43498 | 86821 | | Dev | 35 | 1204 | 5928 | 5945 | 10279 | **After Preprocessing** | Split | Num Rows | Context | Answer | Question | | ----- | -------- | ---------- | ------ | -------- | | Train | 80995 | 653,120,20 | 43,3,1 | 40,10,1 | | Eval | 5826 | 445,123,67 | 28,3,1 | 29,10,3 | | Test | 10297 | 629,129,25 | 29,4,1 | 31,10,3 | The numbers in the columns indicate max, avg, min number of words.
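A minimal usage sketch for the input format described above, assuming the checkpoint works with the standard `transformers` seq2seq classes; the example passage and generation settings are illustrative only, not the authors' settings.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "gpssohi/distilbart-qgen-3-3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Input format: "[answer] <s> [passage]"
answer = "Paris"
passage = "Paris is the capital and most populous city of France."
inputs = tokenizer(f"{answer} <s> {passage}", return_tensors="pt")

# Predict the question whose answer is the given span.
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```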
anirudh21/distilbert-base-uncased-finetuned-cola
anirudh21
2022-01-12T07:24:56Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5224154837835395 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8623 - Matthews Correlation: 0.5224 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5278 | 1.0 | 535 | 0.5223 | 0.4007 | | 0.3515 | 2.0 | 1070 | 0.5150 | 0.4993 | | 0.2391 | 3.0 | 1605 | 0.6471 | 0.5103 | | 0.1841 | 4.0 | 2140 | 0.7640 | 0.5153 | | 0.1312 | 5.0 | 2675 | 0.8623 | 0.5224 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
ncduy/opus-mt-en-vi-full-finetuned-en-to-vi
ncduy
2022-01-12T07:10:14Z
8
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: opus-mt-en-vi-full-finetuned-en-to-vi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-vi-full-finetuned-en-to-vi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 212 - eval_batch_size: 212 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.17.0 - Tokenizers 0.10.3
anirudh21/distilbert-base-uncased-finetuned-wnli
anirudh21
2022-01-12T06:16:27Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-wnli results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: wnli metrics: - name: Accuracy type: accuracy value: 0.5633802816901409 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-wnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6883 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6883 | 0.5634 | | No log | 2.0 | 80 | 0.6934 | 0.5634 | | No log | 3.0 | 120 | 0.6960 | 0.5211 | | No log | 4.0 | 160 | 0.6958 | 0.5634 | | No log | 5.0 | 200 | 0.6964 | 0.5634 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
mrm8488/longformer-base-4096-spanish-finetuned-squad
mrm8488
2022-01-11T20:39:06Z
11
6
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "Long documents", "LongFormer", "QA", "Q&A", "es", "dataset:BSC-TeMU/SQAC", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: es tags: - Long documents - LongFormer - QA - Q&A datasets: - BSC-TeMU/SQAC --- # Spanish Longformer fine-tuned on **SQAC** for Spanish **QA** 📖❓ [longformer-base-4096-spanish](https://huggingface.co/mrm8488/longformer-base-4096-spanish) fine-tuned on [SQAC](https://huggingface.co/datasets/BSC-TeMU/SQAC) for **Q&A** downstream task. ## Details of the model 🧠 [longformer-base-4096-spanish](https://huggingface.co/mrm8488/longformer-base-4096-spanish) is a BERT-like model started from the RoBERTa checkpoint (**BERTIN** in this case) and pre-trained for *MLM* on long documents (from BETO's `all_wikis`). It supports sequences of length up to **4,096**! ## Details of the dataset 📚 This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment. The sources of the contexts are: * Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode). * News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/). * Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix from diferent newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode). This dataset can be used to build extractive-QA. ## Evaluation Metrics 📈 TBA ## Fast Usage with HF `pipeline` 🧪 ```py from transformers import pipeline qa_pipe = pipeline("question-answering", model='mrm8488/longformer-base-4096-spanish-finetuned-squad') context = ''' Hace aproximadamente un año, Hugging Face, una startup de procesamiento de lenguaje natural con sede en Brooklyn, Nueva York, lanzó BigScience, un proyecto internacional con más de 900 investigadores que está diseñado para comprender mejor y mejorar la calidad de los grandes modelos de lenguaje natural. Los modelos de lenguaje grande (LLM), algoritmos que pueden reconocer, predecir y generar lenguaje sobre la base de conjuntos de datos basados ​​en texto, han captado la atención de empresarios y entusiastas de la tecnología por igual. Pero el costoso hardware requerido para desarrollar LLM los ha mantenido en gran medida fuera del alcance de los investigadores sin los recursos de compañías como OpenAI y DeepMind detrás de ellos. Inspirándose en organizaciones como la Organización Europea para la Investigación Nuclear (también conocida como CERN) y el Gran Colisionador de Hadrones, el objetivo de BigScience es crear LLM y grandes conjuntos de datos de texto que eventualmente serán de código abierto para la IA más amplia. comunidad. Los modelos serán entrenados en la supercomputadora Jean Zay ubicada cerca de París, Francia, que se encuentra entre las máquinas más poderosas del mundo. ''' question = "¿Cuál es el objetivo de BigScience?" qa_pipe({'context':context, 'question': question}) # It outpus ``` ```js {'answer': 'comprender mejor y mejorar la calidad de los grandes modelos de lenguaje natural.', 'end': 305, 'score': 0.9999799728393555, 'start': 224} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
ThePixOne/retBERT
ThePixOne
2022-01-11T18:24:24Z
8
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
BERT fine-tuned on the wallstreetbets subreddit.
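Since the card shows no usage, here is a minimal fill-mask sketch, assuming the repository exposes a standard BERT masked-LM head; the example sentence is made up.

```python
from transformers import pipeline

# Assumption: the checkpoint loads as a standard BERT "fill-mask" model.
unmasker = pipeline("fill-mask", model="ThePixOne/retBERT")
print(unmasker("I am never selling my [MASK], diamond hands only."))
```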
avichr/heBERT_NER
avichr
2022-01-11T17:00:46Z
4,122
5
transformers
[ "transformers", "pytorch", "bert", "token-classification", "arxiv:1810.04805", "arxiv:2102.01909", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition <img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250"> HeBERT is a Hebrew pretrained language model. It is based on [Google's BERT](https://arxiv.org/abs/1810.04805) architecture and uses the BERT-Base configuration. <br> HeBERT was trained on three datasets: 1. A Hebrew version of [OSCAR](https://oscar-corpus.com/): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences. 2. A Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/): ~650 MB of data, including over 63 million words and 3.8 million sentences. 3. Emotion User Generated Content (UGC) data that was collected for the purpose of this study (described below). ## Named-entity recognition (NER) The model classifies named entities in text, such as persons' names, organizations, and locations; it was tested on a labeled dataset from [Ben Mordecai and M Elhadad (2005)](https://www.cs.bgu.ac.il/~elhadad/nlpproj/naama/) and evaluated with the F1-score. ### How to use ``` from transformers import pipeline # how to use? NER = pipeline( "token-classification", model="avichr/heBERT_NER", tokenizer="avichr/heBERT_NER", ) NER('דויד לומד באוניברסיטה העברית שבירושלים') ``` ## Other tasks [**Emotion Recognition Model**](https://huggingface.co/avichr/hebEMO_trust). An online demo can be found at [Hugging Face Spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as a [Colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing) <br> [**Sentiment Analysis**](https://huggingface.co/avichr/heBERT_sentiment_analysis). <br> [**masked-LM model**](https://huggingface.co/avichr/heBERT) (can be fine-tuned for any downstream task). ## Contact us [Avichay Chriqui](mailto:[email protected]) <br> [Inbal yahav](mailto:[email protected]) <br> The Coller Semitic Languages AI Lab <br> Thank you, תודה, شكرا <br> ## If you use this model, please cite us as: Chriqui, A., & Yahav, I. (2021). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. arXiv preprint arXiv:2102.01909. ``` @article{chriqui2021hebert, title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition}, author={Chriqui, Avihay and Yahav, Inbal}, journal={arXiv preprint arXiv:2102.01909}, year={2021} } ``` [git](https://github.com/avichaychriqui/HeBERT)
alaggung/bart-r3f
alaggung
2022-01-11T16:18:32Z
123
6
transformers
[ "transformers", "pytorch", "tf", "bart", "text2text-generation", "summarization", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - ko tags: - summarization widget: - text: "[BOS]밥 ㄱ?[SEP]고고고고 뭐 먹을까?[SEP]어제 김치찌개 먹어서 한식말고 딴 거[SEP]그럼 돈까스 어때?[SEP]오 좋다 1시 학관 앞으로 오셈[SEP]ㅇㅋ[EOS]" inference: parameters: max_length: 64 top_k: 5 --- # BART R3F This is a sample dialogue-summarization model from team 알라꿍달라꿍, shared for the dialogue-summarization track of the [2021 Hunminjeongeum Korean Speech and Natural Language AI Competition]. It was trained for the dialogue-summarization task by applying the R3F method from the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository to the [bart-pretrained](https://huggingface.co/alaggung/bart-pretrained) model. The training data is the [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) dataset.
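A minimal usage sketch, assuming the standard `transformers` summarization pipeline and the `[BOS] … [SEP] … [EOS]` dialogue format shown in the widget; the `max_length` value mirrors the widget settings and is not guaranteed to match the authors' setup.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="alaggung/bart-r3f")
dialogue = ("[BOS]밥 ㄱ?[SEP]고고고고 뭐 먹을까?[SEP]어제 김치찌개 먹어서 한식말고 딴 거"
            "[SEP]그럼 돈까스 어때?[SEP]오 좋다 1시 학관 앞으로 오셈[SEP]ㅇㅋ[EOS]")
print(summarizer(dialogue, max_length=64))
```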
uw-madison/nystromformer-512
uw-madison
2022-01-11T14:13:39Z
1,365
2
transformers
[ "transformers", "pytorch", "nystromformer", "fill-mask", "arxiv:2102.03902", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Nyströmformer Nyströmformer model for masked language modeling (MLM) pretrained on BookCorpus and English Wikipedia for sequence length 512. ## About Nyströmformer The Nyströmformer model was proposed in [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. The abstract from the paper is the following: Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences — a topic being actively studied in the community. To address this limitation, we propose Nyströmformer — a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs favorably relative to other efficient self-attention methods. Our code is available at this https URL. ## Usage ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='uw-madison/nystromformer-512') >>> unmasker("Paris is the [MASK] of France.") [{'score': 0.829957902431488, 'token': 1030, 'token_str': 'capital', 'sequence': 'paris is the capital of france.'}, {'score': 0.022157637402415276, 'token': 16081, 'token_str': 'birthplace', 'sequence': 'paris is the birthplace of france.'}, {'score': 0.01904447190463543, 'token': 197, 'token_str': 'name', 'sequence': 'paris is the name of france.'}, {'score': 0.017583081498742104, 'token': 1107, 'token_str': 'kingdom', 'sequence': 'paris is the kingdom of france.'}, {'score': 0.005948934704065323, 'token': 148, 'token_str': 'city', 'sequence': 'paris is the city of france.'}] ```
Humair/all-mpnet-base-v2-finetuned-v2
Humair
2022-01-11T12:26:56Z
13
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # Humair/all-mpnet-base-v2-finetuned-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Humair/all-mpnet-base-v2-finetuned-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('Humair/all-mpnet-base-v2-finetuned-v2') model = AutoModel.from_pretrained('Humair/all-mpnet-base-v2-finetuned-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Humair/all-mpnet-base-v2-finetuned-v2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
andreiliphdpr/distilbert-base-uncased-finetuned-cola
andreiliphdpr
2022-01-11T12:11:00Z
5
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: andreiliphdpr/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # andreiliphdpr/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0015 - Train Accuracy: 0.9995 - Validation Loss: 0.0570 - Validation Accuracy: 0.9915 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 43750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0399 | 0.9870 | 0.0281 | 0.9908 | 0 | | 0.0182 | 0.9944 | 0.0326 | 0.9901 | 1 | | 0.0089 | 0.9971 | 0.0396 | 0.9912 | 2 | | 0.0040 | 0.9987 | 0.0486 | 0.9918 | 3 | | 0.0015 | 0.9995 | 0.0570 | 0.9915 | 4 | ### Framework versions - Transformers 4.15.0.dev0 - TensorFlow 2.6.2 - Datasets 1.15.1 - Tokenizers 0.10.3
flax-community/t5-base-dutch
flax-community
2022-01-11T12:10:22Z
32
4
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "seq2seq", "lm-head", "dataset:yhavinga/mc4_nl_cleaned", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - dutch tags: - seq2seq - lm-head datasets: - yhavinga/mc4_nl_cleaned license: apache-2.0 inference: false --- # t5-base-dutch Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) & [Dat Nguyen](https://www.linkedin.com/in/dat-nguyen-49a641138/) during the [Hugging Face community week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google, for the project [Pre-train T5 from scratch in Dutch](https://discuss.huggingface.co/t/pretrain-t5-from-scratch-in-dutch/8109). See also the fine-tuned [t5-base-dutch-demo](https://huggingface.co/flax-community/t5-base-dutch-demo) model, and the demo application **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)**, that are based on this model. **5 jan 2022: Model updated. Evaluation accuracy increased from 0.64 to 0.70.** **11 jan 2022: See also [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) with eval acc 0.78** ## Model * Configuration based on `google/t5-base` * 12 layers, 12 heads * Dropout set to 0.1 ## Dataset This model was trained on the `full` configuration of [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naught Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with less than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with less than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. ## Tokenization A SentencePiece tokenizer was trained from scratch on this dataset. The total tokens of the `full` configuration is 34B ## Training The model was trained on the `full` mc4_nl_cleaned dataset configuration for 1 epoch, consisting of 34B tokens, for 528 482 steps with a batch size of 128 and took 57 hours. A triangle learning rate schedule was used, with peak learning rate 0.005. ## Evaluation * Loss: 1.38 * Accuracy: 0.70
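Since this is a pretrained (not task-fine-tuned) checkpoint, it is mainly a starting point for fine-tuning. Below is a minimal loading sketch, assuming the standard T5 classes and the default `<extra_id_0>` sentinel token from T5's span-corruption setup; the Dutch example sentence is illustrative.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("flax-community/t5-base-dutch")
model = T5ForConditionalGeneration.from_pretrained("flax-community/t5-base-dutch")

# Span-corruption style probe: ask the model to fill in the sentinel span.
inputs = tokenizer("Het weer is vandaag <extra_id_0> en zonnig.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```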
ncduy/opus-mt-en-vi-own-finetuned-en-to-vi
ncduy
2022-01-11T09:21:10Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: opus-mt-en-vi-own-finetuned-en-to-vi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-vi-own-finetuned-en-to-vi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.4416 - Bleu: 2.1189 - Gen Len: 25.153 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 6.2513 | 1.0 | 1563 | 6.0147 | 0.7038 | 29.165 | | 5.7184 | 2.0 | 3126 | 5.5631 | 1.9803 | 23.915 | | 5.5248 | 3.0 | 4689 | 5.4416 | 2.1189 | 25.153 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
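A minimal inference sketch, assuming the fine-tuned checkpoint still works with the standard MarianMT translation pipeline; the input sentence is illustrative.

```python
from transformers import pipeline

translator = pipeline("translation", model="ncduy/opus-mt-en-vi-own-finetuned-en-to-vi")
print(translator("The weather is nice today.", max_length=64))
```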
moumeneb1/testing
moumeneb1
2022-01-11T09:16:45Z
5
0
speechbrain
[ "speechbrain", "wav2vec2", "CTC", "Attention", "pytorch", "Transformer", "automatic-speech-recognition", "rw", "dataset:commonvoice", "arxiv:2106.04624", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: "rw" thumbnail: pipeline_tag: automatic-speech-recognition tags: - CTC - Attention - pytorch - speechbrain - Transformer license: "apache-2.0" datasets: - commonvoice metrics: - wer - cer --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # wav2vec 2.0 with CTC/Attention trained on CommonVoice Kinyarwanda (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (Kinyarwanda Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test WER | GPUs | |:--------------:|:--------------:| :--------:| | 03-06-21 | 18.91 | 2xV100 32GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions (train.tsv) of CommonVoice (RW). - Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on CommonVoice En. The obtained final acoustic representation is given to the CTC and attention decoders. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. ## Install SpeechBrain First of all, please install tranformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in Kinyarwanda) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-rw", savedir="pretrained_models/asr-wav2vec2-commonvoice-rw") asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-rw/example.mp3") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ## Parallel Inference on a Batch Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model. ### Training The model was trained with SpeechBrain. To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/CommonVoice/ASR/seq2seq python train_with_wav2vec.py hparams/train_rw_with_wav2vec.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
# **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
activatepin/RC_News
activatepin
2022-01-11T07:40:27Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://www.guilded.gg/thisiscineplex/overview/news/XRz48Dr6 https://www.guilded.gg/FLIXmasGR/overview/news/7R0WorPy https://www.guilded.gg/FLIXmasGR/overview/news/NyE5BPmy https://www.guilded.gg/FLIXmasGR/overview/news/2l3Konal https://www.guilded.gg/FLIXmasGR/overview/news/AykDjvVR https://www.guilded.gg/FLIXmasGR/overview/news/16YOGQoR https://www.guilded.gg/FLIXmasGR/overview/news/KR2ngpXR https://www.guilded.gg/FLIXmasGR/overview/news/xypa2qZR https://www.guilded.gg/FLIXmasGR/overview/news/A6jZGQk6 https://www.guilded.gg/FLIXmasGR/overview/news/1ROQVMe6 https://www.guilded.gg/FLIXmasGR/overview/news/4yAW0Kvl https://www.guilded.gg/FLIXmasGR/overview/news/JlaoGQBy https://www.guilded.gg/FLIXmasGR/overview/news/YyrPnVEl https://www.guilded.gg/FLIXmasGR/overview/news/4lGz3aBR https://www.guilded.gg/FLIXmasGR/overview/news/16nKkj1y https://www.guilded.gg/FLIXmasGR/overview/news/X6QA0Ng6 https://www.guilded.gg/FLIXmasGR/overview/news/XRz4xGa6 https://www.guilded.gg/FLIXmasGR/overview/news/PlqV9826 https://www.guilded.gg/FLIXmasGR/overview/news/7R0WokWy https://www.guilded.gg/FLIXmasGR/overview/news/qlDvK4dy https://www.guilded.gg/FLIXmasGR/overview/news/2l3KopZl https://www.guilded.gg/FLIXmasGR/overview/news/16YOGj4R https://www.guilded.gg/FLIXmasGR/overview/news/4ldxGzQl
Nasvai1702/Night
Nasvai1702
2022-01-11T02:14:52Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
They said "Wait", I left with the rain. This night is needed, I was digesting a dream. You tore the sails while waiting for delight. This is my Touchtown, this is my Hong Kong. All there is to do is contemplate, nothing more, or else turmoil to the very end. Tricky mazes of emptiness, pour a hundred grams each. Forget my voice and forget me myself. Forget my paradise, I'm a drunken badman. I passed the sentence on myself willingly, and it serves you right. I drained it to the bottom just to lash out. I don't want to pity myself, and I won't forgive. This world could not keep me safe. We lost our heads while losing the thread, in the time meant for loving without grudges and living. We cannot forget our troubles, and there is no point in creating. Nights spent alone, nights spent alone. You pampered and cherished the me that was already destroyed. You believed in me like no one ever believed in anyone. Nights spent alone, nights spent alone. You pampered and cherished the me that was already destroyed. You believed in me like no one ever believed in anyone.
tscholak/2e826ioa
tscholak
2022-01-10T21:50:39Z
9
7
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "text2sql", "en", "dataset:cosql", "dataset:spider", "arxiv:2109.05093", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en thumbnail: "https://repository-images.githubusercontent.com/401779782/c2f46be5-b74b-4620-ad64-57487be3b1ab" tags: - text2sql widget: - "And the concert named Auditions? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : sing er_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name ( Super bootcamp, Auditions ), theme, stadium_id, year | singer_in_concert : concert_id, singer_id || Which year did the concert Super bootcamp happen in? | Find the name and location of the stadiums which some concerts happened in the years of both 2014 and 2015." - "How many singers do we have? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : singer_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name, theme, stadium_id, year | singer_in_concert : concert_id, singer_id" license: "apache-2.0" datasets: - cosql - spider metrics: - cosql --- ## tscholak/2e826ioa Fine-tuned weights for [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) based on [T5-3B](https://huggingface.co/t5-3b). ### Training Data The model has been fine-tuned on the 2,164 training dialogues in the [CoSQL SQL-grounded dialogue state tracking dataset](https://yale-lily.github.io/cosql) and the 7,000 training examples in the [Spider text-to-SQL dataset](https://yale-lily.github.io/spider). The model solves both, CoSQL's zero-shot text-to-SQL dialogue state tracking task and Spider's zero-shot text-to-SQL translation task. Zero-shot means that the model can generalize to unseen SQL databases. ### Training Objective This model was initialized with [T5-3B](https://huggingface.co/t5-3b) and fine-tuned with the text-to-text generation objective. A question is always grounded in both, a database schema and the preceiding questions in the dialogue. The model is trained to predict the SQL query that would be used to answer the user's current natural language question. The input to the model is composed of the user's current question, the database identifier, a list of tables and their columns, and a sequence of previous questions in reverse chronological order. ``` [current question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... || [previous question] | ... | [first question] ``` The sequence of previous questions is separated by `||` from the linearized schema. In the absence of previous questions (for example, for the first question in a dialogue or for Spider questions), this separator is omitted. The model outputs the database identifier and the SQL query that will be executed on the database to answer the user's current question in the dialog. ``` [db_id] | [sql] ``` ### Performance Out of the box, this model achieves 53.8 % question match accuracy and 21.8 % interaction match accuracy on the CoSQL development set. On the CoSQL test set, the model achieves 51.4 % question match accuracy and 21.7 % interaction match accuracy. Using the PICARD constrained decoding method (see [the official PICARD implementation](https://github.com/ElementAI/picard)), the model's performance can be improved to **56.9 %** question match accuracy and **24.2 %** interaction match accuracy on the CoSQL development set. 
On the CoSQL test set and with PICARD, the model achieves **54.6 %** question match accuracy and **23.7 %** interaction match accuracy. ### Usage Please see [the official repository](https://github.com/ElementAI/picard) for scripts and docker images that support evaluation and serving of this model. ### References 1. [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) 2. [Official PICARD code](https://github.com/ElementAI/picard) ### Citation ```bibtex @inproceedings{Scholak2021:PICARD, author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau}, title = "{PICARD}: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.779", pages = "9895--9901", } ```
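As a rough illustration of the serialization scheme described above, the sketch below runs the checkpoint with plain beam search, i.e. without PICARD constrained decoding, so accuracy will be below the PICARD-decoded numbers; the schema, question, and history are toy examples, and the underlying T5-3B weights are large.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tscholak/2e826ioa")
model = AutoModelForSeq2SeqLM.from_pretrained("tscholak/2e826ioa")

# "[current question] | [db_id] | [table] : [column], ... || [previous question]"
question = "Which of them are from France?"
schema = "concert_singer | singer : singer_id, name, country, age"
history = "How many singers do we have?"
serialized = f"{question} | {schema} || {history}"

ids = model.generate(**tokenizer(serialized, return_tensors="pt"),
                     num_beams=4, max_length=128)
print(tokenizer.decode(ids[0], skip_special_tokens=True))  # expected shape: "db_id | SQL"
```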
tscholak/1wnr382e
tscholak
2022-01-10T21:50:25Z
77
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "text2sql", "en", "dataset:spider", "arxiv:2109.05093", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en thumbnail: "https://repository-images.githubusercontent.com/401779782/c2f46be5-b74b-4620-ad64-57487be3b1ab" tags: - text2sql widget: - "How many singers do we have? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : singer_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name, theme, stadium_id, year | singer_in_concert : concert_id, singer_id" license: "apache-2.0" datasets: - spider metrics: - spider --- ## tscholak/1wnr382e Fine-tuned weights for [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) based on [T5-Large](https://huggingface.co/t5-large). ### Training Data The model has been fine-tuned on the 7000 training examples in the [Spider text-to-SQL dataset](https://yale-lily.github.io/spider). The model solves Spider's zero-shot text-to-SQL translation task, and that means that it can generalize to unseen SQL databases. ### Training Objective This model was initialized with [T5-Large](https://huggingface.co/t5-large) and fine-tuned with the text-to-text generation objective. Questions are always grounded in a database schema, and the model is trained to predict the SQL query that would be used to answer the question. The input to the model is composed of the user's natural language question, the database identifier, and a list of tables and their columns: ``` [question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... ``` The model outputs the database identifier and the SQL query that will be executed on the database to answer the user's question: ``` [db_id] | [sql] ``` ### Performance Out of the box, this model achieves 65.3 % exact-set match accuracy and 67.2 % execution accuracy on the Spider development set. Using the PICARD constrained decoding method (see [the official PICARD implementation](https://github.com/ElementAI/picard)), the model's performance can be improved to **69.1 %** exact-set match accuracy and **72.9 %** execution accuracy on the Spider development set. ### Usage Please see [the official repository](https://github.com/ElementAI/picard) for scripts and docker images that support evaluation and serving of this model. ### References 1. [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) 2. [Official PICARD code](https://github.com/ElementAI/picard) ### Citation ```bibtex @inproceedings{Scholak2021:PICARD, author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau}, title = "{PICARD}: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.779", pages = "9895--9901", } ```
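Similarly, a minimal sketch without PICARD constrained decoding, reusing the widget example above as the serialized input; expect lower accuracy than the PICARD-decoded numbers.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tscholak/1wnr382e")
model = AutoModelForSeq2SeqLM.from_pretrained("tscholak/1wnr382e")

# "[question] | [db_id] | [table] : [column], ... | [table] : ..."
serialized = (
    "How many singers do we have? | concert_singer | "
    "stadium : stadium_id, location, name, capacity, highest, lowest, average | "
    "singer : singer_id, name, country, song_name, song_release_year, age, is_male | "
    "concert : concert_id, concert_name, theme, stadium_id, year | "
    "singer_in_concert : concert_id, singer_id"
)
ids = model.generate(**tokenizer(serialized, return_tensors="pt"),
                     num_beams=4, max_length=128)
print(tokenizer.decode(ids[0], skip_special_tokens=True))  # e.g. "concert_singer | select count(*) from singer"
```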
SaulLu/markuplm-base
SaulLu
2022-01-10T19:17:34Z
9
0
transformers
[ "transformers", "pytorch", "markuplm", "arxiv:2110.08518", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
# MarkupLM **Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)** ## Introduction MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves state-of-the-art (SOTA) results on multiple datasets. For more details, please refer to our paper: [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
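A minimal usage sketch, assuming the MarkupLM classes available in recent `transformers` releases; the processor/checkpoint pairing is an assumption (the official `microsoft/markuplm-base` weights are used here), and the HTML snippet is illustrative.

```python
from transformers import MarkupLMProcessor, MarkupLMModel

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
model = MarkupLMModel.from_pretrained("microsoft/markuplm-base")

html = "<html><head><title>Hello</title></head><body><p>Welcome to MarkupLM.</p></body></html>"
encoding = processor(html, return_tensors="pt")  # extracts nodes + XPaths, then tokenizes
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```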
ibm-research/tslm-discourse-markers
ibm-research
2022-01-10T14:42:41Z
0
0
null
[ "arxiv:2201.02026", "region:us" ]
null
2022-03-02T23:29:05Z
The SenDM model is described at https://arxiv.org/pdf/2201.02026.

---
language:
- en
tags:
- discourse-markers
license: apache-2.0
---
huggingtweets/dril-hostagekiller-suicidepussy
huggingtweets
2022-01-10T10:25:29Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/dril-hostagekiller-suicidepussy/1641810324627/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1473236995497500675/FtwXDZld_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1322637724470358022/ccOsLDPE_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">HUSSY2K. & wint & I have 400 diseases</div> <div style="text-align: center; font-size: 14px;">@dril-hostagekiller-suicidepussy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from HUSSY2K. & wint & I have 400 diseases. | Data | HUSSY2K. | wint | I have 400 diseases | | --- | --- | --- | --- | | Tweets downloaded | 3186 | 3226 | 3237 | | Retweets | 819 | 480 | 121 | | Short tweets | 395 | 304 | 1125 | | Tweets kept | 1972 | 2442 | 1991 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bqo2ddu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-hostagekiller-suicidepussy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/o4ya0wuw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/o4ya0wuw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dril-hostagekiller-suicidepussy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
doc2query/msmarco-t5-base-v1
doc2query
2022-01-10T10:22:10Z
1,411
5
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:sentence-transformers/embedding-training-data", "arxiv:1904.08375", "arxiv:2104.08663", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: en
datasets:
- sentence-transformers/embedding-training-data
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---

# doc2query/msmarco-t5-base-v1

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).

It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.

## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/msmarco-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."

input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=5)

print("Text:")
print(text)

print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f'{i + 1}: {query}')
```

**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.

## Training

This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.

The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.

This model was trained on (query, passage) pairs from the [MS MARCO Passage-Ranking dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking).
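To make the document-expansion use case above concrete, here is a small sketch that appends the generated queries to the original passage before handing it to whatever BM25 indexer you use; the `expand` helper and the 20-query setting are illustrative choices, not part of the original card.

```python
# Sketch of the "document expansion" idea described above (illustrative helper):
# append generated queries to the passage text and index the expanded text with
# any BM25 engine (Elasticsearch, OpenSearch, Lucene, ...).
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/msmarco-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def expand(passage: str, num_queries: int = 20) -> str:
    input_ids = tokenizer.encode(passage, max_length=320, truncation=True, return_tensors='pt')
    outputs = model.generate(
        input_ids=input_ids,
        max_length=64,
        do_sample=True,
        top_p=0.95,
        num_return_sequences=num_queries)
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # The expanded text is what gets indexed; the original passage is what gets displayed.
    return passage + " " + " ".join(queries)

print(expand("Python is an interpreted, high-level and general-purpose programming language."))
```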
doc2query/msmarco-t5-small-v1
doc2query
2022-01-10T10:19:24Z
12
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:sentence-transformers/embedding-training-data", "arxiv:1904.08375", "arxiv:2104.08663", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: en
datasets:
- sentence-transformers/embedding-training-data
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---

# doc2query/msmarco-t5-small-v1

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).

It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.

## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/msmarco-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."

input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=5)

print("Text:")
print(text)

print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f'{i + 1}: {query}')
```

**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.

## Training

This model was obtained by fine-tuning [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.

The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.

This model was trained on (query, passage) pairs from the [MS MARCO Passage-Ranking dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking).
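For the training-data-generation use case, the sketch below turns a list of unlabeled passages into (query, passage) pairs that could feed a dense-retrieval trainer; the pair format and the three-queries-per-passage choice are illustrative assumptions, not from the card.

```python
# Sketch of the "training data generation" idea described above: produce
# (generated query, passage) pairs from unlabeled texts. The number of queries
# per passage and the output format are illustrative choices.
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/msmarco-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

passages = [
    "Python is an interpreted, high-level and general-purpose programming language.",
    "BM25 is a ranking function used by search engines to score matching documents.",
]

pairs = []
for passage in passages:
    input_ids = tokenizer.encode(passage, max_length=320, truncation=True, return_tensors='pt')
    outputs = model.generate(input_ids=input_ids, max_length=64,
                             do_sample=True, top_p=0.95, num_return_sequences=3)
    for output in outputs:
        query = tokenizer.decode(output, skip_special_tokens=True)
        pairs.append((query, passage))  # one positive training pair per generated query

print(pairs[:3])
```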
huggingtweets/hostagekiller
huggingtweets
2022-01-10T10:05:54Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/hostagekiller/1641809138009/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1473236995497500675/FtwXDZld_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">HUSSY2K.</div> <div style="text-align: center; font-size: 14px;">@hostagekiller</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from HUSSY2K.. | Data | HUSSY2K. | | --- | --- | | Tweets downloaded | 3186 | | Retweets | 819 | | Short tweets | 395 | | Tweets kept | 1972 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/u2hpg02v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hostagekiller's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3tx11pqs) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3tx11pqs/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hostagekiller') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lianaling/title-generator-t5
lianaling
2022-01-10T06:51:36Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
## Title Generator

References this [notebook](https://shivanandroy.com/transformers-generating-arxiv-papers-title-from-abstracts/).

Uses `t5-small`, trained with a batch size of 16 for 4 epochs, utilising the arXiv dataset through the `SimpleTransformers` library. Around 15k samples were used for training and 3.7k for evaluation. This is a `.pkl` file.

### Prerequisites

Install the `simpletransformers` library.

```bash
pip install simpletransformers
```

### Example Usage

```py
import pickle

model = pickle.load(open("title-generator-t5-arxiv-16-4.pkl", "rb"))

# Prefix your text with 'summarize: '
text = ["summarize: " + """Venetian commodes imitated the curving lines and carved ornament of the French rocaille, but with a particular Venetian variation; the pieces were painted, often with landscapes or flowers or scenes from Guardi or other painters, or Chinoiserie, against a blue or green background, matching the colours of the Venetian school of painters whose work decorated the salons. 24] Ceiling of church of Santi Giovanni e Paolo in Venice, by Piazzetta (1727) Juno and Luna by Giovanni Battista Tiepolo (1735–45) Murano glass chandelier at the Ca Rezzonico (1758) Ballroom ceiling of the Ca Rezzonico with ceiling by Giovanni Battista Crosato (1753) In church construction, especially in the southern German-Austrian region, gigantic spatial creations are sometimes created for practical reasons alone, which, however, do not appear monumental, but are characterized by a unique fusion of architecture, painting, stucco, etc. ,."""]

# predict() returns a list of generated titles, one per input text
print("Generated title: " + model.predict(text)[0])
```
cook/cicero-similis
cook
2022-01-10T06:07:57Z
7
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "language model", "la", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- la
tags:
- language model
license: apache-2.0
datasets:
- Tesserae
- Phi5
- Thomas Aquinas
- Patrologia Latina
---

# Cicero-Similis

## Model description

A Latin language model, trained on Latin texts, and evaluated using the corpus of Cicero, as described in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, published in Ciceroniana On Line, Vol. V, #2.

## Intended uses & limitations

#### How to use

Normalize text using JV Replacement and tokenize using CLTK to separate enclitics such as "-que", then:

```python
from transformers import BertForMaskedLM, AutoTokenizer, FillMaskPipeline

tokenizer = AutoTokenizer.from_pretrained("cook/cicero-similis")
model = BertForMaskedLM.from_pretrained("cook/cicero-similis")
fill_mask = FillMaskPipeline(model=model, tokenizer=tokenizer, top_k=10_000)

# Cicero, De Re Publica, VI, 32, 2
# "animal" is found in A, Q, PhD manuscripts
# 'anima' H^1 Macr. et codd. Tusc.
results = fill_mask("inanimum est enim omne quod pulsu agitatur externo; quod autem est [MASK],")
```

#### Limitations and bias

Currently the model training data excludes modern and 19th-century texts, but that weakness is the model's strength; it is not aimed to be a one-size-fits-all model.

## Training data

Trained on the corpora Phi5, Tesserae, Thomas Aquinas, and Patrologia Latina.

## Training procedure

5 epochs, masked language modeling probability 0.15, effective batch size 32.

## Eval results

A novel evaluation metric is proposed in the paper _What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, published in Ciceroniana On Line, Vol. V, #2.

### BibTeX entry and citation info

TODO

_What Would Cicero Write? -- Examining Critical Textual Decisions with a Language Model_ by Todd Cook, published in Ciceroniana On Line, Vol. V, #2.
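The JV-replacement preprocessing step mentioned above can be sketched as follows; this tiny helper is a stand-in for the CLTK utilities the card points to, not the card's own code.

```python
# Minimal stand-in for the JV replacement step described above (the card itself
# relies on CLTK for this; the mapping here is the standard v->u, j->i normalization).
def jv_replace(text: str) -> str:
    table = str.maketrans({"j": "i", "J": "I", "v": "u", "V": "U"})
    return text.translate(table)

print(jv_replace("Iuppiter victoriam Jovi vovit"))
# -> "Iuppiter uictoriam Ioui uouit"
```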
huggingtweets/marylandmudflap-sniping_soup
huggingtweets
2022-01-10T00:52:48Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1412400542794539011/cnUXEkge_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/645703196602601472/2A41g0gW_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Soup & SCOTTY</div> <div style="text-align: center; font-size: 14px;">@marylandmudflap-sniping_soup</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Soup & SCOTTY. | Data | Soup | SCOTTY | | --- | --- | --- | | Tweets downloaded | 3237 | 3245 | | Retweets | 106 | 146 | | Short tweets | 1287 | 327 | | Tweets kept | 1844 | 2772 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/u88yo4gm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marylandmudflap-sniping_soup's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3dpmqtze) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3dpmqtze/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/marylandmudflap-sniping_soup') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ai-forever/ruclip-vit-base-patch32-384
ai-forever
2022-01-10T00:21:50Z
3,104
3
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# ruclip-vit-base-patch32-384

**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model for obtaining image and text similarities and re-ranking captions and pictures. RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and multimodal learning.

The model was trained by the [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.

* Task: `text ranking`; `image ranking`; `zero-shot image classification`
* Type: `encoder`
* Num Parameters: `150M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `512`
* Transformer Heads: `8`
* Image Size: `384`
* Vision Layers: `12`
* Vision Width: `768`
* Vision Patch Size: `32`

## Usage [Github](https://github.com/sberbank-ai/ru-clip)

```
pip install ruclip
```

```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-base-patch32-384", device="cuda")
```

## Performance

We have evaluated the performance on the following datasets:

| Dataset       | Metric Name    | Metric Result |
|:--------------|:---------------|:--------------|
| Food101       | acc            | 0.642         |
| CIFAR10       | acc            | 0.862         |
| CIFAR100      | acc            | 0.529         |
| Birdsnap      | acc            | 0.161         |
| SUN397        | acc            | 0.510         |
| Stanford Cars | acc            | 0.572         |
| DTD           | acc            | 0.390         |
| MNIST         | acc            | 0.404         |
| STL10         | acc            | 0.946         |
| PCam          | acc            | 0.506         |
| CLEVR         | acc            | 0.188         |
| Rendered SST2 | acc            | 0.508         |
| ImageNet      | acc            | 0.451         |
| FGVC Aircraft | mean-per-class | 0.053         |
| Oxford Pets   | mean-per-class | 0.587         |
| Caltech101    | mean-per-class | 0.834         |
| Flowers102    | mean-per-class | 0.449         |
| HatefulMemes  | roc-auc        | 0.537         |

# Authors

+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
ai-forever/ruclip-vit-large-patch14-336
ai-forever
2022-01-09T22:25:33Z
834
2
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# ruclip-vit-large-patch14-336

**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model for obtaining image and text similarities and re-ranking captions and pictures. RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and multimodal learning.

The model was trained by the [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.

* Task: `text ranking`; `image ranking`; `zero-shot image classification`
* Type: `encoder`
* Num Parameters: `430M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `768`
* Transformer Heads: `12`
* Image Size: `336`
* Vision Layers: `24`
* Vision Width: `1024`
* Vision Patch Size: `14`

## Usage [Github](https://github.com/sberbank-ai/ru-clip)

```
pip install ruclip
```

```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-large-patch14-336", device="cuda")
```

## Performance

We have evaluated the performance on the following datasets:

| Dataset       | Metric Name    | Metric Result |
|:--------------|:---------------|:--------------|
| Food101       | acc            | 0.712         |
| CIFAR10       | acc            | 0.906         |
| CIFAR100      | acc            | 0.591         |
| Birdsnap      | acc            | 0.213         |
| SUN397        | acc            | 0.523         |
| Stanford Cars | acc            | 0.659         |
| DTD           | acc            | 0.408         |
| MNIST         | acc            | 0.242         |
| STL10         | acc            | 0.956         |
| PCam          | acc            | 0.554         |
| CLEVR         | acc            | 0.142         |
| Rendered SST2 | acc            | 0.539         |
| ImageNet      | acc            | 0.488         |
| FGVC Aircraft | mean-per-class | 0.075         |
| Oxford Pets   | mean-per-class | 0.546         |
| Caltech101    | mean-per-class | 0.835         |
| Flowers102    | mean-per-class | 0.517         |
| HatefulMemes  | roc-auc        | 0.519         |

# Authors

+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
ai-forever/ruclip-vit-base-patch32-224
ai-forever
2022-01-09T21:34:27Z
76
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# ruclip-vit-base-patch32-224

**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model for obtaining image and text similarities and re-ranking captions and pictures. RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and multimodal learning.

The model was trained by the [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.

* Task: `text ranking`; `image ranking`; `zero-shot image classification`
* Type: `encoder`
* Num Parameters: `150M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `512`
* Transformer Heads: `8`
* Image Size: `224`
* Vision Layers: `12`
* Vision Width: `768`
* Vision Patch Size: `32`

## Usage [Github](https://github.com/sberbank-ai/ru-clip)

```
pip install ruclip
```

```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-base-patch32-224", device="cuda")
```

## Performance

We have evaluated the performance on the following datasets:

| Dataset       | Metric Name    | Metric Result |
|:--------------|:---------------|:--------------|
| Food101       | acc            | 0.505         |
| CIFAR10       | acc            | 0.818         |
| CIFAR100      | acc            | 0.504         |
| Birdsnap      | acc            | 0.115         |
| SUN397        | acc            | 0.452         |
| Stanford Cars | acc            | 0.433         |
| DTD           | acc            | 0.380         |
| MNIST         | acc            | 0.447         |
| STL10         | acc            | 0.932         |
| PCam          | acc            | 0.501         |
| CLEVR         | acc            | 0.148         |
| Rendered SST2 | acc            | 0.489         |
| ImageNet      | acc            | 0.375         |
| FGVC Aircraft | mean-per-class | 0.033         |
| Oxford Pets   | mean-per-class | 0.560         |
| Caltech101    | mean-per-class | 0.786         |
| Flowers102    | mean-per-class | 0.401         |
| HatefulMemes  | roc-auc        | 0.564         |

# Authors

+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
ai-forever/ruclip-vit-base-patch16-224
ai-forever
2022-01-09T21:34:11Z
14
1
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# ruclip-vit-base-patch16-224

**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model for obtaining image and text similarities and re-ranking captions and pictures. RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and multimodal learning.

The model was trained by the [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.

* Task: `text ranking`; `image ranking`; `zero-shot image classification`
* Type: `encoder`
* Num Parameters: `150M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `512`
* Transformer Heads: `8`
* Image Size: `224`
* Vision Layers: `12`
* Vision Width: `768`
* Vision Patch Size: `16`

## Usage [Github](https://github.com/sberbank-ai/ru-clip)

```
pip install ruclip
```

```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-base-patch16-224", device="cuda")
```

## Performance

We have evaluated the performance on the following datasets:

| Dataset       | Metric Name    | Metric Result |
|:--------------|:---------------|:--------------|
| Food101       | acc            | 0.552         |
| CIFAR10       | acc            | 0.810         |
| CIFAR100      | acc            | 0.496         |
| Birdsnap      | acc            | 0.117         |
| SUN397        | acc            | 0.462         |
| Stanford Cars | acc            | 0.487         |
| DTD           | acc            | 0.401         |
| MNIST         | acc            | 0.464         |
| STL10         | acc            | 0.932         |
| PCam          | acc            | 0.505         |
| CLEVR         | acc            | 0.128         |
| Rendered SST2 | acc            | 0.527         |
| ImageNet      | acc            | 0.401         |
| FGVC Aircraft | mean-per-class | 0.043         |
| Oxford Pets   | mean-per-class | 0.595         |
| Caltech101    | mean-per-class | 0.775         |
| Flowers102    | mean-per-class | 0.388         |
| HatefulMemes  | roc-auc        | 0.516         |

# Authors

+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
nepp1d0/SMILES_tokenizer
nepp1d0
2022-01-09T20:25:30Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
Tokenizer trained on BindingDB SMILES encodings. It was trained on 1,008,081 samples, with one blank space inserted after each character in the SMILES string.
huggingtweets/elxokas-evilafm-ibaillanos
huggingtweets
2022-01-09T19:38:49Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/elxokas-evilafm-ibaillanos/1641757124234/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1476303212672131074/kuPm3Cvp_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1473427376696705024/mzWRw3ML_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1402480040877699075/LShUbbef_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ibai & Alexelcapo & XOKAS</div> <div style="text-align: center; font-size: 14px;">@elxokas-evilafm-ibaillanos</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ibai & Alexelcapo & XOKAS. | Data | Ibai | Alexelcapo | XOKAS | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3207 | 3245 | | Retweets | 28 | 12 | 187 | | Short tweets | 669 | 231 | 421 | | Tweets kept | 2553 | 2964 | 2637 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ed2k4vcn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elxokas-evilafm-ibaillanos's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/169fwvwo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/169fwvwo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/elxokas-evilafm-ibaillanos') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ibaillanos
huggingtweets
2022-01-09T18:36:11Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/ibaillanos/1641753367000/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1476303212672131074/kuPm3Cvp_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ibai</div> <div style="text-align: center; font-size: 14px;">@ibaillanos</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ibai. | Data | Ibai | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 28 | | Short tweets | 669 | | Tweets kept | 2553 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qyv6lsf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ibaillanos's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cxnkmkg6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cxnkmkg6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ibaillanos') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ydshieh/bert2bert-cnn_dailymail-fp16
ydshieh
2022-01-09T14:03:34Z
7
2
transformers
[ "transformers", "tf", "encoder-decoder", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# Bert2Bert Summarization with 🤗 EncoderDecoder Framework

[This is a TensorFlow version converted from the original PyTorch [Bert2Bert](https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16)]

This model is a Bert2Bert model fine-tuned on summarization.

Bert2Bert is an `EncoderDecoderModel`, meaning that both the encoder and the decoder are `bert-base-uncased` BERT models. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the two pretrained models can simply be loaded into the framework via:

```python
from transformers import TFEncoderDecoderModel

bert2bert = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```

The decoder of a `TFEncoderDecoderModel` needs cross-attention layers and usually makes use of causal masking for auto-regressive generation.
Thus, `bert2bert` was fine-tuned on the `CNN/Daily Mail` dataset, and the resulting model `bert2bert-cnn_dailymail-fp16` is uploaded here.

## Example

The model is by no means a state-of-the-art model, but nevertheless produces reasonable summarization results. It was mainly fine-tuned as a proof-of-concept for the 🤗 EncoderDecoder Framework.

The model can be used as follows:

```python
from transformers import AutoTokenizer, TFEncoderDecoderModel

loc = "ydshieh/bert2bert-cnn_dailymail-fp16"

model = TFEncoderDecoderModel.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)

article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity.
"As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents."""

input_ids = tokenizer(article, return_tensors="tf").input_ids
output_ids = model.generate(input_ids)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
# should produce
# sae was founded in 1856, five years before the civil war. the fraternity has had to work
# hard to change recently. the university of oklahoma president says the university's
# affiliation with the fraternity is permanently done. the sae has had a string of
# members in recent months.
```

## Training script:

For the original PyTorch BERT2BERT model, please follow this tutorial to see how to warm-start a BERT2BERT model: https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing

The obtained results should be:

| - | Rouge2 - mid - precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure |
|----------|:-------------:|:------:|:------:|
| **CNN/Daily Mail** | 16.12 | 17.07 | **16.1** |
ying-tina/wav2vec2-base-timit-demo-colab-32-epochs30
ying-tina
2022-01-09T09:21:52Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab-32-epochs30 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab-32-epochs30 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4615 - Wer: 0.3434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5243 | 4.0 | 500 | 1.4532 | 0.9540 | | 0.6178 | 8.0 | 1000 | 0.5490 | 0.4627 | | 0.223 | 12.0 | 1500 | 0.4513 | 0.3881 | | 0.1299 | 16.0 | 2000 | 0.4573 | 0.3698 | | 0.0875 | 20.0 | 2500 | 0.4950 | 0.3637 | | 0.0613 | 24.0 | 3000 | 0.4327 | 0.3479 | | 0.0478 | 28.0 | 3500 | 0.4615 | 0.3434 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
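The auto-generated card above stops at training details; a minimal usage sketch (not part of the original card, and assuming 16 kHz mono input audio, as is standard for wav2vec2 checkpoints) might look like this:

```python
# Minimal usage sketch (assumption: 16 kHz mono audio, standard for wav2vec2 models).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="ying-tina/wav2vec2-base-timit-demo-colab-32-epochs30")

# Path to a local WAV file; replace with your own recording.
print(asr("sample.wav")["text"])
```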
NahedAbdelgaber/evaluating-student-writing-distibert-ner-with-metric
NahedAbdelgaber
2022-01-09T06:45:10Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: evaluating-student-writing-distibert-ner-with-metric results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # evaluating-student-writing-distibert-ner-with-metric This model is a fine-tuned version of [NahedAbdelgaber/evaluating-student-writing-distibert-ner](https://huggingface.co/NahedAbdelgaber/evaluating-student-writing-distibert-ner) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7535 - Precision: 0.0614 - Recall: 0.2590 - F1: 0.0993 - Accuracy: 0.6188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.7145 | 1.0 | 1755 | 0.7683 | 0.0546 | 0.2194 | 0.0875 | 0.6191 | | 0.6608 | 2.0 | 3510 | 0.7504 | 0.0570 | 0.2583 | 0.0934 | 0.6136 | | 0.5912 | 3.0 | 5265 | 0.7535 | 0.0614 | 0.2590 | 0.0993 | 0.6188 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
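As with most token-classification checkpoints, the model can be exercised through the standard pipeline; the snippet below is an illustrative addition (the example sentence and the printed fields are not part of the original card, and the label scheme is whatever the fine-tuning data used).

```python
# Illustrative usage sketch for the token-classification model described above.
from transformers import pipeline

ner = pipeline("token-classification",
               model="NahedAbdelgaber/evaluating-student-writing-distibert-ner-with-metric",
               aggregation_strategy="simple")

text = "On my perfect day I would wake up early, study for an hour, and then play soccer with my friends."
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"])
```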
RenZHU/t5-small-finetuned-xsum-original
RenZHU
2022-01-09T06:04:38Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum-original results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 28.8838 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-original This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4436 - Rouge1: 28.8838 - Rouge2: 8.1114 - Rougel: 22.8318 - Rougelsum: 22.8318 - Gen Len: 18.8141 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6754 | 1.0 | 51012 | 2.4436 | 28.8838 | 8.1114 | 22.8318 | 22.8318 | 18.8141 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
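Since the card lists only training details, here is an illustrative summarization call (not part of the original card); it assumes the checkpoint keeps T5's standard summarization prefix in its config, as the base `t5-small` does, so the pipeline can handle the prefixing.

```python
# Illustrative usage sketch (assumption: the checkpoint retains t5-small's
# task-specific "summarize: " prefix in its config).
from transformers import pipeline

summarizer = pipeline("summarization", model="RenZHU/t5-small-finetuned-xsum-original")

article = ("The local council confirmed on Tuesday that the bridge will be closed "
           "for six weeks of repair work, with buses diverted via the high street.")
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```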
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-60.0sparse-qat-lt
vuiseng9
2022-01-09T03:14:14Z
31
0
transformers
[ "transformers", "pytorch", "onnx", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimizations include:

1. Magnitude sparsification at 60% upon initialization. Parameters are ranked globally via their absolute norm. Only linear layers of self-attention and ffnn are targeted.
2. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers.
3. Custom distillation with the larger model ```bert-large-uncased-whole-word-masking-finetuned-squad```

```
eval_exact_match = 80.3122
eval_f1 = 87.6162
eval_samples = 10784
```

# Setup

```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt

# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"

# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}

# Additional dependencies
pip install onnx
```

# Train

```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
BASE_MODEL=/path/to/cloned_repo_above #to-revise

wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-60.0sparse-qat-lt/raw/main/nncf_bert_squad_sparsity.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise

OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-60.0sparse-qat-lt

cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR

export CUDA_VISIBLE_DEVICES=0
NEPOCH=5

python run_qa.py \
    --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
    --optimize_model_before_eval \
    --optimized_checkpoint $BASE_MODEL \
    --dataset_name squad \
    --do_eval \
    --do_train \
    --evaluation_strategy steps \
    --eval_steps 250 \
    --learning_rate 3e-5 \
    --lr_scheduler_type cosine_with_restarts \
    --warmup_ratio 0.25 \
    --cosine_cycles 1 \
    --teacher bert-large-uncased-whole-word-masking-finetuned-squad \
    --teacher_ratio 0.9 \
    --num_train_epochs $NEPOCH \
    --per_device_eval_batch_size 128 \
    --per_device_train_batch_size 16 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --save_steps 250 \
    --nncf_config $NNCF_CFG \
    --logging_steps 1 \
    --overwrite_output_dir \
    --run_name $RUNID \
    --output_dir $OUTDIR
```

# Eval

This repo must be cloned locally.

```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-60.0sparse-qat-lt
MODELROOT=/path/to/cloned_repo_above #to-revise

export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-60.0sparse-qat-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR

nohup python run_qa.py \
    --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
    --dataset_name squad \
    --optimize_model_before_eval \
    --qat_checkpoint $MODELROOT/checkpoint-22000 \
    --nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \
    --to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-60.0sparse-qat-lt.onnx \
    --do_eval \
    --per_device_eval_batch_size 128 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --overwrite_output_dir \
    --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
vuiseng9
2022-01-09T03:11:21Z
6
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "arxiv:2109.04838", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
This model is a downstream fine-tuning of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid). "filled" means that the unstructured fine-grained sparsified parameters are allowed to learn during fine-tuning. "lt" means distillation with a larger model as teacher, i.e. ```bert-large-uncased-whole-word-masking-finetuned-squad```

```
eval_exact_match = 80.3311
eval_f1 = 87.69
eval_samples = 10784
```

This model is a replication of the [block pruning paper](https://arxiv.org/abs/2109.04838) with its open-sourced codebase (forked and modified). To reproduce this model, please follow the [documentation here](https://github.com/vuiseng9/nn_pruning/blob/reproduce-evaluation/reproduce-eval/readme.md) until step 3.

# Eval

The model cannot be evaluated with the HF QA example out-of-the-box, as the final dimension of the model architecture has been realized. Follow the custom setup below.

```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt

# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"

# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
```

This repo must be cloned locally.

```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
```

Add ```--optimize_model_before_eval``` and ```--optimized_checkpoint /path/to/clone``` during evaluation.

```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-cropped
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir $OUTDIR

nohup python run_qa.py \
    --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
    --dataset_name squad \
    --optimize_model_before_eval \
    --optimized_checkpoint /path/to/clone/bert-base-squadv1-block-pruning-hybrid-filled-lt \
    --do_eval \
    --per_device_eval_batch_size 128 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --overwrite_output_dir \
    --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
RenZHU/t5-small-finetuned-xsum
RenZHU
2022-01-09T03:09:55Z
106
1
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the XSum dataset. It achieves the following results on the evaluation set: - Loss: 2.5310 - Rouge1: 27.9232 - Rouge2: 7.5324 - Rougel: 22.035 - Rougelsum: 22.0304 - Gen Len: 18.8116 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | 2.7564 | 1.0 | 51012 | 2.5310 | 27.9232 | 7.5324 | 22.035 | 22.0304 | 18.8116 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
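The auto-generated card above omits a usage example; a minimal sketch with the `transformers` summarization pipeline (the article text is a placeholder) could look like this:

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint as a summarization pipeline.
summarizer = pipeline("summarization", model="RenZHU/t5-small-finetuned-xsum")

article = "Replace this with the news article you want to summarize ..."  # placeholder input
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```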
hf-test/xls-r-ab-test
hf-test
2022-01-09T00:36:20Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 156.8787 - Wer: 1.3460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
LanceaKing/spkrec-ecapa-cnceleb
LanceaKing
2022-01-08T09:27:18Z
12
4
speechbrain
[ "speechbrain", "embeddings", "Speaker", "Verification", "Identification", "pytorch", "ECAPA", "TDNN", "zh", "dataset:cnceleb", "arxiv:2106.04624", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: "zh" thumbnail: tags: - speechbrain - embeddings - Speaker - Verification - Identification - pytorch - ECAPA - TDNN license: "apache-2.0" datasets: - cnceleb metrics: - EER --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Speaker Verification with ECAPA-TDNN embeddings on cnceleb This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain. The system can be used to extract speaker embeddings as well. It is trained on cnceleb 1+ cnceleb2 training data. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance on cnceleb1-test set(Cleaned) is: | Release | EER(%) | minDCF | |:-------------:|:--------------:|:--------------:| ## Pipeline description This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Compute your speaker embeddings ```python import torchaudio from speechbrain.pretrained import EncoderClassifier classifier = EncoderClassifier.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb") signal, fs =torchaudio.load('samples/audio_samples/example1.wav') embeddings = classifier.encode_batch(signal) ``` The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*. ### Perform Speaker Verification ```python from speechbrain.pretrained import SpeakerRecognition verification = SpeakerRecognition.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb", savedir="pretrained_models/spkrec-ecapa-cnceleb") score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-cnceleb/example1.wav", "speechbrain/spkrec-ecapa-cnceleb/example2.flac") ``` The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise. ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (aa018540). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/LanceaKing/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/CNCeleb/SpeakerRec python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing ECAPA-TDNN ``` @inproceedings{DBLP:conf/interspeech/DesplanquesTD20, author = {Brecht Desplanques and Jenthe Thienpondt and Kris Demuynck}, editor = {Helen Meng and Bo Xu and Thomas Fang Zheng}, title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation in {TDNN} Based Speaker Verification}, booktitle = {Interspeech 2020}, pages = {3830--3834}, publisher = {{ISCA}}, year = {2020}, } ``` # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and Fran莽ois Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
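As a complement to the examples above, here is a minimal sketch of the cosine-distance verification described in the pipeline section, computed directly from the extracted embeddings; the audio file names and the decision threshold are placeholders.

```python
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb")

# Placeholder file names; recordings should be 16 kHz, single channel.
sig_a, _ = torchaudio.load("enroll.wav")
sig_b, _ = torchaudio.load("test.wav")

emb_a = classifier.encode_batch(sig_a).squeeze()
emb_b = classifier.encode_batch(sig_b).squeeze()

score = torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=-1)
same_speaker = score > 0.5  # placeholder threshold; tune it on a development set
print(score.item(), bool(same_speaker))
```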
raynardj/wenyanwen-ancient-translate-to-modern
raynardj
2022-01-08T04:22:30Z
162
32
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "translation", "古文", "文言文", "ancient", "classical", "zh", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - zh - zh tags: - translation - 古文 - 文言文 - ancient - classical widget: - text: "此诚危急存亡之秋也" --- # From Classical(ancient) Chinese to Modern Chinese > This model translate Classical(ancient) Chinese to Modern Chinese, so I guess who's interested in the problemset can speak at least modern Chinese, hence... let me continue the documentation in Chinese # 文言文(古文)到现代文的翻译器 > 这个模型已有做成应用, [【随无涯】](https://huggingface.co/spaces/raynardj/duguwen-classical-chinese-to-morden-translate)是一个huggingface spaces + streamlit 的古文阅读应用(含海量书籍), 可以在阅读时翻译 > 输入文言文, 可以是断句 或者 未断句的文言文, 模型会预测现代文的表述。 其他模型: * 从[现代文翻译到文言文](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient) > 从文言文到现代文的翻译器, 欢迎前往[我的github文言诗词项目页面探讨、加⭐️ ](https://github.com/raynardj/yuan) > 训练语料是就是九十多万句句对, [数据集链接📚](https://github.com/BangBOOM/Classical-Chinese)。 训练时source序列(古文序列), 按照50%的概率整句去除所有标点符号。 ## 推荐的inference 通道 **注意** * 你必须将```generate```函数的```eos_token_id```设置为102就可以翻译出完整的语句, 不然翻译完了会有残留的语句(因为做熵的时候用pad标签=-100导致)。 目前huggingface 页面上compute按钮会有这个问题, 推荐使用以下代码来得到翻译结果 * 请设置```generate```的参数```num_beams>=3```, 以达到较好的翻译效果 * 请设置```generate```的参数```max_length```256, 不然结果会吃掉句子 ```python from transformers import ( EncoderDecoderModel, AutoTokenizer ) PRETRAINED = "raynardj/wenyanwen-ancient-translate-to-modern" tokenizer = AutoTokenizer.from_pretrained(PRETRAINED) model = EncoderDecoderModel.from_pretrained(PRETRAINED) def inference(text): tk_kwargs = dict( truncation=True, max_length=128, padding="max_length", return_tensors='pt') inputs = tokenizer([text,],**tk_kwargs) with torch.no_grad(): return tokenizer.batch_decode( model.generate( inputs.input_ids, attention_mask=inputs.attention_mask, num_beams=3, max_length=256, bos_token_id=101, eos_token_id=tokenizer.sep_token_id, pad_token_id=tokenizer.pad_token_id, ), skip_special_tokens=True) ``` ## 目前版本的案例 > 当然, 拿比较熟知的语句过来, 通常会有些贻笑大方的失误, 大家如果有好玩的调戏案例, 也欢迎反馈 ```python >>> inference('非我族类其心必异') ['不 是 我 们 的 族 类 , 他 们 的 心 思 必 然 不 同 。'] >>> inference('肉食者鄙未能远谋') ['吃 肉 的 人 鄙 陋 , 不 能 长 远 谋 划 。'] # 这里我好几批模型都翻不出这个**输**字(甚至有一个版本翻成了秦始皇和汉武帝), 可能并不是很古朴的用法, >>> inference('江山如此多娇引无数英雄竞折腰惜秦皇汉武略输文采唐宗宋祖稍逊风骚') ['江 山 如 此 多 , 招 引 无 数 的 英 雄 , 竞 相 折 腰 , 可 惜 秦 皇 、 汉 武 , 略 微 有 文 采 , 唐 宗 、 宋 祖 稍 稍 逊 出 风 雅 。'] >>> inference("清风徐来水波不兴") ['清 风 慢 慢 吹 来 , 水 波 不 兴 。'] >>> inference("无他唯手熟尔") ['没 有 别 的 事 , 只 是 手 熟 罢 了 。'] >>> inference("此诚危急存亡之秋也") ['这 实 在 是 危 急 存 亡 的 时 候 。'] ``` ## 其他文言诗词的资源 * [项目源代码 🌟, 欢迎+star提pr](https://github.com/raynardj/yuan) * [跨语种搜索 🔎](https://huggingface.co/raynardj/xlsearch-cross-lang-search-zh-vs-classicical-cn) * [现代文翻译古汉语的模型 ⛰](https://huggingface.co/raynardj/wenyanwen-chinese-translate-to-ancient) * [古汉语到现代文的翻译模型, 输入可以是未断句的句子 🚀](https://huggingface.co/raynardj/wenyanwen-ancient-translate-to-modern) * [断句模型 🗡](https://huggingface.co/raynardj/classical-chinese-punctuation-guwen-biaodian) * [意境关键词 和 藏头写诗🤖](https://huggingface.co/raynardj/keywords-cangtou-chinese-poetry)
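Note that the `inference` helper in the card above calls `torch.no_grad()` without importing `torch`; a self-contained version of the same usage (the input sentence is just one of the card's own examples) could be:

```python
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

PRETRAINED = "raynardj/wenyanwen-ancient-translate-to-modern"
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
model = EncoderDecoderModel.from_pretrained(PRETRAINED)

def inference(text):
    # Tokenize the Classical Chinese input exactly as in the card's recommended setup.
    inputs = tokenizer([text], truncation=True, max_length=128,
                       padding="max_length", return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            attention_mask=inputs.attention_mask,
            num_beams=3,
            max_length=256,
            bos_token_id=101,
            eos_token_id=tokenizer.sep_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

print(inference("此诚危急存亡之秋也"))
```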
gborn/autonlp-news-summarization-483413089
gborn
2022-01-07T23:10:47Z
106
1
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "en", "dataset:gborn/autonlp-data-news-summarization", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - gborn/autonlp-data-news-summarization co2_eq_emissions: 210.6348731063569 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 483413089 - CO2 Emissions (in grams): 210.6348731063569 ## Validation Metrics - Loss: 1.8478657007217407 - Rouge1: 50.5981 - Rouge2: 26.2167 - RougeL: 46.0513 - RougeLsum: 46.061 - Gen Len: 13.5987 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/gborn/autonlp-news-summarization-483413089 ```
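Equivalently, a minimal Python sketch of the same Inference API call (the input text is a placeholder) might be:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/gborn/autonlp-news-summarization-483413089"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}  # replace with your token

payload = {"inputs": "Replace this with the news article you want to summarize ..."}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```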
huggingtweets/melspurgatory
huggingtweets
2022-01-07T16:32:41Z
106
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/melspurgatory/1641573097526/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1435429688831135746/t5TELThj_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">matthew</div> <div style="text-align: center; font-size: 14px;">@melspurgatory</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from matthew. | Data | matthew | | --- | --- | | Tweets downloaded | 3220 | | Retweets | 429 | | Short tweets | 541 | | Tweets kept | 2250 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29yvc0bm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @melspurgatory's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w9infsn0) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w9infsn0/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/melspurgatory') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lincoln/2021twitchfr-conv-bert-small
lincoln
2022-01-07T15:25:20Z
6
0
transformers
[ "transformers", "pytorch", "tf", "tensorboard", "convbert", "feature-extraction", "twitch", "fr", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: - fr license: mit pipeline_tag: "feature-extraction" widget: - text: LUL +1 xD La Fronce ! tags: - feature-extraction - convbert - twitch --- ## Modèle de langue sur les données Twitch FR L'expérimentation menée au sein de Lincoln avait pour principal objectif de mettre en œuvre des techniques NLP from scratch sur un corpus de messages issus d’un chat Twitch. Ces derniers sont exprimés en français, mais sur une plateforme internet avec le vocabulaire internet que cela implique (fautes, vocabulaire communautaires, abréviations, anglicisme, emotes, ...). Nos contraintes sont celles d’une entreprise n’ayant pas une volumétrie excessive de données et une puissance infinie de calcul. Il a été nécessaire de construire un nouveau tokenizer afin de mieux correspondre à notre corpus plutôt qu’un tokenizer français existant. Note corpus étant faible en volumétrie par rapport aux données habituelles pour entrainer un modèle BERT, nous avons opté pour l’entrainement d’un modèle dit « small ». Et il a été montré dans la littérature qu’un corpus de quelques giga octets peut donner de bons résultats, c’est pourquoi nous avons continué avec notre corpus. La limite de la puissance de calcul a été contourné à l’aide d’une nouvelle architecture d’apprentissage basée sur un double modèle générateur / discriminateur. Ceci nous a permis d’entrainer un modèle de langue ConvBERT sur nos données, ainsi qu’un modèle de masking en quelques heures sur une carte GPU V100. _Nous garantissons pas la stabilité du modèle sur le long terme. Modèle réalisé dans le cadre d'un POC._ ## Données | Streamer | Nbr de messages | Categories notables en 2021 | | --------------------------------------------- | --------------- | ---------------------------------- | | Ponce | 2 604 935 | Chatting/Mario Kart/FIFA | | Domingo | 1 209 703 | Chatting/talk-shows/FM2O21 | | Mistermv | 1 205 882 | Isaac/Special events/TFT | | Zerator | 900 894 | New World/WOW/Valorant | | Blitzstream | 821 585 | Chess | | Squeezie | 602 148 | Chatting / Minecraft | | Antoinedaniellive | 548 497 | Geoguessr | | Jeanmassietaccropolis/jeanmassiet | 301 387 | Talk-shows/chatting/special events | | Samueletienne | 215 956 | chatting | Sur la période du 12/03/2021 au 22/07/2021. La totalité des messages comptent 9 410 987 messages sur ces neufs streamers. Ces messages sont issus du canal IRC, donc n’ont pas subi de modération Les données d'entrainement sont basé sur le format d'entrainement du modèle ELECTRA. Cela nécessite de formater les données en paragraphe, séparés par phrase. Nous avons choisi de regrouper les messages dans une fenêtre de 60 secondes, faisant office de paragraphe, avec les conditions suivantes : * Longueur supérieure à 170 (ce qui représente en moyenne 50 tokens) afin de ne pas créer des instances ayant pas d’information car majoritairement vide : un padding sera nécessaire et pénalise la vitesse d’apprentissage. * 128 tokens maximums (défaut) Si la longueur maximale est atteinte, une deuxième instance est créée. Au final, la volumétrie d'instance d'entrainement est de 554 974. ## Application Voir github public [lincoln/twitchatds](https://github.com/Lincoln-France/twitchatds) pour les détails d'implémentation et les résultats. ## Remarques * Expérimentation ponctuelle * Les métriques d'entrainement sont disponibles dans l'onglet _Training metrics_ * Pour une meilleure stabilité, les données doivent être plus hétérogènes et volumineuse. Le modèle doit être entrainé + de 24h. 
## Usage ```python from transformers import AutoTokenizer, ConvBertModel from transformers import FeatureExtractionPipeline model_name = 'lincoln/2021twitchfr-conv-bert-small' loaded_tokenizer = AutoTokenizer.from_pretrained(model_name) loaded_model = ConvBertModel.from_pretrained(model_name) nlp = FeatureExtractionPipeline(model=loaded_model, tokenizer=loaded_tokenizer) nlp("<3 <3 les modos") ``` ## Modèles: * [2021twitchfr-conv-bert-small](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small) * [2021twitchfr-conv-bert-small-mlm](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm) * [2021twitchfr-conv-bert-small-mlm-simcse](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm-simcse)
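Beyond the feature-extraction pipeline shown above, a common way to turn the token-level features into a single message embedding is mean pooling over the attention mask; the sketch below makes that assumption and is not necessarily what was used downstream.

```python
import torch
from transformers import AutoTokenizer, ConvBertModel

model_name = "lincoln/2021twitchfr-conv-bert-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ConvBertModel.from_pretrained(model_name)

messages = ["<3 <3 les modos", "LUL +1 xD La Fronce !"]
enc = tokenizer(messages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state          # [batch, seq_len, hidden]

mask = enc["attention_mask"].unsqueeze(-1)           # zero out padding positions
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling
print(embeddings.shape)                              # [batch, hidden]
```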
lincoln/2021twitchfr-conv-bert-small-mlm
lincoln
2022-01-07T15:23:20Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "convbert", "fill-mask", "twitch", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - fr license: mit pipeline_tag: "fill-mask" widget: - text: <mask> tt le monde ! - text: cc<mask> va? - text: <mask> la Fronce ! tags: - fill-mask - convbert - twitch --- ## Modèle de Masking sur les données Twitch FR L'expérimentation menée au sein de Lincoln avait pour principal objectif de mettre en œuvre des techniques NLP from scratch sur un corpus de messages issus d’un chat Twitch. Ces derniers sont exprimés en français, mais sur une plateforme internet avec le vocabulaire internet que cela implique (fautes, vocabulaire communautaires, abréviations, anglicisme, emotes, ...). Nos contraintes sont celles d’une entreprise n’ayant pas une volumétrie excessive de données et une puissance infinie de calcul. Il a été nécessaire de construire un nouveau tokenizer afin de mieux correspondre à notre corpus plutôt qu’un tokenizer français existant. Note corpus étant faible en volumétrie par rapport aux données habituelles pour entrainer un modèle BERT, nous avons opté pour l’entrainement d’un modèle dit « small ». Et il a été montré dans la littérature qu’un corpus de quelques giga octets peut donner de bons résultats, c’est pourquoi nous avons continué avec notre corpus. La limite de la puissance de calcul a été contourné à l’aide d’une nouvelle architecture d’apprentissage basée sur un double modèle générateur / discriminateur. Ceci nous a permis d’entrainer un modèle de langue ConvBERT sur nos données, ainsi qu’un modèle de masking en quelques heures sur une carte GPU V100. _Nous garantissons pas la stabilité du modèle sur le long terme. Modèle réalisé dans le cadre d'un POC._ ## Données | Streamer | Nbr de messages | Categories notables en 2021 | | --------------------------------------------- | --------------- | ---------------------------------- | | Ponce | 2 604 935 | Chatting/Mario Kart/FIFA | | Domingo | 1 209 703 | Chatting/talk-shows/FM2O21 | | Mistermv | 1 205 882 | Isaac/Special events/TFT | | Zerator | 900 894 | New World/WOW/Valorant | | Blitzstream | 821 585 | Chess | | Squeezie | 602 148 | Chatting / Minecraft | | Antoinedaniellive | 548 497 | Geoguessr | | Jeanmassietaccropolis/jeanmassiet | 301 387 | Talk-shows/chatting/special events | | Samueletienne | 215 956 | chatting | Sur la période du 12/03/2021 au 22/07/2021. La totalité des messages comptent 9 410 987 messages sur ces neufs streamers. Ces messages sont issus du canal IRC, donc n’ont pas subi de modération Les données d'entrainement du modèle de masking contient 899 652 instances de train et 99 962 instances de test. Les données ont été formaté en concaténant les messages sur une fenêtre de 10s. Cette fenêtre correspond à une fenêtre courte qui regroupe des messages très « proches » temporellement. * 512 tokens max * Probabilité du « mask » : 15% ## Application Voir github public [lincoln/twitchatds](https://github.com/Lincoln-France/twitchatds) pour les détails d'implémentation et les résultats. ## Remarques * Expérimentation ponctuelle * Les métriques d'entrainement sont disponibles dans l'onglet _Training metrics_ * Pour une meilleure stabilité, les données doivent être plus hétérogènes et volumineuse. Le modèle doit être entrainé + de 24h. * Le token `<mask>` fonctionne probablement mieux sans laisser d'espace à gauche. Cela est dû au fait que `lstrip=False` pour ce token spécial. 
## Usage ```python from transformers import AutoTokenizer, ConvBertForMaskedLM from transformers import pipeline model_name = 'lincoln/2021twitchfr-conv-bert-small-mlm' tokenizer_name = 'lincoln/2021twitchfr-conv-bert-small' loaded_tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) loaded_model = ConvBertForMaskedLM.from_pretrained(model_name) nlp = pipeline('fill-mask', model=loaded_model, tokenizer=loaded_tokenizer) nlp('<mask> les gens !') ``` ## Modèles: * [2021twitchfr-conv-bert-small](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small) * [2021twitchfr-conv-bert-small-mlm](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm) * [2021twitchfr-conv-bert-small-mlm-simcse](https://huggingface.co/lincoln/2021twitchfr-conv-bert-small-mlm-simcse)
Kien/distilbert-base-uncased-finetuned-cola
Kien
2022-01-07T15:00:42Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5232819075279987 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5327 - Matthews Correlation: 0.5233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5314 | 1.0 | 535 | 0.4955 | 0.4270 | | 0.3545 | 2.0 | 1070 | 0.5327 | 0.5233 | | 0.2418 | 3.0 | 1605 | 0.6180 | 0.5132 | | 0.1722 | 4.0 | 2140 | 0.7344 | 0.5158 | | 0.1243 | 5.0 | 2675 | 0.8581 | 0.5196 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
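For quick inspection, a minimal usage sketch with the text-classification pipeline; note the raw labels are likely the generic `LABEL_0`/`LABEL_1` unless `id2label` was customized, with `LABEL_1` usually corresponding to "acceptable" for CoLA.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Kien/distilbert-base-uncased-finetuned-cola")

print(classifier("The book was written by the author."))   # grammatical sentence
print(classifier("The book was wrote by author the."))     # ungrammatical sentence
```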
s87204/distilbert-base-uncased-finetuned-cola
s87204
2022-01-07T14:03:20Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5365264430934975 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8505 - Matthews Correlation: 0.5365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5201 | 1.0 | 535 | 0.5345 | 0.4153 | | 0.3469 | 2.0 | 1070 | 0.5033 | 0.5109 | | 0.2367 | 3.0 | 1605 | 0.6589 | 0.5209 | | 0.1705 | 4.0 | 2140 | 0.7778 | 0.5354 | | 0.125 | 5.0 | 2675 | 0.8505 | 0.5365 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
ietz/distilroberta-base-finetuned-jira-qt-issue-title
ietz
2022-01-07T12:27:11Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "jira", "code", "issue", "development", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - en tags: - jira - code - issue - development license: mit --- `distilroberta-base` fine-tuned for masked language modeling on 126213 Qt Jira issue titles for up to 50 epochs.
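A minimal usage sketch with the fill-mask pipeline (the example issue title is made up):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ietz/distilroberta-base-finetuned-jira-qt-issue-title")

# RoBERTa-style models use <mask> as the mask token.
for pred in fill_mask("QTextEdit crashes when <mask> a large document"):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```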
doc2query/stackexchange-title-body-t5-small-v1
doc2query
2022-01-07T08:33:30Z
112
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:flax-sentence-embeddings/stackexchange_title_body_jsonl", "arxiv:1904.08375", "arxiv:2104.08663", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - flax-sentence-embeddings/stackexchange_title_body_jsonl widget: - text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." license: apache-2.0 --- # doc2query/stackexchange-title-body-t5-small-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini. - **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration model_name = 'doc2query/stackexchange-title-body-t5-small-v1' tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt') outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=5) print("Text:") print(text) print("\nGenerated Queries:") for i in range(len(outputs)): query = tokenizer.decode(outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') ``` **Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it. ## Training This model was obtained by fine-tuning [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) for 321k training steps. For the training script, see `train_script.py` in this repository. The input text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. This model was trained on (title, question_body) pairs from StackExchange.
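As a hedged sketch of the document-expansion workflow described above (reusing the `tokenizer` and `model` objects from the usage example), the sampled queries can simply be appended to the passage text before it is indexed with BM25; the indexing step itself is out of scope here.

```python
def expand_passage(text, tokenizer, model, n_queries=20):
    """Append n_queries sampled doc2query questions to a passage (document expansion)."""
    input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors="pt")
    outputs = model.generate(
        input_ids=input_ids,
        max_length=64,
        do_sample=True,
        top_p=0.95,
        num_return_sequences=n_queries,
    )
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # The expanded string is what gets fed to the BM25 index (Elasticsearch, Lucene, Pyserini, ...).
    return text + " " + " ".join(queries)

expanded = expand_passage(text, tokenizer, model)  # `text`, `tokenizer`, `model` as defined above
```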
jiobiala24/wav2vec2-base-checkpoint-2
jiobiala24
2022-01-07T06:08:49Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-TPU-cv-fine-tune-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-TPU-cv-fine-tune-2 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-TPU-cv-fine-tune](https://huggingface.co/jiobiala24/wav2vec2-base-TPU-cv-fine-tune) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.6051 - Wer: 0.5484 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.522 | 6.45 | 400 | 1.2550 | 0.5649 | | 0.2874 | 12.9 | 800 | 1.4235 | 0.6054 | | 0.152 | 19.35 | 1200 | 1.5743 | 0.5806 | | 0.0857 | 25.8 | 1600 | 1.6051 | 0.5484 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
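A minimal usage sketch with the automatic-speech-recognition pipeline; the audio file name is a placeholder, and recordings are assumed to be 16 kHz mono as for other wav2vec2-base checkpoints.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jiobiala24/wav2vec2-base-checkpoint-2")
print(asr("sample.wav"))  # placeholder path to a 16 kHz mono recording
```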
huggingartists/obladaet
huggingartists
2022-01-07T01:09:32Z
6
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/obladaet", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/obladaet tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/4411ffc50a3cd07d303d09a5db3b7cf5.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">OBLADAET</div> <a href="https://genius.com/artists/obladaet"> <div style="text-align: center; font-size: 14px;">@obladaet</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from OBLADAET. Dataset is available [here](https://huggingface.co/datasets/huggingartists/obladaet). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/obladaet") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1mtsuuwr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on OBLADAET's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1s9epb35) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1s9epb35/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/obladaet') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/obladaet") model = AutoModelWithLMHead.from_pretrained("huggingartists/obladaet") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
Waynehillsdev/Waynehills-STT-doogie-server
Waynehillsdev
2022-01-06T17:18:49Z
87
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: name: Waynehills-STT-doogie-server --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Waynehills-STT-doogie-server This model is a fine-tuned version of [Doogie/Waynehills-STT-doogie-server](https://huggingface.co/Doogie/Waynehills-STT-doogie-server) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
emillykkejensen/daT5-base
emillykkejensen
2022-01-06T11:14:19Z
114
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "da", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - da license: apache-2.0 --- ## daT5-base A smaller version of [Google's mt5-base](https://huggingface.co/google/mt5-base) model, where the original model is reduced to only include Danish embeddings. ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("emillykkejensen/daT5-base") model = AutoModel.from_pretrained("emillykkejensen/daT5-base") ``` ## Further reading [Gist](https://gist.github.com/emillykkejensen/8bf1b323495efc7252dee966e6bc1b5c) showing (in Danish) how the embeddings are extracted [Article](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) explaining how to do it by [David Dale](https://huggingface.co/cointegrated) ## Also check out [daT5-large](https://huggingface.co/emillykkejensen/daT5-large)
jiobiala24/wav2vec2-base-checkpoint-1
jiobiala24
2022-01-06T09:39:38Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-TPU-cv-fine-tune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-TPU-cv-fine-tune This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.6987 - Wer: 0.6019 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1017 | 8.88 | 400 | 1.4635 | 0.7084 | | 0.436 | 17.77 | 800 | 1.4765 | 0.6231 | | 0.1339 | 26.66 | 1200 | 1.6987 | 0.6019 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
sam890914/autonlp-roberta-large2-479012819
sam890914
2022-01-06T08:46:51Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "unk", "dataset:sam890914/autonlp-data-roberta-large2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - sam890914/autonlp-data-roberta-large2 co2_eq_emissions: 71.60954851696604 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 479012819 - CO2 Emissions (in grams): 71.60954851696604 ## Validation Metrics - Loss: 0.22774338722229004 - Accuracy: 0.9395126938149599 - Precision: 0.9677075940383251 - Recall: 0.9117352056168505 - AUC: 0.9862377263827619 - F1: 0.9388879325185058 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/sam890914/autonlp-roberta-large2-479012819 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("sam890914/autonlp-roberta-large2-479012819", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("sam890914/autonlp-roberta-large2-479012819", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Tahsin/distilbert-base-uncased-finetuned-emotion
Tahsin
2022-01-06T07:43:40Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9285 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1561 - Accuracy: 0.9285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 250 | 0.1635 | 0.9295 | | 0.111 | 2.0 | 500 | 0.1515 | 0.936 | | 0.111 | 3.0 | 750 | 0.1561 | 0.9285 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
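A minimal usage sketch that maps logits to a predicted label; the `id2label` mapping may just be the generic `LABEL_0` ... `LABEL_5` of the emotion dataset unless it was customized.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Tahsin/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("I can't believe how happy this makes me!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))
```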
saurkulsh/T0pp
saurkulsh
2022-01-06T05:48:32Z
17
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:bigscience/P3", "arxiv:2110.08207", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- datasets: - bigscience/P3 language: en license: apache-2.0 widget: - text: "A is the son's of B's uncle. What is the family relationship between A and B?" - text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old." - text: "Task: copy but say the opposite.\n PSG won its match against Barca." - text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy." example_title: "Sentiment analysis" - text: "Question A: How is air traffic controlled? \nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates." - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady. \nIn the previous sentence, decide who 'her' is referring to." example_title: "Coreference resolution" - text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n Select the category for the above sentence from: mobile, website, billing, account access." - text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences 1 and 2 have the same meaning?" example_title: "Paraphrase identification" - text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best." - text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1, LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out. Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?" - text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n Sentence A: you can leave the books on the table over there.\n Sentence B: the tables in this book are very hard to read." - text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n Which book is the leftmost book?" example_title: "Logic puzzles" - text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n Who are the men running for mayor?" 
example_title: "Reading comprehension" - text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n Which of the following best characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places where people live." --- **How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"! # Model Description T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks. # Intended uses You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*. A few other examples that you can try: - *A is the son's of B's uncle. What is the family relationship between A and B?* - *Question A: How is air traffic controlled?<br> Question B: How do you become an air traffic controller?<br> Pick one: these questions are duplicates or not duplicates.* - *Is the word 'table' used in the same meaning in the two following sentences?<br><br> Sentence A: you can leave the books on the table over there.<br> Sentence B: the tables in this book are very hard to read.* - *Max: Know any good websites to buy clothes from?<br> Payton: Sure :) LINK 1, LINK 2, LINK 3<br> Max: That's a lot of them!<br> Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br> Max: I'll check them out. Thanks.<br><br> Who or what are Payton and Max referring to when they say 'them'?* - *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br> The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br> Which book is the leftmost book?* - *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.* # How to use We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks. 
|Model|Number of parameters| |-|-| |[T0](https://huggingface.co/bigscience/T0)|11 billion| |[T0p](https://huggingface.co/bigscience/T0p)|11 billion| |[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion| |[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion| |[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion| |[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion| Here is how to use the model in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp") model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp") inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`. # Training procedure T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective. At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section. Training details: - Fine-tuning steps: 12'200 - Input sequence length: 1024 - Target sequence length: 256 - Batch size: 1'024 sequences - Optimizer: Adafactor - Learning rate: 1e-3 - Dropout: 0.1 - Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples) - Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length # Training data We trained different variants T0 with different mixtures of datasets. 
|Model|Training datasets| |--|--| |T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP| |T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions| |T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC| |T0_single_prompt|Same as T0 but only one prompt per training dataset| |T0_original_task_only|Same as T0 but only original tasks templates| |T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model| For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page. *: We recast Hotpot QA as closed-book QA due to long input sequence length. # Evaluation data We evaluate our models on a suite of held-out tasks: |Task category|Datasets| |-|-| |Natural language inference|ANLI, CB, RTE| |Coreference resolution|WSC, Winogrande| |Word sense disambiguation|WiC| |Sentence completion|COPA, HellaSwag, Story Cloze| We also evaluate T0, T0p and T0pp on the a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench): - Code description task - Conceptual combinations - Hindu knowledge json - Known unknowns - Language identification - Logic grid puzzle task - Logical deduction - Common misconceptions - Movie dialog same or different - Novel concepts - Strategyqa - Formal fallacies syllogisms negation - VitaminC - Winowhy multiple choice # Limitations - The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html). - We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model. - Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non English text. # Bias and fairness Even if we took deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning, the models trained are not bias-free. Based on a few experimentations, T0++ can generate answers that could be categorized as conspiracist or biased: - Input: `Is the earth flat?` - Prediction: `yes` - Input: `Do vaccines cause autism?` - Prediction: `yes` - Input: `Complete this sentence: This man works as a` - Prediction: `Architect` - Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny` Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases. 
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases are present in masked language models, using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.

<table>
  <tr>
    <td>Dataset</td>
    <td>Model</td>
    <td>Average (Acc.)</td>
    <td>Median (Acc.)</td>
  </tr>
  <tr>
    <td rowspan="6">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td>
  </tr>
  <tr>
    <td>T0p</td><td>57.6</td><td>83.8</td>
  </tr>
  <tr>
    <td>T0pp</td><td>62.7</td><td>64.4</td>
  </tr>
  <tr>
    <td>T0_single_prompt</td><td>57.6</td><td>69.5</td>
  </tr>
  <tr>
    <td>T0_original_task_only</td><td>47.1</td><td>37.8</td>
  </tr>
  <tr>
    <td>T0_3B</td><td>56.9</td><td>82.6</td>
  </tr>
  <tr>
    <td rowspan="6">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td>
  </tr>
  <tr>
    <td>T0p</td><td>80.1</td><td>80.6</td>
  </tr>
  <tr>
    <td>T0pp</td><td>89.2</td><td>90.0</td>
  </tr>
  <tr>
    <td>T0_single_prompt</td><td>81.6</td><td>84.6</td>
  </tr>
  <tr>
    <td>T0_original_task_only</td><td>83.7</td><td>83.8</td>
  </tr>
  <tr>
    <td>T0_3B</td><td>69.7</td><td>69.4</td>
  </tr>
</table>

To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. They come in two types (type1 and type2), each partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subsets measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
<table>
  <tr>
    <td rowspan="2">Model</td>
    <td rowspan="2">Subset</td>
    <td colspan="3">Average (Acc.)</td>
    <td colspan="3">Median (Acc.)</td>
  </tr>
  <tr>
    <td>Pro</td>
    <td>Anti</td>
    <td>Pro - Anti</td>
    <td>Pro</td>
    <td>Anti</td>
    <td>Pro - Anti</td>
  </tr>
  <tr>
    <td rowspan="2">T0</td><td>Type 1</td>
    <td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td>
  </tr>
  <tr>
    <td>Type 2</td>
    <td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td>
  </tr>
  <tr>
    <td rowspan="2">T0p</td><td>Type 1</td>
    <td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td>
  </tr>
  <tr>
    <td>Type 2</td>
    <td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td>
  </tr>
  <tr>
    <td rowspan="2">T0pp</td><td>Type 1</td>
    <td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td>
  </tr>
  <tr>
    <td>Type 2</td>
    <td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td>
  </tr>
  <tr>
    <td rowspan="2">T0_single_prompt</td><td>Type 1</td>
    <td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td>
  </tr>
  <tr>
    <td>Type 2</td>
    <td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td>
  </tr>
  <tr>
    <td rowspan="2">T0_original_task_only</td><td>Type 1</td>
    <td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td>
  </tr>
  <tr>
    <td>Type 2</td>
    <td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td>
  </tr>
  <tr>
    <td rowspan="2">T0_3B</td><td>Type 1</td>
    <td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td>
  </tr>
  <tr>
    <td>Type 2</td>
    <td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75</td><td>10.9</td>
  </tr>
</table>

# BibTeX entry and citation info

```bibtex
@misc{sanh2021multitask,
      title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
      author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
      year={2021},
      eprint={2110.08207},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
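As noted in the Limitations section above, the 11-billion-parameter checkpoints require non-trivial resources to load and run. The sketch below is one possible workaround rather than part of the original card: it loads the smaller `T0_3B` checkpoint and, when several GPUs are visible, uses the `parallelize()` API referenced in the Limitations section to spread layers across devices. Exact device placement and memory needs depend on your hardware.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# The 3B-parameter variant is far easier to fit in memory than the 11B checkpoints.
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

if torch.cuda.device_count() > 1:
    # Naive model parallelism: splits the encoder/decoder blocks across all visible GPUs.
    model.parallelize()
elif torch.cuda.is_available():
    model = model.to("cuda")

inputs = tokenizer.encode(
    "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```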
ValkyriaLenneth/longformer_zh
ValkyriaLenneth
2022-01-06T03:50:20Z
1,826
22
transformers
[ "transformers", "pytorch", "longformer", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# 中文预训练Longformer模型 | Longformer_ZH with PyTorch

相比于Transformer的O(n^2)复杂度，Longformer提供了一种以线性复杂度处理最长4K字符级别文档序列的方法。Longformer Attention包括了标准的自注意力与全局注意力机制，方便模型更好地学习超长序列的信息。

Compared with the O(n^2) complexity of the Transformer model, Longformer provides an efficient method for processing document-level sequences of up to 4K characters in linear complexity. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task-motivated global attention.

我们注意到关于中文Longformer或超长序列任务的资源较少，因此在此开源了我们预训练的中文Longformer模型参数，并提供了相应的加载方法，以及预训练脚本。

There are not many resources for Chinese Longformer or long-sequence-level Chinese tasks, so we open-source our pretrained Longformer model parameters, together with the corresponding loading method and pretraining scripts, to help researchers.

## 加载模型 | Load the model

您可以使用谷歌云盘或百度网盘下载我们的模型

You can get Longformer_zh from Google Drive or Baidu Yun.

- Google Drive: https://drive.google.com/file/d/1IDJ4aVTfSFUQLIqCYBtoRpnfbgHPoxB4/view?usp=sharing
- 百度云: 链接:https://pan.baidu.com/s/1HaVDENx52I7ryPFpnQmq1w 提取码:y601

我们同样提供了Huggingface的自动下载

We also provide automatic downloading via HuggingFace Transformers.

```python
from Longformer_zh import LongformerZhForMaksedLM
LongformerZhForMaksedLM.from_pretrained('ValkyriaLenneth/longformer_zh')
```

## 注意事项 | Notice

- 直接使用 `transformers.LongformerModel.from_pretrained` 加载模型
- Please use `transformers.LongformerModel.from_pretrained` to load the model directly (see the sketch at the end of this card).
- 以下内容已经被弃用
- The following notes are deprecated; please ignore them.
  - 区别于英文原版Longformer, 中文Longformer的基础是Roberta_zh模型，其本质上属于 `Transformers.BertModel` 而非 `RobertaModel`, 因此无法使用原版代码直接加载。
  - Unlike the original English Longformer, Longformer_zh is based on the Roberta_zh model, which is essentially a `Transformers.BertModel` rather than a `RobertaModel`, so it cannot be loaded directly with the original code.
  - 我们提供了修改后的中文Longformer文件，您可以使用其加载参数。
  - We provide a modified Longformer_zh class, which you can use directly to load the parameters.
  - 如果您想将此参数用于更多任务，请参考`Longformer_zh.py`替换Attention Layer.
  - If you want to use our model on more downstream tasks, please refer to `Longformer_zh.py` and replace the attention layer with the Longformer attention layer.

## 关于预训练 | About Pretraining

- 我们的预训练语料来自 https://github.com/brightmart/nlp_chinese_corpus, 根据Longformer原文的设置，采用了多种语料混合的预训练数据。
- The pretraining corpus is from https://github.com/brightmart/nlp_chinese_corpus. Following the setup in the Longformer paper, we use a mixture of several Chinese corpora for pretraining.
- 我们的模型是基于Roberta_zh_mid (https://github.com/brightmart/roberta_zh)，训练脚本参考了https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb
- Our model is based on Roberta_zh_mid (https://github.com/brightmart/roberta_zh). The pretraining script is modified from https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb.
- 同时我们在原版基础上，引入了 `Whole-Word-Masking` 机制，以便更好地适应中文特性。
- We also introduce the `Whole-Word-Masking` mechanism into pretraining to better fit the characteristics of Chinese.
- `Whole-Word-Masking`代码改写自TensorFlow版本的Roberta_zh，据我们所知是第一个开源的Pytorch版本WWM.
- Our WWM script is adapted from the TensorFlow version of Roberta_zh; as far as we know, it is the first open-source whole-word-masking implementation in PyTorch.
- 模型 `max_seq_length = 4096`, 在 4 * Titan RTX 上预训练3K steps 大概用时4天。
- The maximum sequence length is 4096, and pretraining for 3K steps took about 4 days on 4 * Titan RTX.
- 我们使用了 `Nvidia.Apex` 引入了混合精度训练，以加速预训练。
- We use `Nvidia.Apex` mixed-precision training to accelerate pretraining.
- 关于数据预处理, 我们采用 `Jieba` 分词与`JIONLP`进行数据清洗。
- For data preprocessing, we use the `Jieba` Chinese tokenizer and `JIONLP` for data cleaning.
- 更多细节可以参考我们的预训练脚本
- For more details, please check our pretraining scripts.

## 效果测试 | Evaluation

### CCF Sentiment Analysis

- 由于中文超长文本级别任务稀缺，我们采用了CCF-Sentiment-Analysis任务进行测试
- Since open-sourced long-sequence-level Chinese NLP tasks are scarce, we use CCF-Sentiment-Analysis for evaluation.

|Model|Dev F|
|----|----|
|Bert|80.3|
|Bert-wwm-ext|80.5|
|Roberta-mid|80.5|
|Roberta-large|81.25|
|Longformer_SC|79.37|
|Longformer_ZH|80.51|

### Pretraining BPC

- 我们提供了预训练BPC(bits-per-character), BPC越小，代表语言模型性能更优。可视作PPL.
- We also provide the pretraining BPC (bits-per-character) score; the lower the BPC, the better the language model performs. It can be treated like PPL.

|Model|BPC|
|---|---|
|Longformer before training|14.78|
|Longformer after training|3.10|

### CMRC (Chinese Machine Reading Comprehension)

|Model|F1|EM|
|---|---|---|
|Bert|85.87|64.90|
|Roberta|86.45|66.57|
|Longformer_zh|86.15|66.84|

### Chinese Coreference Resolution

|Model|Conll-F1|Precision|Recall|
|---|---|---|---|
|Bert|66.82|70.30|63.67|
|Roberta|67.77|69.28|66.32|
|Longformer_zh|67.81|70.13|65.64|

## 致谢 | Acknowledgements

感谢东京工业大学 奥村·船越研究室 提供算力。

Thanks to the Okumura-Funakoshi Lab at the Tokyo Institute of Technology for providing the computing resources and the opportunity to finish this project.
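Following the Notice above, a minimal sketch of loading the checkpoint with the standard `transformers` classes (the tokenizer class is an assumption here: since Longformer_zh is built on Roberta_zh, which uses a BERT-style Chinese vocabulary, `BertTokenizer` should apply; the example text is purely illustrative):

```python
import torch
from transformers import BertTokenizer, LongformerModel

# Load the released checkpoint directly, as recommended in the Notice section.
tokenizer = BertTokenizer.from_pretrained("ValkyriaLenneth/longformer_zh")
model = LongformerModel.from_pretrained("ValkyriaLenneth/longformer_zh")

text = "相比于Transformer，Longformer可以以线性复杂度处理超长中文文本。"
inputs = tokenizer(text, return_tensors="pt")

# Longformer convention: give the [CLS] position global attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

with torch.no_grad():
    outputs = model(**inputs, global_attention_mask=global_attention_mask)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```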
unicamp-dl/mt5-base-mmarco-v2
unicamp-dl
2022-01-05T23:21:26Z
305
3
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "msmarco", "t5", "tensorflow", "pt", "pt-br", "dataset:msmarco", "arxiv:2108.13897", "license:mit", "autotrain_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---

# mt5-base Reranker finetuned on mMARCO

## Introduction

mt5-base-mmarco-v2 is a mT5-based model fine-tuned on a multilingual translated version of the MS MARCO passage dataset. This dataset, named mMARCO (Multi MS MARCO), is formed by 9 complete MS MARCO passage collections, one for each of 9 different languages. In the v2 version, the datasets were translated using Google Translate.

Further information about the dataset or the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.

## Usage

```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration

model_name = 'unicamp-dl/mt5-base-mmarco-v2'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
```

# Citation

If you use mt5-base-mmarco-v2, please cite:

    @misc{bonifacio2021mmarco,
        title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
        author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
        year={2021},
        eprint={2108.13897},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }
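The snippet above only loads the checkpoint. To actually rerank passages with it, a monoT5-style scoring loop can be used; the sketch below is an illustration under that assumption (the exact input template and the `yes`/`no` target words are not stated in this card, so check the mMARCO repository for the canonical format used during fine-tuning):

```python
import torch
from transformers import T5Tokenizer, MT5ForConditionalGeneration

model_name = 'unicamp-dl/mt5-base-mmarco-v2'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
model.eval()

query = "qual é a capital do Brasil?"
passage = "Brasília é a capital federal do Brasil desde a sua inauguração em 1960."

# monoT5-style scoring: feed "Query: ... Document: ... Relevant:" and compare the
# probabilities the decoder assigns to the (assumed) "yes" and "no" target words.
inputs = tokenizer(f"Query: {query} Document: {passage} Relevant:",
                   return_tensors="pt", truncation=True, max_length=512)
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]

# Use the first subword of each target word as its representative token id.
yes_id = tokenizer.encode("yes", add_special_tokens=False)[0]
no_id = tokenizer.encode("no", add_special_tokens=False)[0]
score = torch.softmax(logits[[yes_id, no_id]], dim=0)[0].item()
print(f"relevance score: {score:.3f}")
```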
unicamp-dl/mMiniLM-L6-v2-pt-v2
unicamp-dl
2022-01-05T22:59:11Z
7
3
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "msmarco", "miniLM", "tensorflow", "pt", "pt-br", "dataset:msmarco", "arxiv:2108.13897", "license:mit", "autotrain_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---

# mMiniLM-L6-v2 Reranker finetuned on mMARCO

## Introduction

mMiniLM-L6-v2-pt-msmarco-v2 is a multilingual miniLM-based model fine-tuned on a Portuguese translated version of the MS MARCO passage dataset. In the v2 version, the Portuguese dataset was translated using Google Translate.

Further information about the dataset or the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.

## Usage

```python
from transformers import AutoTokenizer, AutoModel

model_name = 'unicamp-dl/mMiniLM-L6-v2-pt-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

# Citation

If you use mMiniLM-L6-v2-pt-msmarco-v2, please cite:

    @misc{bonifacio2021mmarco,
        title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
        author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
        year={2021},
        eprint={2108.13897},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }
unicamp-dl/mMiniLM-L6-v2-mmarco-v2
unicamp-dl
2022-01-05T22:45:15Z
223
6
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "msmarco", "miniLM", "tensorflow", "pt", "pt-br", "dataset:msmarco", "arxiv:2108.13897", "license:mit", "autotrain_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---

# mMiniLM-L6-v2 Reranker finetuned on mMARCO

## Introduction

mMiniLM-L6-v2-mmarco-v2 is a multilingual miniLM-based model fine-tuned on a multilingual version of the MS MARCO passage dataset. This dataset, named mMARCO, is formed by passages in 9 different languages, translated from the English MS MARCO passage collection. In the v2 version, the datasets were translated using Google Translate.

Further information about the dataset or the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.

## Usage

```python
from transformers import AutoTokenizer, AutoModel

model_name = 'unicamp-dl/mMiniLM-L6-v2-mmarco-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

# Citation

If you use mMiniLM-L6-v2-mmarco-v2, please cite:

    @misc{bonifacio2021mmarco,
        title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
        author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
        year={2021},
        eprint={2108.13897},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }
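The snippet above returns a bare encoder, which is enough for extracting representations but not for scoring passages. Since the checkpoint is tagged for text classification, the sketch below assumes it can also be loaded as a sequence-classification cross-encoder that scores (query, passage) pairs; if the classification head is not shipped with the checkpoint, `transformers` will warn that it is randomly initialized, in which case stick to the `AutoModel` loading shown above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'unicamp-dl/mMiniLM-L6-v2-mmarco-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "qual é a capital do Brasil?"
passages = [
    "Brasília é a capital federal do Brasil desde 1960.",
    "O futebol é o esporte mais popular do Brasil.",
]

# Cross-encoder reranking: encode each (query, passage) pair jointly and rank by logit.
inputs = tokenizer([query] * len(passages), passages,
                   return_tensors="pt", padding=True, truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# Depending on how the head was trained, this is either a single relevance logit
# or a two-class (not relevant / relevant) output; both cases are handled here.
scores = logits[:, -1] if logits.shape[1] > 1 else logits.squeeze(-1)
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```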
unicamp-dl/ptt5-base-en-pt-msmarco-10k-v1
unicamp-dl
2022-01-05T21:31:05Z
107
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "msmarco", "tensorflow", "pt", "pt-br", "dataset:msmarco", "arxiv:2108.13897", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---

# PTT5-base Reranker finetuned on both English and Portuguese MS MARCO

## Introduction

ptt5-base-en-pt-msmarco-10k-v1 is a T5-based model pretrained on the BrWac corpus and fine-tuned on both the English and the Portuguese translated versions of the MS MARCO passage dataset. In version v1, the Portuguese dataset was translated using the [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model. This model was fine-tuned for 10k steps.

Further information about the dataset or the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.

## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'unicamp-dl/ptt5-base-en-pt-msmarco-10k-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```

# Citation

If you use ptt5-base-en-pt-msmarco-10k-v1, please cite:

    @misc{bonifacio2021mmarco,
        title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
        author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
        year={2021},
        eprint={2108.13897},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }
unicamp-dl/mMiniLM-L6-v2-en-msmarco
unicamp-dl
2022-01-05T21:30:07Z
4
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "msmarco", "miniLM", "tensorflow", "en", "pt", "dataset:msmarco", "arxiv:2108.13897", "license:mit", "autotrain_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- en
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---

# mMiniLM-L6 Reranker finetuned on English MS MARCO

## Introduction

mMiniLM-L6-v2-en-msmarco is a multilingual miniLM-based model fine-tuned on the English MS MARCO passage dataset.

Further information about the dataset or the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.

## Usage

```python
from transformers import AutoTokenizer, AutoModel

model_name = 'unicamp-dl/mMiniLM-L6-v2-en-msmarco'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

# Citation

If you use mMiniLM-L6-v2-en-msmarco, please cite:

    @misc{bonifacio2021mmarco,
        title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
        author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
        year={2021},
        eprint={2108.13897},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }
unicamp-dl/ptt5-base-pt-msmarco-100k-v1
unicamp-dl
2022-01-05T21:29:11Z
105
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "msmarco", "tensorflow", "pt", "pt-br", "dataset:msmarco", "arxiv:2108.13897", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---

# PTT5-base Reranker finetuned on Portuguese MS MARCO

## Introduction

ptt5-base-pt-msmarco-100k-v1 is a T5-based model pretrained on the BrWac corpus and fine-tuned on a Portuguese translated version of the MS MARCO passage dataset. In version v1, the Portuguese dataset was translated using the [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model. This model was fine-tuned for 100k steps.

Further information about the dataset or the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.

## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'unicamp-dl/ptt5-base-pt-msmarco-100k-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```

# Citation

If you use ptt5-base-pt-msmarco-100k-v1, please cite:

    @misc{bonifacio2021mmarco,
        title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
        author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
        year={2021},
        eprint={2108.13897},
        archivePrefix={arXiv},
        primaryClass={cs.CL}
    }
Tahsin/BERT-finetuned-conll2003-POS
Tahsin
2022-01-05T21:04:56Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-pos
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9276736387541917
    - name: Recall
      type: recall
      value: 0.9329402916272412
    - name: F1
      type: f1
      value: 0.9302995112982049
    - name: Accuracy
      type: accuracy
      value: 0.933154765408842
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-pos

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Precision: 0.9277
- Recall: 0.9329
- F1: 0.9303
- Accuracy: 0.9332

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2791        | 1.0   | 1756 | 0.3125          | 0.9212    | 0.9263 | 0.9237 | 0.9272   |
| 0.1853        | 2.0   | 3512 | 0.3038          | 0.9241    | 0.9309 | 0.9275 | 0.9307   |
| 0.1501        | 3.0   | 5268 | 0.3009          | 0.9277    | 0.9329 | 0.9303 | 0.9332   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
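Not part of the auto-generated card above, but a minimal inference sketch using the token-classification pipeline (the tag set comes from the conll2003 configuration the model was fine-tuned on, and the example sentence is illustrative):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Tahsin/BERT-finetuned-conll2003-POS",
    aggregation_strategy="simple",  # merge word pieces back into whole words
)

for item in tagger("Hugging Face is based in New York City."):
    print(item["word"], item["entity_group"], round(item["score"], 3))
```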