modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
kornosk/bert-election2020-twitter-stance-biden
kornosk
2022-05-02T22:59:23Z
135
2
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "twitter", "stance-detection", "election2020", "politics", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT) Pre-trained weights for **f-BERT** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Biden!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Biden is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
kornosk/bert-election2020-twitter-stance-trump
kornosk
2022-05-02T22:59:13Z
64
3
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "twitter", "stance-detection", "election2020", "politics", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (f-BERT) Pre-trained weights for **f-BERT** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Trump!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Trump is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
kornosk/bert-election2020-twitter-stance-biden-KE-MLM
kornosk
2022-05-02T22:58:37Z
26
3
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "twitter", "stance-detection", "election2020", "politics", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: "en" tags: - twitter - stance-detection - election2020 - politics license: "gpl-3.0" --- # Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM) Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Training Data This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden. # Training Objective This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden. # Usage This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden. Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch import numpy as np # choose GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # select mode path here pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden-KE-MLM" # load model tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path) model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path) id2label = { 0: "AGAINST", 1: "FAVOR", 2: "NONE" } ##### Prediction Neutral ##### sentence = "Hello World." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Favor ##### sentence = "Go Go Biden!!!" inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) ##### Prediction Against ##### sentence = "Biden is the worst." inputs = tokenizer(sentence.lower(), return_tensors="pt") outputs = model(**inputs) predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() print("Sentence:", sentence) print("Prediction:", id2label[np.argmax(predicted_probability)]) print("Against:", predicted_probability[0]) print("Favor:", predicted_probability[1]) print("Neutral:", predicted_probability[2]) # please consider citing our paper if you feel this is useful :) ``` # Reference - [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021. # Citation ```bibtex @inproceedings{kawintiranon2021knowledge, title={Knowledge Enhanced Masked Language Model for Stance Detection}, author={Kawintiranon, Kornraphop and Singh, Lisa}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, year={2021}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/2021.naacl-main.376} } ```
huggingtweets/usrsistakenhelp
huggingtweets
2022-05-02T22:26:31Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-02T22:25:02Z
--- language: en thumbnail: http://www.huggingtweets.com/usrsistakenhelp/1651530363067/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1520487753896665088/lO1PwH2q_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Rosa - I miss tgamm</div> <div style="text-align: center; font-size: 14px;">@usrsistakenhelp</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Rosa - I miss tgamm. | Data | Rosa - I miss tgamm | | --- | --- | | Tweets downloaded | 3244 | | Retweets | 507 | | Short tweets | 1160 | | Tweets kept | 1577 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jxrwgo01/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @usrsistakenhelp's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1z4w7mpe) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1z4w7mpe/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/usrsistakenhelp') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
caush/Clickbait4
caush
2022-05-02T20:39:40Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T20:24:42Z
--- license: mit tags: - generated_from_trainer model-index: - name: Clickbait1 results: [] --- This model is a fine-tuned version of microsoft/Multilingual-MiniLM-L12-H384 on the Webis-Clickbait-17 dataset. It achieves the following results on the evaluation set: Loss: 0.0261. The following table presents the performances achieved by the challenge participants. The primary evaluation measure is Mean Squared Error (MSE) with respect to the mean judgments of the annotators. Our result is 0.0261 on the MSE metric; we do not compute the other metrics. To avoid using data that was not available at the time of the challenge, we do not use k-fold cross-validation techniques. | team | MSE | F1 | Precision | Recall | Accuracy | Runtime | |----- |----- |--- |-----------|-------|---------|-------- | |goldfish | 0.024 | 0.741 | 0.739 | 0.742 | 0.876 | 16:20:21| |caush | 0.026 | | | | | 00:11:00| |monkfish | 0.026 | 0.694 | 0.785 | 0.622 | 0.870 | 03:41:35| |dartfish | 0.027 | 0.706 | 0.733 | 0.681 | 0.865 | 00:47:07| |torpedo19 | 0.03 | 0.677 | 0.755 | 0.614 | 0.861 | 00:52:44| |albacore | 0.031 | 0.67 | 0.731 | 0.62 | 0.855 | 00:01:10| |blobfish | 0.032 | 0.646 | 0.738 | 0.574 | 0.85 | 00:03:22| |zingel | 0.033 | 0.683 | 0.719 | 0.65 | 0.856 | 00:03:27| |anchovy | 0.034 | 0.68 | 0.717 | 0.645 | 0.855 | 00:07:20| |ray | 0.034 | 0.684 | 0.691 | 0.677 | 0.851 | 00:29:28| |icarfish | 0.035 | 0.621 | 0.768 | 0.522 | 0.849 | 01:02:57| |emperor | 0.036 | 0.641 | 0.714 | 0.581 | 0.845 | 00:04:03| |carpetshark | 0.036 | 0.638 | 0.728 | 0.568 | 0.847 | 00:08:05| |electriceel | 0.038 | 0.588 | 0.727 | 0.493 | 0.835 | 01:04:54| |arowana | 0.039 | 0.656 | 0.659 | 0.654 | 0.837 | 00:35:24| |pineapplefish | 0.041 | 0.631 | 0.642 | 0.621 | 0.827 | 00:54:28| |whitebait | 0.043 | 0.565 | 0.7 | 0.474 | 0.826 | 00:04:31|
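The card above reports only the evaluation loss and leaderboard numbers, without a usage snippet. Below is a minimal, hedged inference sketch. It assumes the checkpoint exposes a standard `transformers` sequence-classification head with a single regression output (the clickbait strength), and that the repository ships its tokenizer files; the example headline is purely illustrative.

```python
# Hedged sketch: scoring a headline with caush/Clickbait4.
# Assumption: the fine-tuned head has a single regression output (clickbait strength).
# If the repository does not include tokenizer files, load the tokenizer from the
# base model microsoft/Multilingual-MiniLM-L12-H384 instead.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "caush/Clickbait4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

headline = "You won't believe what happened next"  # illustrative input
inputs = tokenizer(headline, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted clickbait strength: {score:.3f}")
```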
caush/Clickbait1
caush
2022-05-02T20:36:10Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-26T18:25:39Z
--- license: mit tags: - generated_from_trainer model-index: - name: Clickbait1 results: [] --- # Clickbait1 This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the [Webis-Clickbait-17](https://zenodo.org/record/5530410) dataset. It achieves the following results on the evaluation set: - Loss: 0.0257 ## Model description MiniLM is a distilled model from the paper "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers". We fine-tune this model to estimate (as a regression task) the clickbait level of news titles. ## Intended uses & limitations The model is similar to the one described in the paper [Predicting Clickbait Strength in Online Social Media](https://aclanthology.org/2020.coling-main.425/) by Indurthi Vijayasaradhi, Syed Bakhtiyar, Gupta Manish, Varma Vasudeva. The model was trained on English titles. ## Training and evaluation data We trained the model with the official training data for the challenge (clickbait17-train-170630.zip, 894 MiB, 19538 posts), plus another set that only became available after the end of the challenge (clickbait17-train-170331.zip, 157 MiB, 2459 posts). ## Training procedure Code can be found on [GitHub](https://github.com/caush/Clickbait). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.05 | 50 | 0.0571 | | No log | 0.09 | 100 | 0.0448 | | No log | 0.14 | 150 | 0.0391 | | No log | 0.18 | 200 | 0.0326 | | No log | 0.23 | 250 | 0.0343 | | No log | 0.27 | 300 | 0.0343 | | No log | 0.32 | 350 | 0.0343 | | No log | 0.36 | 400 | 0.0346 | | No log | 0.41 | 450 | 0.0343 | | 0.0388 | 0.46 | 500 | 0.0297 | | 0.0388 | 0.5 | 550 | 0.0293 | | 0.0388 | 0.55 | 600 | 0.0301 | | 0.0388 | 0.59 | 650 | 0.0290 | | 0.0388 | 0.64 | 700 | 0.0326 | | 0.0388 | 0.68 | 750 | 0.0285 | | 0.0388 | 0.73 | 800 | 0.0285 | | 0.0388 | 0.77 | 850 | 0.0275 | | 0.0388 | 0.82 | 900 | 0.0314 | | 0.0388 | 0.87 | 950 | 0.0309 | | 0.0297 | 0.91 | 1000 | 0.0277 | | 0.0297 | 0.96 | 1050 | 0.0281 | | 0.0297 | 1.0 | 1100 | 0.0273 | | 0.0297 | 1.05 | 1150 | 0.0270 | | 0.0297 | 1.09 | 1200 | 0.0291 | | 0.0297 | 1.14 | 1250 | 0.0293 | | 0.0297 | 1.18 | 1300 | 0.0269 | | 0.0297 | 1.23 | 1350 | 0.0276 | | 0.0297 | 1.28 | 1400 | 0.0279 | | 0.0297 | 1.32 | 1450 | 0.0267 | | 0.0265 | 1.37 | 1500 | 0.0270 | | 0.0265 | 1.41 | 1550 | 0.0300 | | 0.0265 | 1.46 | 1600 | 0.0274 | | 0.0265 | 1.5 | 1650 | 0.0274 | | 0.0265 | 1.55 | 1700 | 0.0266 | | 0.0265 | 1.59 | 1750 | 0.0267 | | 0.0265 | 1.64 | 1800 | 0.0267 | | 0.0265 | 1.68 | 1850 | 0.0280 | | 0.0265 | 1.73 | 1900 | 0.0274 | | 0.0265 | 1.78 | 1950 | 0.0272 | | 0.025 | 1.82 | 2000 | 0.0261 | | 0.025 | 1.87 | 2050 | 0.0268 | | 0.025 | 1.91 | 2100 | 0.0268 | | 0.025 | 1.96 | 2150 | 0.0259 | | 0.025 | 2.0 | 2200 | 0.0257 | | 0.025 | 2.05 | 2250 | 0.0260 | | 0.025 | 2.09 | 2300 | 0.0263 | | 0.025 | 2.14 | 2350 | 0.0262 | | 0.025 | 2.19 | 2400 | 0.0269 | | 0.025 | 2.23 | 2450 | 0.0262 | | 0.0223 | 2.28 | 2500 | 0.0262 | | 0.0223 | 2.32 | 2550 | 0.0267 | | 0.0223 | 2.37 | 2600 | 0.0260 | | 0.0223 | 2.41 | 2650 | 0.0260 | | 0.0223 | 2.46 | 2700 | 0.0259 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.1.0 - Tokenizers 0.12.1
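To make the hyperparameter list above concrete, here is a minimal sketch of a comparable `Trainer` setup. It is not the authors' actual script (that lives in the linked GitHub repository): the toy two-example dataset, the column names, and the `output_dir` are illustrative placeholders, while the learning rate, batch sizes, seed, scheduler, and epoch count are the values listed in the card.

```python
# Hedged sketch of a regression fine-tune like the one summarized above.
# The dataset below is a toy stand-in; real training uses the Webis-Clickbait-17
# posts with their mean annotator judgments (see https://github.com/caush/Clickbait).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "microsoft/Multilingual-MiniLM-L12-H384"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=1 gives a single-output head, so the Trainer uses an MSE (regression) loss.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

# Toy data for illustration only: text plus a float clickbait judgment.
raw = Dataset.from_dict({
    "text": ["You won't believe this trick", "Senate passes the annual budget bill"],
    "labels": [0.9, 0.1],
})
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)
train_ds = raw.map(tokenize, batched=True)
eval_ds = train_ds  # placeholder; use a held-out split in practice

args = TrainingArguments(
    output_dir="clickbait1",          # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    evaluation_strategy="steps",
    eval_steps=50,                    # mirrors the every-50-steps validation loss above
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```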
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T18:29:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T18:27:39Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.8119 - Precision: 0.2752 - Recall: 0.9522 - F1: 0.4270 - Accuracy: 0.2849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 166 | 0.0726 | 0.9827 | 1.0 | 0.9913 | 0.9828 | | No log | 2.0 | 332 | 0.0569 | 0.9827 | 1.0 | 0.9913 | 0.9828 | | No log | 3.0 | 498 | 0.0434 | 0.9884 | 1.0 | 0.9942 | 0.9885 | | 0.1021 | 4.0 | 664 | 0.0505 | 0.9884 | 1.0 | 0.9942 | 0.9885 | | 0.1021 | 5.0 | 830 | 0.0472 | 0.9884 | 1.0 | 0.9942 | 0.9885 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
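This auto-generated card contains no usage example, so a minimal sketch with the `transformers` pipeline API follows. The example sentence is illustrative, and the meaning of the returned labels is not documented in the card, so they should be interpreted against your own data rather than read as a documented label scheme.

```python
# Hedged sketch: running the fine-tuned DistilBERT classifier via the pipeline API.
# The label names returned come from the checkpoint's config and are not explained
# in the card; treat their interpretation as an assumption.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_editorials_TEST_NULL_second_train_set_null_False",
)
print(classifier("This editorial makes a persuasive argument."))
```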
LACAI/roberta-large-adapted-PFG-progression
LACAI
2022-05-02T18:28:47Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T18:09:17Z
--- license: mit --- Base model: [lacai/roberta-large-dialog-narrative](https://huggingface.co/lacai/roberta-large-dialog-narrative) Fine-tuned as a progression model (to predict the acceptability of a dialogue) on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019): given a complete dialogue from (or in the style of) Persuasion For Good, the task is to predict a numeric score, typically in the range (-3, 3), where a higher score means a more acceptable dialogue in the context of the donation solicitation task. This model inherits a special dialogue token `<d>` from its base model, which indicates the start of a dialogue utterance. **Example input**: `<d>How are you?</s><d>Good! how about yourself?</s><d>Great. Would you like to donate today to help the children?</s>` For more context and usage information see [https://github.rpi.edu/LACAI/dialogue-progression](https://github.rpi.edu/LACAI/dialogue-progression).
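To make the input format above concrete, here is a minimal, hedged scoring sketch. It assumes the checkpoint exposes a single-output sequence-classification (regression) head whose raw logit is the acceptability score; the exact preprocessing and any score normalization should be checked against the linked LACAI repository.

```python
# Hedged sketch: scoring a dialogue with the progression model.
# Assumption: a single regression output is the acceptability score; verify the
# preprocessing against https://github.rpi.edu/LACAI/dialogue-progression.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "LACAI/roberta-large-adapted-PFG-progression"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Each utterance is prefixed with the special <d> token, following the card's example input.
utterances = [
    "How are you?",
    "Good! how about yourself?",
    "Great. Would you like to donate today to help the children?",
]
dialogue = "</s>".join(f"<d>{u}" for u in utterances) + "</s>"

inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted acceptability score: {score:.2f}")
```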
ali2066/DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T18:23:52Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T18:22:28Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT_FINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7321 - Precision: 0.9795 - Recall: 0.7277 - F1: 0.835 - Accuracy: 0.7208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 130 | 0.3755 | 0.8521 | 0.9910 | 0.9163 | 0.8529 | | No log | 2.0 | 260 | 0.3352 | 0.8875 | 0.9638 | 0.9241 | 0.8713 | | No log | 3.0 | 390 | 0.3370 | 0.8918 | 0.9321 | 0.9115 | 0.8529 | | 0.4338 | 4.0 | 520 | 0.3415 | 0.8957 | 0.9321 | 0.9135 | 0.8566 | | 0.4338 | 5.0 | 650 | 0.3416 | 0.8918 | 0.9321 | 0.9115 | 0.8529 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
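For readers unfamiliar with the four metrics these auto-generated cards report, the small self-contained sketch below shows how precision, recall, F1, and accuracy can be computed with scikit-learn. The predictions and labels are made up for illustration, and the binary averaging mode is an assumption; the card's own numbers come from its (unspecified) evaluation set.

```python
# Hedged sketch: computing the metrics reported in these trainer-generated cards.
# y_true / y_pred are toy values; "binary" averaging is an assumption about the setup.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 1]

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"Precision: {precision:.4f}  Recall: {recall:.4f}  "
      f"F1: {f1:.4f}  Accuracy: {accuracy_score(y_true, y_pred):.4f}")
```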
espnet/tamil_slu
espnet
2022-05-02T18:09:16Z
1
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "dataset:tamil", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-02T18:00:45Z
--- tags: - espnet - audio - automatic-speech-recognition language: noinfo datasets: - tamil license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/tamil_slu` This model was trained by Sujay S Kumar using tamil recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 395bda6123ae268f991e5ef1dab887b6e677974a pip install -e . cd egs2/tamil/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/tamil_slu ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Oct 3 20:59:46 EDT 2021` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.3a3` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `b41391336042a4876e30d9fe5c66afb4e4be404c` - Commit date: `Wed Sep 22 10:02:03 2021 -0400` ## asr_train_asr_wav2vec2_xlsr_raw_word ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/test|80|372|70.4|22.6|7.0|3.2|32.8|56.3| |inference_asr_model_valid.acc.ave_5best/valid|80|372|70.4|22.6|7.0|3.2|32.8|56.3| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |inference_asr_model_valid.acc.ave_5best/test|80|3234|85.9|8.2|5.9|5.5|19.6|56.3| |inference_asr_model_valid.acc.ave_5best/valid|80|3234|85.9|8.2|5.9|5.5|19.6|56.3| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr_wav2vec2_xlsr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp_train_asr_wav2vec2_xlsr/asr_train_asr_wav2vec2_xlsr_raw_word ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 250 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: 5 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/train/speech_shape - exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/train/text_shape.word valid_shape_file: - exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/valid/speech_shape - exp_train_asr_wav2vec2_xlsr/asr_stats_raw_word/valid/text_shape.word batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - 
dump/raw/train/text - text - text valid_data_path_and_name_and_type: - - dump/raw/valid/wav.scp - speech - sound - - dump/raw/valid/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0001 scheduler: warmuplr scheduler_conf: warmup_steps: 5000 token_list: - <blank> - <unk> - காசு - வேணும் - Request_Acc_balance - Account - Money_deposit - Money_withdraw - Credit_card_payments - card - மீதி - Money_transfer - எவ்வளோ - Bill_payments - Credit - கட்ட - எவ்வளவு - காச - கட்டவேணும் - இந்த - Balance - வேண்டும் - போடோணும் - கணக்கு - செய்ய - Bill - போட - account - மாத்த - credit - pay - பண்ணோணும் - Deposit - மீளெடுக்க - வைப்பு - எடுக்கவேணும் - ல - இருக்கிற - எடுக்கணும் - இல - இருந்து - மற்ற - accountக்கு - balance - என்ன - bill - அ - ஒருக்கா - ஏலுமோ - deposit - பண்ண - payment - Account-la - காசெடுக்கோணும் - அனுப்பவேணும் - காசெடுக்க - இன்னொரு - கு - Cash - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: wav2vec2_xlsr download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 4 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.3a3 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
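Besides the shell recipe in the card above, ESPnet2 models published on the Hub can usually be run directly from Python. The sketch below is hedged: it assumes a recent ESPnet installation with `espnet_model_zoo` available so that `Speech2Text.from_pretrained` can fetch the model, and `utterance.wav` is a placeholder for a 16 kHz recording. Since the token list mixes intent labels (e.g. `Request_Acc_balance`) with Tamil words, the decoded hypothesis may include the SLU intent alongside the transcription.

```python
# Hedged sketch: Python-side inference with the ESPnet2 Speech2Text interface.
# Assumes espnet + espnet_model_zoo are installed; "utterance.wav" is a placeholder.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("espnet/tamil_slu")

speech, rate = sf.read("utterance.wav")   # 16 kHz mono audio expected
nbest = speech2text(speech)               # list of (text, tokens, token_ids, hypothesis)
text, *_ = nbest[0]
print("Decoded:", text)                   # may contain intent tokens such as Request_Acc_balance
```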
wpatena/PB-Chlamy
wpatena
2022-05-02T16:34:01Z
0
0
null
[ "region:us" ]
null
2022-04-12T22:35:19Z
These are files for the trained protein localization prediction model PB-Chlamy, created for the paper **"A Chloroplast Protein Atlas Reveals Novel Structures and Spatial Organization of Biosynthetic Pathways"** by Lianyong Wang, Weronika Patena, Kelly A. Van Baalen, Yihua Xie, Emily R. Singer, Sophia Gavrilenko, Michelle Warren-Williams, Linqu Han, Henry Harrigan, Vivian Chen, Vinh Ton, Saw Kyin, Henry H. Shwe, Matthew H. Cahn, Alexandra Wilson, Jianping Hu, Christoph Benning, Danny J. Schnell, Claire D. McWhite, Martin Jonikas (submitted for publication in May 2022).
espnet/thai_commonvoice_blstm
espnet
2022-05-02T15:53:53Z
4
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "th", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-02T15:16:52Z
--- tags: - espnet - audio - automatic-speech-recognition language: th datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/thai_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/thai_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Apr 18 11:05:12 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b` - Commit date: `Mon Apr 4 21:04:45 2022 -0400` ## asr_train_asr_rnn_raw_th_bpe150_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_th|10769|14356|49.0|43.1|7.9|5.1|56.0|53.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_th|10769|348793|95.2|3.0|1.8|1.8|6.6|53.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_th|10769|278454|95.0|2.8|2.2|1.1|6.1|41.2| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_rnn_raw_th_bpe150_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 15 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 30 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_th_bpe150_sp/train/speech_shape - exp/asr_stats_raw_th_bpe150_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_th_bpe150_sp/valid/speech_shape - exp/asr_stats_raw_th_bpe150_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_th_sp/wav.scp - speech - sound - - dump/raw/train_th_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_th/wav.scp - speech - sound 
- - dump/raw/dev_th/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ▁ - น - ร - ก - า - เ - อ - ง - ย - ม - ั - ส - ด - บ - ว - ิ - ล - ค - ต - ห - ่ - ท - ้ - พ - ช - แ - ี - จ - ะ - ที่ - ุ - ้า - ู - ์ - ป - ข - ไ - การ - โ - ไม่ - ่อ - ่า - ็ - ื - ํา - ือ - จะ - มา - ของ - ได้ - เป็น - ถ - ีย - มี - ่ง - ว่า - ้อ - ัน - ใน - ไป - คุณ - ▁ฉัน - ัง - เขา - ความ - ใ - ผ - หน - ให้ - ทํา - ศ - ซ - ึ - นี้ - ฉัน - มัน - ี่ - ญ - และ - ประ - ิน - หล - ษ - ภ - ธ - ณ - ฟ - อย่าง - เธอ - '?' - '"' - ฐ - '!' - ฝ - ฉ - ฮ - ๊ - '''' - '-' - ฒ - ๆ - ๋ - ฎ - ฤ - ฏ - ฬ - ฑ - . - ” - ':' - “ - ',' - ’ - ; - ฌ - E - R - O - T - N - A - I - S - F - C - '~' - B - K - X - L - H - M - Y - — - J - W - ฃ - _ - ฯ - ํ - U - ๅ - ‘ - G - '|' - P - ฆ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/th_token_list/bpe_unigram150/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_th_bpe150_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/farsi_commonvoice_blstm
espnet
2022-05-02T15:50:24Z
5
3
espnet
[ "espnet", "audio", "automatic-speech-recognition", "fa", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-02T15:49:22Z
--- tags: - espnet - audio - automatic-speech-recognition language: fa datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/farsi_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/farsi_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon May 2 11:48:56 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `716eb8f92e19708acfd08ba3bd39d40890d3a84b` - Commit date: `Thu Apr 28 19:50:59 2022 -0400` ## asr_train_asr_rnn_raw_fa_bpe150_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_fa|9728|68904|0.0|0.0|100.0|0.0|100.0|100.0| |decode_rnn_asr_model_valid.acc.best/test_fa|9728|68904|91.4|7.2|1.4|1.0|9.5|30.1| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_fa|9728|331506|0.0|0.0|100.0|0.0|100.0|100.0| |decode_rnn_asr_model_valid.acc.best/test_fa|9728|331506|97.2|1.3|1.5|0.7|3.6|30.1| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_fa|9728|230963|0.0|0.0|100.0|0.0|100.0|100.0| |decode_rnn_asr_model_valid.acc.best/test_fa|9728|230963|95.9|2.4|1.6|0.7|4.7|30.1| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_rnn_raw_fa_bpe150_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 15 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 30 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_fa_bpe150_sp/train/speech_shape - exp/asr_stats_raw_fa_bpe150_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_fa_bpe150_sp/valid/speech_shape - exp/asr_stats_raw_fa_bpe150_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false 
chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_fa_sp/wav.scp - speech - sound - - dump/raw/train_fa_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_fa/wav.scp - speech - sound - - dump/raw/dev_fa/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ی - ا - ه - ▁ - ر - م - و - د - ت - ش - ن - ل - ▁ب - ز - ب - . - ▁م - ان - ▁ا - س - ک - ▁می - گ - ف - ▁د - ؟ - ق - ▁و - ید - ▁ن - ند - ست - ار - ▁چ - ع - ج - ▁ت - ▁ک - ▁با - خ - ون - ▁پ - ▁به - ▁من - ▁س - ▁را - ، - ▁خ - ▁این - ▁کن - ▁آ - ▁در - ای - ▁از - اد - ▁است - ح - ص - ▁ش - ط - ▁تو - ین - ▁دار - ▁که - ال - ▁رو - ▁گ - ▁ج - ور - ام - ▁هم - ▁ح - فت - رد - یم - پ - غ - چ - ذ - ض - ظ - '!' - ث - ً - ئ - '"' - ژ - ك - آ - ي - ':' - ى - '-' - ِ - أ - َ - » - ـ - ',' - ُ - ( - ) - ء - ٔ - ٬ - ّ - ؛ - B - C - A - E - G - M - S - ؤ - I - ; - T - H - _ - F - D - ۀ - Y - N - K - U - – - ٌ - P - O - Q - Z - '&' - L - R - ة - X - ā - '#' - “ - '=' - « - š - ْ - ے - ” - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/fa_token_list/bpe_unigram150/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_fa_bpe150_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/pt_commonvoice_blstm
espnet
2022-05-02T15:39:16Z
3
1
espnet
[ "espnet", "audio", "automatic-speech-recognition", "pt", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-02T15:37:14Z
--- tags: - espnet - audio - automatic-speech-recognition language: pt datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/pt_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/pt_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Apr 11 18:55:23 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b` - Commit date: `Mon Apr 4 21:04:45 2022 -0400` ## asr_train_asr_rnn_raw_pt_bpe150_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_pt|4334|33716|84.7|12.4|2.9|1.3|16.6|46.8| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_pt|4334|191499|93.4|3.0|3.6|1.2|7.8|46.9| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_pt|4334|116003|90.4|5.7|3.9|1.5|11.1|46.9| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_rnn_raw_pt_bpe150_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 15 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 30 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_pt_bpe150_sp/train/speech_shape - exp/asr_stats_raw_pt_bpe150_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_pt_bpe150_sp/valid/speech_shape - exp/asr_stats_raw_pt_bpe150_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_pt_sp/wav.scp - speech - sound - - dump/raw/train_pt_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_pt/wav.scp - speech - sound - 
- dump/raw/dev_pt/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ▁ - S - R - I - U - E - O - A - . - N - M - L - ▁A - ▁DE - RA - ▁O - T - ▁E - ▁UM - C - TA - DO - G - TO - TE - DA - VE - B - NDO - ▁SE - ▁QUE - P - ▁UMA - LA - D - ▁COM - CA - á - '?' - ▁PE - ▁EM - IN - TI - IS - ▁C - H - HO - ▁CA - ▁P - CO - ',' - ▁NO - MA - NTE - PA - ▁NãO - DE - ãO - ▁ME - ▁PARA - Z - ▁MA - VA - PO - ▁DO - ▁VOCê - RI - ▁DI - GA - VI - ▁é - LO - IA - ▁ELE - ▁EU - ▁ESTá - HA - ▁M - X - ▁NA - NA - é - CE - LE - GO - VO - ▁RE - ▁FO - ▁FA - ▁CO - QUE - ▁EST - BE - ▁CON - ó - SE - ▁POR - ê - í - çãO - ▁DA - RES - ▁QUA - ▁HOMEM - RIA - çA - ▁SA - V - ▁PRE - MENTE - ZE - NHA - '-' - ▁BA - MOS - ▁SO - ▁BO - ç - '"' - '!' - ú - ã - K - Y - É - W - ô - Á - ':' - ; - '''' - ” - Ô - ñ - “ - Ú - Í - Ó - ü - À - â - à - õ - J - Q - F - Â - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/pt_token_list/bpe_unigram150/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_pt_bpe150_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/greek_commonvoice_blstm
espnet
2022-05-02T15:35:07Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "el", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-05-02T15:34:01Z
--- tags: - espnet - audio - automatic-speech-recognition language: el datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/greek_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/greek_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Apr 17 19:51:46 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b` - Commit date: `Mon Apr 4 21:04:45 2022 -0400` ## asr_train_asr_rnn_tr_raw_el_bpe150_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_el|1681|10574|90.7|7.7|1.6|0.5|9.9|27.4| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_el|1681|61731|96.6|1.5|1.9|0.6|4.0|27.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_el|1681|44869|95.7|2.4|1.9|0.7|5.0|27.5| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn_tr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_rnn_tr_raw_el_bpe150_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 16 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_el_bpe150_sp/train/speech_shape - exp/asr_stats_raw_el_bpe150_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_el_bpe150_sp/valid/speech_shape - exp/asr_stats_raw_el_bpe150_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_el_sp/wav.scp - speech - sound - - dump/raw/train_el_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_el/wav.scp - speech - 
sound - - dump/raw/dev_el/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ▁ - α - ν - ρ - ι - ε - ο - τ - ς - λ - ά - σ - κ - ό - . - ί - ▁π - έ - ω - π - γ - η - μ - υ - ',' - ή - ▁το - χ - θ - ώ - ▁και - ▁του - δ - τα - αν - ει - ▁να - ▁σ - ου - σε - ▁κ - ύ - ού - φ - στ - ρα - ια - ▁μ - ▁δ - ▁έ - τι - β - ρι - μα - πο - εί - ▁φ - ▁με - κα - ▁α - ος - ; - ▁χ - '!' - ▁β - ες - ▁στο - τε - ▁γ - '"' - τη - ▁ο - ▁Π - ▁δε - ▁που - ▁μου - με - ▁τα - σα - λα - Μ - ιά - ▁από - εις - ▁την - έρ - ▁ε - ▁τον - ρά - λο - ▁είπε - ▁μα - ψ - Τ - '''' - Κ - Σ - Ε - Α - Θ - '-' - Η - Ά - Ν - Δ - Χ - ’ - Ξ - » - Π - ΐ - Ώ - Ο - A - O - · - ':' - E - G - H - N - R - T - V - Υ - ϋ - Ψ - ́ - ‘ - Ι - Ί - Ρ - Ω - « - Ύ - Ζ - ϊ - Ή - Φ - Λ - Ό - Γ - Έ - Β - ζ - M - ξ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/el_token_list/bpe_unigram150/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_el_bpe150_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
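For quick decoding outside the full recipe, the checkpoint can also be used through ESPnet2's Python inference API. A minimal sketch, assuming `espnet_model_zoo` and `soundfile` are installed and that `sample_el.wav` is a placeholder path to a 16 kHz mono Greek recording:

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# downloads the checkpoint from the Hub and builds the inference wrapper
speech2text = Speech2Text.from_pretrained("espnet/greek_commonvoice_blstm")

# load a 16 kHz mono waveform (placeholder path)
speech, rate = soundfile.read("sample_el.wav")

# decode; each n-best entry is (text, tokens, token_ids, hypothesis)
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```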
fahadtouseef/wav2vec2-base-timit-demo-colab_2
fahadtouseef
2022-05-02T14:18:38Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-02T11:50:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab_2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3801 - Wer: 0.3035 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.7227 | 3.52 | 500 | 2.6961 | 1.0 | | 1.1237 | 7.04 | 1000 | 0.6088 | 0.5315 | | 0.4886 | 10.56 | 1500 | 0.4709 | 0.4353 | | 0.3148 | 14.08 | 2000 | 0.4341 | 0.3942 | | 0.2229 | 17.61 | 2500 | 0.4035 | 0.3616 | | 0.1693 | 21.13 | 3000 | 0.3868 | 0.3289 | | 0.1393 | 24.65 | 3500 | 0.3993 | 0.3135 | | 0.118 | 28.17 | 4000 | 0.3801 | 0.3035 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
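The card does not include an inference example. A minimal sketch for transcribing a short recording, assuming the processor was saved with the checkpoint, `librosa` is installed, and `sample.wav` is a placeholder path:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "fahadtouseef/wav2vec2-base-timit-demo-colab_2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# load and resample the audio to the 16 kHz rate expected by wav2vec2-base
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```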
umanlp/TOD-XLMR
umanlp
2022-05-02T14:16:51Z
13
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "exbert", "multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-21T09:29:28Z
--- tags: - exbert language: multilingual license: mit --- # TOD-XLMR TOD-XLMR is a conversationally specialized multilingual model based on [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base). It is pre-trained on English conversational corpora consisting of nine human-to-human multi-turn task-oriented dialog (TOD) datasets, as proposed in the paper [TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue](https://aclanthology.org/2020.emnlp-main.66.pdf) by Wu et al. and first released in [this repository](https://huggingface.co/TODBERT). The model is jointly trained with the two objectives proposed in TOD-BERT: masked language modeling (MLM) and response contrastive loss (RCL). Masked language modeling is a common pretraining strategy for BERT-based architectures, in which a random sample of input tokens is replaced with the special token [MASK] and the model predicts the original tokens. To further encourage the model to capture dialogic structure (i.e., dialog sequential order), the response contrastive loss uses in-batch negatives for contrastive learning. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ``` from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("umanlp/TOD-XLMR") model = AutoModelForMaskedLM.from_pretrained("umanlp/TOD-XLMR") # prepare input text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') # forward pass output = model(**encoded_input) ``` You can also use `AutoModel` to load the pretrained encoder and apply it to downstream tasks: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("umanlp/TOD-XLMR") model = AutoModel.from_pretrained("umanlp/TOD-XLMR") # prepare input text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') # forward pass output = model(**encoded_input) ```
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T14:00:18Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T13:19:37Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0557 - Precision: 0.9930 - Recall: 0.9878 - F1: 0.9904 - Accuracy: 0.9814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 479 | 0.3334 | 0.9041 | 0.9041 | 0.9041 | 0.8550 | | 0.3756 | 2.0 | 958 | 0.3095 | 0.8991 | 0.9251 | 0.9119 | 0.8649 | | 0.2653 | 3.0 | 1437 | 0.3603 | 0.8929 | 0.9527 | 0.9218 | 0.8779 | | 0.1991 | 4.0 | 1916 | 0.3907 | 0.8919 | 0.9540 | 0.9219 | 0.8779 | | 0.1586 | 5.0 | 2395 | 0.3642 | 0.9070 | 0.9356 | 0.9211 | 0.8788 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
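Since the intended label meanings are not documented, one way to inspect the model is to run it through the `text-classification` pipeline and look at all class scores. A minimal sketch; the example sentence is arbitrary:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ali2066/DistilBERTFINAL_ctxSentence_TRAIN_all_TEST_NULL_second_train_set_null_False",
)

# return_all_scores exposes every class, since the label semantics are not documented
print(classifier("This is an example sentence.", return_all_scores=True))
```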
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T13:37:28Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T13:12:40Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERTFINAL_ctxSentence_TRAIN_webDiscourse_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2555 - Precision: 1.0 - Recall: 0.0200 - F1: 0.0393 - Accuracy: 0.0486 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 95 | 0.5756 | nan | 0.0 | nan | 0.715 | | No log | 2.0 | 190 | 0.5340 | 0.6429 | 0.1579 | 0.2535 | 0.735 | | No log | 3.0 | 285 | 0.5298 | 0.5833 | 0.3684 | 0.4516 | 0.745 | | No log | 4.0 | 380 | 0.5325 | 0.5789 | 0.3860 | 0.4632 | 0.745 | | No log | 5.0 | 475 | 0.5452 | 0.4815 | 0.4561 | 0.4685 | 0.705 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False
ali2066
2022-05-02T13:33:27Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T13:10:30Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERTFINAL_ctxSentence_TRAIN_essays_TEST_NULL_second_train_set_null_False This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7680 - Precision: 0.9838 - Recall: 0.6632 - F1: 0.7923 - Accuracy: 0.6624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 130 | 0.2980 | 0.9315 | 0.9533 | 0.9423 | 0.9081 | | No log | 2.0 | 260 | 0.2053 | 0.9537 | 0.9626 | 0.9581 | 0.9338 | | No log | 3.0 | 390 | 0.1873 | 0.9464 | 0.9907 | 0.9680 | 0.9485 | | 0.3064 | 4.0 | 520 | 0.1811 | 0.9585 | 0.9720 | 0.9652 | 0.9449 | | 0.3064 | 5.0 | 650 | 0.1887 | 0.9587 | 0.9766 | 0.9676 | 0.9485 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab92
hassnain
2022-05-02T11:09:44Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T12:40:27Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab92 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab92 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6596 - eval_wer: 0.4164 - eval_runtime: 55.6472 - eval_samples_per_second: 12.615 - eval_steps_per_second: 1.581 - epoch: 2.85 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
cfilt/HiNER-original-xlm-roberta-large
cfilt
2022-05-02T10:19:28Z
90
1
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:cfilt/HiNER-original", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-01T07:38:35Z
--- tags: - generated_from_trainer datasets: - cfilt/HiNER-original metrics: - precision - recall - f1 model-index: - name: HiNER-original-xlm-roberta-large results: - task: name: Token Classification type: token-classification dataset: type: cfilt/HiNER-original name: HiNER Original metrics: - name: Precision type: precision value: 0.8968858782575971 - name: Recall type: recall value: 0.8871207891308394 - name: F1 type: f1 value: 0.8919766081871345 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HiNER-original-xlm-roberta-large This model was trained from scratch on HiNER-original dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Framework versions - Transformers 4.14.0 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
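A minimal usage sketch with the `token-classification` pipeline, assuming the label mapping stored with the checkpoint follows the HiNER tag set; the Hindi sentence is only illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cfilt/HiNER-original-xlm-roberta-large",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# illustrative sentence: "Mahatma Gandhi was born in Porbandar."
text = "महात्मा गांधी का जन्म पोरबंदर में हुआ था।"
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```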
kyryl0s/gpt2-uk-xxs
kyryl0s
2022-05-02T09:14:29Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "uk", "license:afl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-06T14:04:49Z
--- license: afl-3.0 language: uk --- ## GPT2 being trained on Ukrainian news. ### General info: The model is not ready yet but I'm working on it. It also has a relatively small context window, which makes it quite uninteresting. ### Example of usage: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("kyryl0s/gpt2-uk-xxs") model = AutoModelForCausalLM.from_pretrained("kyryl0s/gpt2-uk-xxs") input_ids = tokenizer.encode("Путін — ", add_special_tokens=False, return_tensors='pt') outputs = model.generate( input_ids, do_sample=True, num_return_sequences=3, max_length=50 ) for i, out in enumerate(outputs): print("{}: {}".format(i, tokenizer.decode(out))) ```
driboune/skin_type
driboune
2022-05-02T08:08:40Z
183
3
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-04-29T15:59:55Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: skin_type results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8222222328186035 --- # skin_type When aiming for fairness in image classification of humans, knowing the subjects' skin type helps verify that a model performs correctly across all skin types. Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### dark skin ![dark skin](images/dark_skin.jpg) #### light skin ![light skin](images/light_skin.jpg)
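A minimal usage sketch with the `image-classification` pipeline; `face.jpg` is a placeholder path to a local image of a person:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="driboune/skin_type")

# "face.jpg" is a placeholder path to an image of a person
for prediction in classifier("face.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```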
crcb/emo_go_new
crcb
2022-05-02T04:17:02Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain", "unk", "dataset:crcb/autotrain-data-go_emo_new", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T04:07:25Z
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - crcb/autotrain-data-go_emo_new co2_eq_emissions: 20.58663910106142 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 813325491 - CO2 Emissions (in grams): 20.58663910106142 ## Validation Metrics - Loss: 1.3628994226455688 - Accuracy: 0.5920355494787216 - Macro F1: 0.4844439507523978 - Micro F1: 0.5920355494787216 - Weighted F1: 0.5873137663478112 - Macro Precision: 0.5458988948121151 - Micro Precision: 0.5920355494787216 - Weighted Precision: 0.591386299522425 - Macro Recall: 0.4753100798358001 - Micro Recall: 0.5920355494787216 - Weighted Recall: 0.5920355494787216 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crcb/autotrain-go_emo_new-813325491 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("crcb/autotrain-go_emo_new-813325491", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("crcb/autotrain-go_emo_new-813325491", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
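The Python snippet above stops at the raw forward pass. A short sketch that continues to a predicted label, assuming the checkpoint stores an `id2label` mapping for the emotion classes:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "crcb/autotrain-go_emo_new-813325491", use_auth_token=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "crcb/autotrain-go_emo_new-813325491", use_auth_token=True
)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the highest-scoring logit back to its class name via the stored id2label mapping
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```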
DioLiu/distilbert-base-uncased-finetuned-sst2
DioLiu
2022-05-02T03:06:36Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-02T02:28:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.8967889908256881 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5963 - Accuracy: 0.8968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.247 | 1.0 | 1404 | 0.3629 | 0.8865 | | 0.1532 | 2.0 | 2808 | 0.3945 | 0.8979 | | 0.0981 | 3.0 | 4212 | 0.4206 | 0.9025 | | 0.0468 | 4.0 | 5616 | 0.5358 | 0.9014 | | 0.0313 | 5.0 | 7020 | 0.5963 | 0.8968 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Ghost1/bert-finetuned-squad1
Ghost1
2022-05-02T02:28:59Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-05-02T00:04:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
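A minimal usage sketch with the `question-answering` pipeline; the context and question are only illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Ghost1/bert-finetuned-squad1")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```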
voodooMaestro/finetuned-stories
voodooMaestro
2022-05-02T00:24:29Z
4
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-01T23:31:33Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: voodooMaestro/finetuned-stories results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # voodooMaestro/finetuned-stories This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9188 - Validation Loss: 1.5604 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.9188 | 1.5604 | 0 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
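A minimal usage sketch with the `fill-mask` pipeline, assuming TensorFlow is installed since the checkpoint ships TF weights; RoBERTa-style tokenizers use `<mask>` as the mask token:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="voodooMaestro/finetuned-stories")

# RoBERTa-style tokenizers use "<mask>" as the mask token
for prediction in fill_mask("Once upon a time there was a <mask> in the forest."):
    print(prediction["token_str"], round(prediction["score"], 3))
```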
SebastianS/bert-finetuned-ner
SebastianS
2022-05-01T21:38:30Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-01T21:12:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Accuracy type: accuracy value: 0.9910634321093416 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0452 - Accuracy: 0.9911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0544 | 1.0 | 1756 | 0.0440 | 0.9892 | | 0.0246 | 2.0 | 3512 | 0.0417 | 0.9906 | | 0.0105 | 3.0 | 5268 | 0.0452 | 0.9911 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
cfilt/HiNER-collapsed-muril-base-cased
cfilt
2022-05-01T19:48:15Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:cfilt/HiNER-collapsed", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-29T17:19:39Z
--- tags: - generated_from_trainer datasets: - cfilt/HiNER-collapsed metrics: - precision - recall - f1 model-index: - name: HiNER-collapsed-muril-base-cased results: - task: name: Token Classification type: token-classification dataset: type: cfilt/HiNER-collapsed name: HiNER Collapsed metrics: - name: Precision type: precision value: 0.9049101352603298 - name: Recall type: recall value: 0.9209156735555891 - name: F1 type: f1 value: 0.9128427506027924 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HiNER-collapsed-muril-base-cased This model was trained from scratch on the cfilt/HiNER-collapsed dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Framework versions - Transformers 4.14.0 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
cfilt/HiNER-collapsed-xlm-roberta-large
cfilt
2022-05-01T19:47:49Z
95
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:cfilt/HiNER-collapsed", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-01T06:43:57Z
--- tags: - generated_from_trainer datasets: - cfilt/HiNER-collapsed metrics: - precision - recall - f1 model-index: - name: HiNER-collapsed-xlm-roberta-base results: - task: name: Token Classification type: token-classification dataset: type: cfilt/HiNER-collapsed name: HiNER Collapsed metrics: - name: Precision type: precision value: 0.9137448834064936 - name: Recall type: recall value: 0.9296549644788663 - name: F1 type: f1 value: 0.9216312652954473 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HiNER-collapsed-xlm-roberta-base This model was trained from scratch on the cfilt/HiNER-collapsed dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 1 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Framework versions - Transformers 4.14.0 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
tomh/toxigen_roberta
tomh
2022-05-01T19:42:09Z
17,839
8
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "arxiv:2203.09509", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-01T13:19:41Z
--- language: - en tags: - text-classification --- By Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. This model comes from the paper [ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection](https://arxiv.org/abs/2203.09509) and can be used to detect implicit hate speech. Please visit the [GitHub repository](https://github.com/microsoft/TOXIGEN) for the training dataset and further details. ```bibtex @inproceedings{hartvigsen2022toxigen, title = "{T}oxi{G}en: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection", author = "Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics", year = "2022" } ```
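A minimal usage sketch with the `text-classification` pipeline; the returned label names depend on the `id2label` mapping stored with the checkpoint:

```python
from transformers import pipeline

detector = pipeline("text-classification", model="tomh/toxigen_roberta")

# label names come from the checkpoint's id2label mapping
print(detector("I really enjoyed meeting my new neighbors today."))
```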
voidism/diffcse-roberta-base-sts
voidism
2022-05-01T19:30:19Z
8
1
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "arxiv:2204.10298", "arxiv:2104.08821", "arxiv:2111.00899", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-04-14T15:19:51Z
--- license: apache-2.0 --- # DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings [![GitHub Stars](https://img.shields.io/github/stars/voidism/DiffCSE?style=social)](https://github.com/voidism/DiffCSE/) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb) arXiv link: https://arxiv.org/abs/2204.10298 To be published in [**NAACL 2022**](https://2022.naacl.org/) Authors: [Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/), [Rumen Dangovski](http://super-ms.mit.edu/rumen.html), [Hongyin Luo](http://people.csail.mit.edu/hyluo/), [Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/), [Shiyu Chang](https://code-terminator.github.io/), [Marin Soljačić](http://www.mit.edu/~soljacic/marin.html), [Shang-Wen Li](https://swdanielli.github.io/), [Scott Wen-tau Yih](https://scottyih.org/), [Yoon Kim](https://people.csail.mit.edu/yoonkim/), [James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml) Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information. ## Overview ![DiffCSE](https://github.com/voidism/DiffCSE/raw/master/diffcse.png) We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffSCE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks. ## Setups [![Python](https://img.shields.io/badge/python-3.9.5-blue?logo=python&logoColor=FED643)](https://www.python.org/downloads/release/python-395/) ### Requirements * Python 3.9.5 ### Install our customized Transformers package ``` cd transformers-4.2.1 pip install . ``` > If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_roberta.py`. > We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package. ### Install other packages ``` pip install -r requirements.txt ``` ### Download the pretraining dataset ``` cd data bash download_wiki.sh ``` ### Download the downstream dataset ``` cd SentEval/data/downstream/ bash download_dataset.sh ``` ## Training (The same as `run_diffcse.sh`.) 
```bash python train.py \ --model_name_or_path bert-base-uncased \ --generator_name distilbert-base-uncased \ --train_file data/wiki1m_for_simcse.txt \ --output_dir <your_output_model_dir> \ --num_train_epochs 2 \ --per_device_train_batch_size 64 \ --learning_rate 7e-6 \ --max_seq_length 32 \ --evaluation_strategy steps \ --metric_for_best_model stsb_spearman \ --load_best_model_at_end \ --eval_steps 125 \ --pooler_type cls \ --mlp_only_train \ --overwrite_output_dir \ --logging_first_step \ --logging_dir <your_logging_dir> \ --temp 0.05 \ --do_train \ --do_eval \ --batchnorm \ --lambda_weight 0.005 \ --fp16 --masking_ratio 0.30 ``` Our new arguments: * `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper. * `--masking_ratio`: the masking ratio for MLM generator to randomly replace tokens. * `--generator_name`: the model name of generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`. Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE): * `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`). * `--model_name_or_path`: Pre-trained checkpoints to start with such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`RoBERTa-base`, `RoBERTa-large`). * `--temp`: Temperature for the contrastive loss. We always use `0.05`. * `--pooler_type`: Pooling method. * `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with MLP layer but test the model without it. You should use this argument when training unsupervised SimCSE/DiffCSE models. For the results in our paper, we use a NVidia 2080Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance. ## Evaluation [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb) We provide a simple colab notebook to reproduce our results easily. We can also run the commands below for evaluation: ```bash python evaluation.py \ --model_name_or_path <your_output_model_dir> \ --pooler cls_before_pooler \ --task_set <sts|transfer|full> \ --mode test ``` To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts: ### BERT #### STS ```bash python evaluation.py \ --model_name_or_path voidism/diffcse-bert-base-uncased-sts \ --pooler cls_before_pooler \ --task_set sts \ --mode test ``` #### Transfer Tasks ```bash python evaluation.py \ --model_name_or_path voidism/diffcse-bert-base-uncased-trans \ --pooler cls_before_pooler \ --task_set transfer \ --mode test ``` ### RoBERTa #### STS ```bash python evaluation.py \ --model_name_or_path voidism/diffcse-roberta-base-sts \ --pooler cls_before_pooler \ --task_set sts \ --mode test ``` #### Transfer Tasks ```bash python evaluation.py \ --model_name_or_path voidism/diffcse-roberta-base-trans \ --pooler cls_before_pooler \ --task_set transfer \ --mode test ``` For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE). 
## Pretrained models [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97-Models-yellow)](https://huggingface.co/voidism) * DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts * DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans * DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts * DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE). See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information. ```python from diffcse import DiffCSE model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts") model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans") model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts") model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans") ``` ## Citations [![DOI](https://img.shields.io/badge/DOI-10.48550/arXiv.2204.10298-green?color=FF8000?color=009922)](https://doi.org/10.48550/arXiv.2204.10298) Please cite our paper and the SimCSE paper if they are helpful to your work! ```bibtex @inproceedings{chuang2022diffcse, title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings}, author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James}, booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)}, year={2022} } @inproceedings{gao2021simcse, title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings}, author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi}, booktitle={Empirical Methods in Natural Language Processing (EMNLP)}, year={2021} } ```
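The checkpoints can also be loaded directly with `transformers` instead of the `diffcse` wrapper. A sketch under the assumption that the repository loads as a standard RoBERTa encoder and that the [CLS] hidden state serves as the sentence embedding, mirroring the `cls_before_pooler` setting in the evaluation commands above:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "voidism/diffcse-roberta-base-sts"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["A man is playing a guitar.", "Someone is playing an instrument."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# take the [CLS] (first-token) hidden state as the sentence embedding
embeddings = outputs.last_hidden_state[:, 0]
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(float(similarity))
```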
ietz/token-paraphrase-MiniLM-L6-v2
ietz
2022-05-01T19:28:23Z
5
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-05T19:46:25Z
--- license: apache-2.0 ---
hassnain/wav2vec2-base-timit-demo-colab57
hassnain
2022-05-01T18:17:07Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T17:06:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab57 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab57 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7328 - Wer: 0.4593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.9876 | 7.04 | 500 | 3.1483 | 1.0 | | 1.4621 | 14.08 | 1000 | 0.6960 | 0.6037 | | 0.4404 | 21.13 | 1500 | 0.6392 | 0.5630 | | 0.2499 | 28.17 | 2000 | 0.6738 | 0.5281 | | 0.1732 | 35.21 | 2500 | 0.6789 | 0.4952 | | 0.1347 | 42.25 | 3000 | 0.7328 | 0.4835 | | 0.1044 | 49.3 | 3500 | 0.7258 | 0.4840 | | 0.0896 | 56.34 | 4000 | 0.7328 | 0.4593 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab53
hassnain
2022-05-01T17:13:03Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T14:11:29Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab53 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab53 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2003 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 5.619 | 7.04 | 500 | 3.2338 | 1.0 | | 3.1855 | 14.08 | 1000 | 3.1968 | 1.0 | | 3.1669 | 21.13 | 1500 | 3.1796 | 1.0 | | 3.1586 | 28.17 | 2000 | 3.2003 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
rjuez00/meddocan-beto-ner
rjuez00
2022-05-01T16:23:58Z
8
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-05-01T16:21:07Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: beto_full_train_3_epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beto_full_train_3_epochs This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0445 - Precision: 0.9541 - Recall: 0.9481 - F1: 0.9511 - Accuracy: 0.9951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.11.6
Siyam/SKYLy
Siyam
2022-05-01T16:02:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T08:47:50Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: SKYLy results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SKYLy This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7645 - Wer: 0.4083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.4215 | 4.26 | 400 | 1.6323 | 0.9857 | | 0.5716 | 8.51 | 800 | 0.6679 | 0.5107 | | 0.1721 | 12.77 | 1200 | 0.6935 | 0.4632 | | 0.1063 | 17.02 | 1600 | 0.7533 | 0.4432 | | 0.0785 | 21.28 | 2000 | 0.7208 | 0.4255 | | 0.0608 | 25.53 | 2400 | 0.7481 | 0.4117 | | 0.0493 | 29.79 | 2800 | 0.7645 | 0.4083 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 2.1.0 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab9
hassnain
2022-05-01T15:58:30Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T09:32:36Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab9 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1922 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:---:| | 5.0683 | 1.42 | 500 | 3.2471 | 1.0 | | 3.1349 | 2.85 | 1000 | 3.2219 | 1.0 | | 3.1317 | 4.27 | 1500 | 3.2090 | 1.0 | | 3.1262 | 5.7 | 2000 | 3.2152 | 1.0 | | 3.1307 | 7.12 | 2500 | 3.2147 | 1.0 | | 3.1264 | 8.55 | 3000 | 3.2072 | 1.0 | | 3.1279 | 9.97 | 3500 | 3.2158 | 1.0 | | 3.1287 | 11.4 | 4000 | 3.2190 | 1.0 | | 3.1256 | 12.82 | 4500 | 3.2069 | 1.0 | | 3.1254 | 14.25 | 5000 | 3.2134 | 1.0 | | 3.1259 | 15.67 | 5500 | 3.2231 | 1.0 | | 3.1269 | 17.09 | 6000 | 3.2005 | 1.0 | | 3.1279 | 18.52 | 6500 | 3.1988 | 1.0 | | 3.1246 | 19.94 | 7000 | 3.1929 | 1.0 | | 3.128 | 21.37 | 7500 | 3.1864 | 1.0 | | 3.1245 | 22.79 | 8000 | 3.1868 | 1.0 | | 3.1266 | 24.22 | 8500 | 3.1852 | 1.0 | | 3.1239 | 25.64 | 9000 | 3.1855 | 1.0 | | 3.125 | 27.07 | 9500 | 3.1917 | 1.0 | | 3.1233 | 28.49 | 10000 | 3.1929 | 1.0 | | 3.1229 | 29.91 | 10500 | 3.1922 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab647
hassnain
2022-05-01T15:54:24Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T14:42:45Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab647 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab647 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5534 - Wer: 0.4799 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.2072 | 7.04 | 500 | 3.7757 | 1.0 | | 1.2053 | 14.08 | 1000 | 0.6128 | 0.5648 | | 0.3922 | 21.13 | 1500 | 0.5547 | 0.5035 | | 0.2157 | 28.17 | 2000 | 0.5534 | 0.4799 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
Yanael/bert-finetuned-mrpc
Yanael
2022-05-01T15:25:05Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-01T14:54:36Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue model-index: - name: bert-finetuned-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.8.1+cu102 - Datasets 2.1.0 - Tokenizers 0.12.1
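Since MRPC is a sentence-pair task, both sentences are passed to the tokenizer together. A minimal sketch; the label names depend on the `id2label` mapping saved with the checkpoint (they may appear as LABEL_0/LABEL_1):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Yanael/bert-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair task: encode the two sentences together
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

print({model.config.id2label[i]: round(float(p), 4) for i, p in enumerate(probs)})
```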
Rodion/sbert_uno_sustainable_development_goals
Rodion
2022-05-01T14:33:23Z
64
3
transformers
[ "transformers", "pytorch", "mpnet", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-04-26T05:14:40Z
The SBERT model was trained on the dataset of UNO sustainable development goals. The total dataset size is 20000 records; 16000 were used for training and 4000 for evaluation. The similarity between records was calculated from their class overlap: case 1 (no common classes): 0; case 2: (number of common classes)/(number of all classes); case 3: (number of common classes)/(maximal number of record classes) + (number of common classes)/(number of all classes). --- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 219 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 2, "evaluation_steps": 5, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 0, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
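As referenced at the top of this card, the sketch below illustrates the class-overlap scoring used to label record pairs. The record structure, the choice of SDG class sets, and the rule for selecting among the three cases are illustrative assumptions; the card only gives the formulas themselves.

```python
def class_overlap_score(classes_a, classes_b, all_classes, case=2):
    """Similarity label between two records based on their shared classes.

    case 1: no common classes -> 0
    case 2: |common| / |all classes|
    case 3: |common| / max(|classes_a|, |classes_b|) + |common| / |all classes|
    """
    common = set(classes_a) & set(classes_b)
    if not common:
        return 0.0  # case 1
    if case == 2:
        return len(common) / len(all_classes)
    return len(common) / max(len(classes_a), len(classes_b)) + len(common) / len(all_classes)


# Hypothetical records tagged with SDG classes (goal numbers)
all_sdg_classes = list(range(1, 18))   # the 17 sustainable development goals
record_a = [3, 5, 13]                  # e.g. health, gender equality, climate
record_b = [5, 13, 17]

print(class_overlap_score(record_a, record_b, all_sdg_classes, case=2))  # 2/17
print(class_overlap_score(record_a, record_b, all_sdg_classes, case=3))  # 2/3 + 2/17
```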
hassnain/wav2vec2-base-timit-demo-colab50
hassnain
2022-05-01T13:32:25Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T10:57:02Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab50
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-timit-demo-colab50

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2257
- Wer: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.4568        | 7.04  | 500  | 3.3002          | 1.0 |
| 3.1795        | 14.08 | 1000 | 3.2170          | 1.0 |
| 3.1607        | 21.13 | 1500 | 3.2119          | 1.0 |
| 3.1537        | 28.17 | 2000 | 3.2257          | 1.0 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
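None of the wav2vec2 cards in this dump include a usage snippet, so a minimal transcription sketch is shown below using the standard `transformers` ASR pipeline. The audio file path is a placeholder, and 16 kHz mono input is assumed (the sampling rate wav2vec2-base was pretrained on); the pipeline will resample on the fly only if ffmpeg is installed.

```python
from transformers import pipeline

# Any of the fine-tuned wav2vec2 checkpoints listed in this dump can be substituted here
asr = pipeline("automatic-speech-recognition", model="hassnain/wav2vec2-base-timit-demo-colab50")

# "sample.wav" is a placeholder path to a short speech clip
result = asr("sample.wav")
print(result["text"])
```

Note that this particular checkpoint reports a WER of 1.0, so its transcriptions will not be useful in practice; the same snippet applies unchanged to the better-performing checkpoints further down (for example `sameearif88/wav2vec2-base-timit-demo-colab10`, WER 0.3425).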
hassnain/wav2vec2-base-timit-demo-colab52
hassnain
2022-05-01T12:59:06Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T12:14:35Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab52 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab52 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7941 - Wer: 0.7501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3424 | 7.04 | 500 | 3.3225 | 1.0 | | 2.518 | 14.08 | 1000 | 1.5884 | 0.8300 | | 1.0217 | 21.13 | 1500 | 1.6643 | 0.7719 | | 0.6074 | 28.17 | 2000 | 1.7941 | 0.7501 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab30
hassnain
2022-05-01T12:46:21Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T10:21:09Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab30 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab30 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8496 - Wer: 0.6534 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.2705 | 14.71 | 500 | 3.1073 | 1.0 | | 1.3631 | 29.41 | 1000 | 0.8496 | 0.6534 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab51
hassnain
2022-05-01T11:59:55Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T11:15:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab51 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab51 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8395 - Wer: 0.7480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.481 | 7.04 | 500 | 3.2834 | 1.0 | | 2.2521 | 14.08 | 1000 | 1.6333 | 0.8093 | | 0.9467 | 21.13 | 1500 | 1.7458 | 0.7560 | | 0.5888 | 28.17 | 2000 | 1.8395 | 0.7480 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
huggingtweets/sandspiel_feed
huggingtweets
2022-05-01T11:28:20Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-01T10:34:20Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1073861926097117184/FB3bBgcN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">sandspiel</div> <div style="text-align: center; font-size: 14px;">@sandspiel_feed</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from sandspiel. | Data | sandspiel | | --- | --- | | Tweets downloaded | 3200 | | Retweets | 2 | | Short tweets | 1506 | | Tweets kept | 1692 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fvrcwe0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sandspiel_feed's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24l7h3az) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24l7h3az/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sandspiel_feed') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sameearif88/wav2vec2-base-timit-demo-colab7
sameearif88
2022-05-01T11:12:28Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T10:15:02Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab7 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6917 - Wer: 0.5426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1400 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.1854 | 13.89 | 500 | 3.1687 | 1.0 | | 1.7033 | 27.78 | 1000 | 0.7289 | 0.5659 | | 0.4208 | 41.67 | 1500 | 0.6917 | 0.5426 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
huggingtweets/a_ergt-sausifaktai-suuiluap
huggingtweets
2022-05-01T11:05:56Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-05-01T11:05:49Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1512730099614953472/dyaBioOx_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/703268070962372608/sWc1Y_Ch_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/783999503711997952/BHnn3C1Z_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Æ𝚐𝚛𝚝 & Sausi Faktai & Pαulius</div> <div style="text-align: center; font-size: 14px;">@a_ergt-sausifaktai-suuiluap</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Æ𝚐𝚛𝚝 & Sausi Faktai & Pαulius. | Data | Æ𝚐𝚛𝚝 | Sausi Faktai | Pαulius | | --- | --- | --- | --- | | Tweets downloaded | 3241 | 3194 | 3192 | | Retweets | 299 | 19 | 811 | | Short tweets | 977 | 16 | 484 | | Tweets kept | 1965 | 3159 | 1897 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bn9w1ob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @a_ergt-sausifaktai-suuiluap's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3txmfh51) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3txmfh51/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/a_ergt-sausifaktai-suuiluap') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sameearif88/wav2vec2-base-timit-demo-colab10
sameearif88
2022-05-01T11:00:20Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T09:25:20Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab10 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4460 - Wer: 0.3425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.9891 | 3.52 | 500 | 3.1554 | 1.0 | | 1.71 | 7.04 | 1000 | 0.7122 | 0.5811 | | 0.6164 | 10.56 | 1500 | 0.5149 | 0.4880 | | 0.4188 | 14.08 | 2000 | 0.4726 | 0.4344 | | 0.3038 | 17.61 | 2500 | 0.4765 | 0.4092 | | 0.2312 | 21.13 | 3000 | 0.4387 | 0.3765 | | 0.1867 | 24.65 | 3500 | 0.4411 | 0.3583 | | 0.1582 | 28.17 | 4000 | 0.4460 | 0.3425 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab11
hassnain
2022-05-01T10:54:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T09:49:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab11 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6269 - Wer: 0.7418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.6439 | 7.04 | 500 | 3.3083 | 1.0 | | 2.3763 | 14.08 | 1000 | 1.5059 | 0.8146 | | 1.0161 | 21.13 | 1500 | 1.5101 | 0.7488 | | 0.6195 | 28.17 | 2000 | 1.6269 | 0.7418 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab7
hassnain
2022-05-01T09:02:18Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T07:40:34Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab7 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1687 - Wer: 0.6478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8409 | 7.04 | 500 | 3.1487 | 1.0 | | 2.6259 | 14.08 | 1000 | 1.5598 | 0.8730 | | 1.083 | 21.13 | 1500 | 1.0600 | 0.7347 | | 0.6061 | 28.17 | 2000 | 1.0697 | 0.7006 | | 0.4022 | 35.21 | 2500 | 1.0617 | 0.6913 | | 0.2884 | 42.25 | 3000 | 1.1962 | 0.6768 | | 0.225 | 49.3 | 3500 | 1.1753 | 0.6567 | | 0.1852 | 56.34 | 4000 | 1.1687 | 0.6478 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
cuzeverynameistaken/wav2vec2-base-timit-demo-colab0
cuzeverynameistaken
2022-05-01T08:59:37Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T21:06:44Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab0 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6960 - Wer: 0.5694 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3196 | 13.89 | 500 | 3.1225 | 1.0 | | 1.2756 | 27.78 | 1000 | 0.6960 | 0.5694 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
sameearif88/wav2vec2-base-timit-demo-colab4
sameearif88
2022-05-01T08:37:50Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T07:59:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab4 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9149 - Wer: 0.5907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.9363 | 13.89 | 500 | 2.7532 | 1.0 | | 0.9875 | 27.78 | 1000 | 0.9149 | 0.5907 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
sherry7144/wav2vec2-base-timit-demo-colab1
sherry7144
2022-05-01T08:08:05Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T07:01:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0358 - Wer: 0.5729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3217 | 13.89 | 500 | 0.8951 | 0.5834 | | 0.2263 | 27.78 | 1000 | 1.0358 | 0.5729 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
shumail/wav2vec2-base-timit-demo-colab
shumail
2022-05-01T07:13:08Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T12:34:29Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8686 - Wer: 0.6263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0505 | 13.89 | 500 | 3.0760 | 1.0 | | 1.2748 | 27.78 | 1000 | 0.8686 | 0.6263 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab3
hassnain
2022-05-01T07:06:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-01T00:50:44Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1016 - Wer: 0.6704 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0006 | 13.89 | 500 | 3.0706 | 1.0 | | 1.8796 | 27.78 | 1000 | 1.1154 | 0.7414 | | 0.548 | 41.67 | 1500 | 1.0826 | 0.7034 | | 0.2747 | 55.56 | 2000 | 1.1016 | 0.6704 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
hassnain/wav2vec2-base-timit-demo-colab1
hassnain
2022-05-01T05:22:37Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T22:09:18Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1904 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:---:| | 5.0877 | 1.42 | 500 | 3.2909 | 1.0 | | 3.1333 | 2.85 | 1000 | 3.2624 | 1.0 | | 3.1335 | 4.27 | 1500 | 3.2121 | 1.0 | | 3.1294 | 5.7 | 2000 | 3.2047 | 1.0 | | 3.1307 | 7.12 | 2500 | 3.2020 | 1.0 | | 3.1279 | 8.55 | 3000 | 3.1978 | 1.0 | | 3.1296 | 9.97 | 3500 | 3.2015 | 1.0 | | 3.1273 | 11.4 | 4000 | 3.1983 | 1.0 | | 3.1273 | 12.82 | 4500 | 3.2258 | 1.0 | | 3.1274 | 14.25 | 5000 | 3.2151 | 1.0 | | 3.1256 | 15.67 | 5500 | 3.2105 | 1.0 | | 3.1302 | 17.09 | 6000 | 3.2018 | 1.0 | | 3.1285 | 18.52 | 6500 | 3.2006 | 1.0 | | 3.1251 | 19.94 | 7000 | 3.1858 | 1.0 | | 3.1283 | 21.37 | 7500 | 3.1829 | 1.0 | | 3.1267 | 22.79 | 8000 | 3.1773 | 1.0 | | 3.1283 | 24.22 | 8500 | 3.1857 | 1.0 | | 3.1253 | 25.64 | 9000 | 3.1847 | 1.0 | | 3.1251 | 27.07 | 9500 | 3.1832 | 1.0 | | 3.1245 | 28.49 | 10000 | 3.1869 | 1.0 | | 3.1225 | 29.91 | 10500 | 3.1904 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ouyh18/distilbert-base-uncased-finetuned-cola
ouyh18
2022-05-01T03:43:35Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-01T02:34:13Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.5500173690801187
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8456
- Matthews Correlation: 0.5500

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5197        | 1.0   | 535  | 0.5477          | 0.4130               |
| 0.3456        | 2.0   | 1070 | 0.5035          | 0.5239               |
| 0.2342        | 3.0   | 1605 | 0.6100          | 0.5285               |
| 0.1698        | 4.0   | 2140 | 0.7556          | 0.5456               |
| 0.1295        | 5.0   | 2675 | 0.8456          | 0.5500               |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.1+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
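Since the card stops at the training metrics, a minimal inference sketch for the CoLA task is shown below. The label order (0 = unacceptable, 1 = acceptable) follows the GLUE CoLA convention; the card does not state its id2label mapping, so treat that as an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "ouyh18/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# CoLA: judge whether a sentence is linguistically acceptable
sentence = "The book was written by the author quickly."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({"unacceptable": probs[0].item(), "acceptable": probs[1].item()})
```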
charlieoneill/distilbert-base-uncased-finetuned-tweet_eval-offensive
charlieoneill
2022-05-01T03:36:21Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-01T03:22:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-tweet_eval-offensive results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: offensive metrics: - name: Accuracy type: accuracy value: 0.8089123867069486 - name: F1 type: f1 value: 0.8060281168230459 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-tweet_eval-offensive This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.4185 - Accuracy: 0.8089 - F1: 0.8060 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 187 | 0.4259 | 0.8059 | 0.7975 | | 0.46 | 2.0 | 374 | 0.4185 | 0.8089 | 0.8060 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.9.1 - Datasets 2.1.0 - Tokenizers 0.12.1
princeton-nlp/CoFi-MNLI-s95
princeton-nlp
2022-05-01T01:20:45Z
15
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2204.00408", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T21:57:29Z
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset MNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
princeton-nlp/CoFi-MNLI-s60
princeton-nlp
2022-05-01T01:20:27Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2204.00408", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T21:58:04Z
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset MNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
princeton-nlp/CoFi-SST2-s95
princeton-nlp
2022-05-01T01:19:38Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:2204.00408", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-29T21:58:56Z
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset SST-2. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
tahazakir/wav2vec2-base-timit-demo-colab2
tahazakir
2022-04-30T22:54:15Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T20:32:56Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1899 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 8.0486 | 13.89 | 500 | 3.6570 | 1.0 | | 3.2905 | 27.78 | 1000 | 3.1899 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
moaiz237/wav2vec2-base-timit-moaiz_explast
moaiz237
2022-04-30T22:11:49Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T21:18:59Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-moaiz_explast results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-moaiz_explast This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6714 - Wer: 0.5404 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.034 | 13.89 | 500 | 1.0507 | 0.6871 | | 0.6024 | 27.78 | 1000 | 0.6714 | 0.5404 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
LiYuan/amazon-review-sentiment-analysis
LiYuan
2022-04-30T22:03:23Z
4,927
41
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-30T20:37:44Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: amazon-review-sentiment-analysis
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# amazon-review-sentiment-analysis

This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment?text=I+like+you.+I+love+you) on an [Amazon US Customer Reviews Dataset](https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset). The code for the fine-tuning process can be found [here](https://github.com/vanderbilt-data-science/bigdata/blob/main/06-fine-tune-BERT-on-our-dataset.ipynb). This model is uncased: it does not make a difference between english and English.

It achieves the following results on the evaluation set:
- Loss: 0.5202942490577698
- Accuracy: 0.8

## Model description

This is a bert-base-multilingual-uncased model fine-tuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish, and Italian. It predicts the sentiment of a review as a number of stars (between 1 and 5). The model is intended for direct use as a sentiment-analysis model for product reviews in any of the six languages above, or for further fine-tuning on related sentiment-analysis tasks.

We fine-tuned it on our Amazon customer reviews: 17,280 rows for training and 4,320 rows for validation. Finally, we evaluated the model on a held-out test set of 2,400 rows.

## Intended uses & limitations

BERT-base is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. This fine-tuned version is used to predict the review rating (in stars) given the review text. The main limitation is that the model was trained on Amazon reviews and products; if you apply it to other domains, it may perform poorly.

## How to use

You can use this model directly by downloading the trained weights and configurations as in the code snippet below:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LiYuan/amazon-review-sentiment-analysis")

model = AutoModelForSequenceClassification.from_pretrained("LiYuan/amazon-review-sentiment-analysis")
```

## Training and evaluation data

Download the raw [dataset](https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset) from the Kaggle website.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.555400      | 1.0   | 1080 | 0.520294        | 0.800000 |
| 0.424300      | 2.0   | 1080 | 0.549649        | 0.798380 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
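The loading snippet in the "How to use" section stops short of running a prediction; a minimal continuation is sketched below. It reuses the `tokenizer` and `model` objects from that snippet, and it assumes the five output labels are ordered from 1 star to 5 stars (consistent with the rating scheme described above, but not stated explicitly in the card).

```python
import torch

review = "Fast shipping, but the product stopped working after a week."
inputs = tokenizer(review, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

predicted_stars = int(torch.argmax(probs)) + 1  # assumes labels are ordered 1-star .. 5-star
print(f"Predicted rating: {predicted_stars} stars")
print({f"{i + 1} stars": round(p.item(), 3) for i, p in enumerate(probs)})
```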
ChrisZeng/t5-base-detox
ChrisZeng
2022-04-30T21:53:04Z
9
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-30T17:43:42Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-detox
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-base-detox

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2615

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.337         | 1.0   | 135  | 0.4810          |
| 0.5238        | 2.0   | 270  | 0.3886          |
| 0.4301        | 3.0   | 405  | 0.3378          |
| 0.3755        | 4.0   | 540  | 0.3122          |
| 0.3359        | 5.0   | 675  | 0.2910          |
| 0.3068        | 6.0   | 810  | 0.2737          |
| 0.2861        | 7.0   | 945  | 0.2710          |
| 0.2744        | 8.0   | 1080 | 0.2617          |
| 0.2649        | 9.0   | 1215 | 0.2630          |
| 0.2585        | 10.0  | 1350 | 0.2615          |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.12.0.dev20220429
- Datasets 2.1.0
- Tokenizers 0.10.3
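The card gives no usage example; a minimal text-to-text generation sketch is shown below. The card does not document the input format used during fine-tuning (for example, whether a task prefix is expected), so feeding the raw sentence is an assumption, as is the example input itself.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ChrisZeng/t5-base-detox"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; the fine-tuning prompt format is not documented in the card
text = "This is a toxic comment that should be rewritten politely."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```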
hassnain/wav2vec2-base-timit-demo-colab
hassnain
2022-04-30T20:20:34Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-29T14:46:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
sherry7144/wav2vec2-base-timit-demo-colab0
sherry7144
2022-04-30T20:04:12Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T15:52:29Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab0 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0395 - Wer: 0.5635 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3976 | 13.89 | 500 | 0.8616 | 0.5968 | | 0.2637 | 27.78 | 1000 | 0.9973 | 0.5826 | | 0.1794 | 41.67 | 1500 | 1.0395 | 0.5635 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
moaiz237/wav2vec2-base-timit-moaiz_exp2_new
moaiz237
2022-04-30T20:03:49Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T19:19:12Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-moaiz_exp2_new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-moaiz_exp2_new This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6849 - Wer: 0.5396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.1266 | 13.89 | 500 | 1.0233 | 0.7034 | | 0.5928 | 27.78 | 1000 | 0.6849 | 0.5396 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ahmad573/wav2vec2-base-timit-demo-colab2
ahmad573
2022-04-30T19:12:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T15:19:55Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1914 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 700 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3.8196 | 7.04 | 500 | 3.2201 | 1.0 | | 3.1517 | 14.08 | 1000 | 3.1876 | 1.0 | | 3.1493 | 21.13 | 1500 | 3.1837 | 1.0 | | 3.1438 | 28.17 | 2000 | 3.1914 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ali221000262/wav2vec2-base-timit-ali-hasan-colab-EX2
ali221000262
2022-04-30T19:02:59Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T17:42:47Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-ali-hasan-colab-EX2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-ali-hasan-colab-EX2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5087 - Wer: 0.4458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1956 | 13.89 | 500 | 0.5087 | 0.4458 | | 0.1946 | 27.78 | 1000 | 0.5087 | 0.4458 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
julycodes/wav2vec2-base-timit-demo-colab-2
julycodes
2022-04-30T18:57:05Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T15:53:37Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab-2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7429 - Wer: 0.5080 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 900 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.984 | 8.77 | 500 | 0.9028 | 0.7036 | | 0.6412 | 17.54 | 1000 | 0.7275 | 0.5868 | | 0.3073 | 26.32 | 1500 | 0.7429 | 0.5080 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ParanoidAndroid/bert-finetuned-squad
ParanoidAndroid
2022-04-30T18:29:58Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-04-30T18:16:42Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
ali221000262/wav2vec2-base-timit-demo-colab
ali221000262
2022-04-30T18:01:43Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T13:26:28Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [ali221000262/wav2vec2-base-timit-demo-colab](https://huggingface.co/ali221000262/wav2vec2-base-timit-demo-colab) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2161 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.01 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 2.6432 | 13.89 | 500 | 3.2161 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
tahazakir/wav2vec2-base-timit-demo-colab0
tahazakir
2022-04-30T18:01:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T15:37:39Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab0 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8768 - Wer: 0.6089 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.1121 | 13.89 | 500 | 2.9931 | 1.0 | | 1.1475 | 27.78 | 1000 | 0.8768 | 0.6089 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ali221000262/wav2vec2-base-timit-ali-hasan-colab
ali221000262
2022-04-30T17:36:34Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T17:04:44Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-ali-hasan-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-ali-hasan-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2471 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.01 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3.5485 | 13.89 | 500 | 3.2471 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ningkko/drug-stance-bert
ningkko
2022-04-30T17:29:17Z
13
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-17T21:05:00Z
---
tags:
- generated_from_trainer
model-index:
- name: drug-stance-bert
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# drug-stance-bert

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on [COVID-CQ](https://github.com/eceveco/COVID-CQ), a dataset that contains 3-label annotated opinions (negative, neutral, and positive) of the tweet initiators regarding the use of Chloroquine or Hydroxychloroquine for the treatment or prevention of the coronavirus.

## Intended uses & limitations

Predict opinions (negative, neutral, and positive) of tweet initiators regarding the use of a drug for the treatment or prevention of the coronavirus. Note that having multiple drug names with different stances in a single tweet can confuse the model.

## Inference & understanding

We followed COVID-CQ to use the following label representation:
- 0 -> None/Neutral
- 1 -> Against
- 2 -> Favor

Try these examples:
- The gov's killing people by banning Ivm
- Great news cheers everybody:) ivermectin proven to not work by rct lol

## Tutorial

See our Github repo for [inference scripts](https://github.com/ningkko/COVID-drug/blob/main/stance_detection/inference.ipynb)

## Model description

"We developed two COVID-drug-stance RoBERTa-base models by fine-tuning a pre-trained Twitter-specific stance detection model on a stance data set called COVID-CQ. The data were divided into training-dev-test validation datasets with a 70:10:20 ratio. Model I (COVID-drug-stance-BERT) was trained on the original tweet data, and Model II (COVID-drug-stance-BERT-masked) was trained on tweets with drug names masked as "[mask]" for model generalizability on different drugs.

The two models had similar performance on the COVID-19 validation set: COVID-drug-stance-BERT had an accuracy of 86.88%, and the masked model had an accuracy of 86.67%. The two models were then evaluated by predicting tweet initiators' attitudes towards the drug mentioned in each tweet using randomly selected test sets (100 tweets) of each drug (Hydroxychloroquine, Ivermectin, Molnupiravir, Remdesivir). As suggested by the evaluation in Table 2, Model I had better performance and was therefore used in this study".

| **Drug**               |  **Model I: Original Tweet** |             |              | **Model II: Drug Names Masked** |             |              |
|------------------------|:---------------------------:|:-----------:|:------------:|:-------------------------------:|:-----------:|:------------:|
|                        |        **Precision**        |  **Recall** | **F1-Score** |          **Precision**          |  **Recall** | **F1-Score** |
| **Hydroxychloroquine** |             0.93            |     0.92    |   **0.92**   |               0.84              |     0.83    |     0.83     |
| **Ivermectin**         |             0.92            |     0.91    |   **0.91**   |               0.72              |     0.68    |     0.68     |
| **Molnupiravir**       |             0.89            |     0.89    |   **0.89**   |               0.78              |     0.77    |     0.77     |
| **Remdesivir**         |             0.82            |     0.79    |   **0.79**   |               0.70              |     0.66    |     0.66     |

The model uploaded here is Model I.

## Training and evaluation data

COVID-CQ

## Training procedure

See Github

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- Transformers 4.11.0
- Pytorch 1.8.1+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
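As a usage illustration, a minimal scoring sketch following the label convention documented above (0 -> None/Neutral, 1 -> Against, 2 -> Favor). The raw label strings emitted by the pipeline depend on the checkpoint's config, so the `LABEL_i` mapping below is an assumption.

```python
# Hedged sketch: LABEL_0/1/2 -> None/Against/Favor is assumed, not confirmed by the card.
from transformers import pipeline

classifier = pipeline("text-classification", model="ningkko/drug-stance-bert", return_all_scores=True)

id2stance = {"LABEL_0": "None/Neutral", "LABEL_1": "Against", "LABEL_2": "Favor"}
scores = classifier(["The gov's killing people by banning Ivm"])[0]  # all label scores for one tweet
for item in scores:
    print(id2stance.get(item["label"], item["label"]), round(item["score"], 3))
```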
moaiz237/wav2vec2-base-timit-moaiz_exp1
moaiz237
2022-04-30T15:13:12Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T12:17:17Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-moaiz_exp1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-moaiz_exp1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6910 - Wer: 0.5549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.7261 | 13.89 | 500 | 2.4864 | 0.9942 | | 1.0036 | 27.78 | 1000 | 0.6910 | 0.5549 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
maxime7770/model
maxime7770
2022-04-30T15:12:40Z
5
0
transformers
[ "transformers", "tf", "camembert", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-29T11:54:14Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: maxime7770/model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # maxime7770/model This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1211 - Validation Loss: 0.4812 - Epoch: 49 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 650, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.5966 | 1.5898 | 0 | | 1.5577 | 1.5576 | 1 | | 1.5034 | 1.4761 | 2 | | 1.4034 | 1.3538 | 3 | | 1.2864 | 1.2163 | 4 | | 1.1502 | 1.0980 | 5 | | 1.0085 | 0.9988 | 6 | | 0.8828 | 0.9130 | 7 | | 0.7863 | 0.8445 | 8 | | 0.7036 | 0.7871 | 9 | | 0.6322 | 0.7399 | 10 | | 0.5731 | 0.7030 | 11 | | 0.5180 | 0.6714 | 12 | | 0.4757 | 0.6432 | 13 | | 0.4366 | 0.6204 | 14 | | 0.4057 | 0.6006 | 15 | | 0.3743 | 0.5827 | 16 | | 0.3475 | 0.5689 | 17 | | 0.3221 | 0.5577 | 18 | | 0.2971 | 0.5467 | 19 | | 0.2815 | 0.5372 | 20 | | 0.2700 | 0.5297 | 21 | | 0.2521 | 0.5225 | 22 | | 0.2343 | 0.5168 | 23 | | 0.2265 | 0.5117 | 24 | | 0.2143 | 0.5074 | 25 | | 0.2063 | 0.5038 | 26 | | 0.1941 | 0.5001 | 27 | | 0.1843 | 0.4976 | 28 | | 0.1782 | 0.4949 | 29 | | 0.2012 | 0.4938 | 30 | | 0.1691 | 0.4930 | 31 | | 0.1626 | 0.4910 | 32 | | 0.1884 | 0.4886 | 33 | | 0.1547 | 0.4870 | 34 | | 0.1492 | 0.4858 | 35 | | 0.1445 | 0.4850 | 36 | | 0.1415 | 0.4842 | 37 | | 0.1383 | 0.4836 | 38 | | 0.1374 | 0.4832 | 39 | | 0.1336 | 0.4826 | 40 | | 0.1322 | 0.4823 | 41 | | 0.1295 | 0.4820 | 42 | | 0.1268 | 0.4818 | 43 | | 0.1261 | 0.4816 | 44 | | 0.1253 | 0.4815 | 45 | | 0.1275 | 0.4814 | 46 | | 0.1247 | 0.4812 | 47 | | 0.1256 | 0.4812 | 48 | | 0.1211 | 0.4812 | 49 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
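The optimizer entry above can be hard to read as a flattened dict; a sketch of the same settings reconstructed with the Keras API is shown below (illustrative only; the original training script is not part of this card).

```python
# Rebuilds the reported optimizer: Adam with a PolynomialDecay schedule
# from 2e-05 down to 0.0 over 650 steps.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=650,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```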
Davincilee/door_inner
Davincilee
2022-04-30T15:07:38Z
0
1
null
[ "region:us" ]
null
2022-04-30T14:47:04Z
language: - "List of ISO 639-1 code for your language"
Muennighoff/t5-small-finetuned-xsum
Muennighoff
2022-04-30T14:26:40Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-30T14:15:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 28.2881 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4784 - Rouge1: 28.2881 - Rouge2: 7.6834 - Rougel: 22.2163 - Rougelsum: 22.219 - Gen Len: 18.8292 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.7184 | 1.0 | 12753 | 2.4784 | 28.2881 | 7.6834 | 22.2163 | 22.219 | 18.8292 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
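A minimal summarization sketch for this checkpoint (not part of the original card); the article text below is a made-up stand-in, and generation settings are illustrative.

```python
# Hedged usage sketch for the XSum-finetuned T5 checkpoint above.
from transformers import pipeline

summarizer = pipeline("summarization", model="Muennighoff/t5-small-finetuned-xsum")
article = (
    "The local council has approved plans for a new cycle path linking the two towns, "
    "with construction expected to begin next spring and finish within a year."
)
result = summarizer(article, max_length=30, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```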
sameearif88/wav2vec2-base-timit-demo-colab
sameearif88
2022-04-30T13:08:28Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-26T10:31:51Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
adielsa/distilbert-base-uncased-finetuned-cola
adielsa
2022-04-30T12:37:50Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-30T12:16:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5387376669923544 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8256 - Matthews Correlation: 0.5387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5257 | 1.0 | 535 | 0.5286 | 0.4093 | | 0.3447 | 2.0 | 1070 | 0.5061 | 0.4972 | | 0.2303 | 3.0 | 1605 | 0.5878 | 0.5245 | | 0.1761 | 4.0 | 2140 | 0.7969 | 0.5153 | | 0.1346 | 5.0 | 2675 | 0.8256 | 0.5387 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
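For reference, the Matthews correlation reported above can be computed from label/prediction arrays as in the sketch below; the arrays are hypothetical, not taken from the GLUE CoLA validation set.

```python
# Hedged illustration of the Matthews correlation metric used in this card.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical CoLA acceptability labels
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]  # hypothetical model predictions
print(matthews_corrcoef(y_true, y_pred))  # ranges from -1 to 1; 0 is chance level
```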
ai4bharat/MultiIndicSentenceSummarizationSS
ai4bharat
2022-04-30T10:35:01Z
6
1
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "sentence-summarization", "multilingual", "nlp", "indicnlp", "as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te", "dataset:ai4bharat/IndicSentenceSummarization", "arxiv:2203.05437", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-23T17:54:14Z
---
tags:
- sentence-summarization
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicSentenceSummarization
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- mit
widget:
- जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया। <s> <2hi>
---

# MultiIndicSentenceSummarizationSS

This repository contains the [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint finetuned on the 11 languages of [IndicSentenceSummarization](https://huggingface.co/datasets/ai4bharat/IndicSentenceSummarization) dataset. For finetuning details, see the [paper](https://arxiv.org/abs/2203.05437).

<ul>
<li>Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5.</li>
<li>The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for decoding.</li>
<li>Trained on large Indic language corpora (5.53 million sentences).</li>
<li>Unlike <a href="https://huggingface.co/ai4bharat/MultiIndicSentenceSummarization">MultiIndicSentenceSummarization</a> each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari.</li>
</ul>

## Using this model in `transformers`

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS", do_lower_case=False, use_fast=False, keep_accents=True)

# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS")

# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")

# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया। </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids

# For generation. Pardon the messiness. Note the decoder_start_token_id.
model_output=model.generate(inp, use_cache=True, no_repeat_ngram_size=3, num_beams=5, length_penalty=0.8, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))

# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output)  # अनंतनाग में सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादी ढेर
```

## Benchmarks

Scores on the `IndicSentenceSummarization` test sets are as follows:

Language | Rouge-1 / Rouge-2 / Rouge-L
---------|----------------------------
as | 63.56 / 49.90 / 62.57
bn | 52.52 / 36.15 / 50.60
gu | 47.69 / 29.77 / 45.61
hi | 50.43 / 28.13 / 45.15
kn | 77.06 / 69.36 / 76.33
ml | 65.00 / 51.99 / 63.76
mr | 47.05 / 25.97 / 45.52
or | 50.96 / 30.32 / 49.23
pa | 54.95 / 36.26 / 51.26
ta | 58.52 / 38.36 / 56.49
te | 53.75 / 35.17 / 52.66

## Citation

If you use this model, please cite the following paper:

```
@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url = "https://arxiv.org/abs/2203.05437"
}
```
DrishtiSharma/TEST123
DrishtiSharma
2022-04-30T10:24:56Z
0
0
null
[ "tflite", "mixtec", "region:us" ]
null
2022-04-30T10:11:52Z
---
tags:
- mixtec
# See a list of available tags here:
# https://coqui.ai/mixtec/jemeyer/v1.0.0#model-details
# task: Speech-to-Text for the Yoloxóchitl Mixtec Language on 16kHz, mono-channel audio
---

# Model card for Yoloxóchitl Mixtec STT

Jump to section:

- [Model details](#model-details)
- [Intended use](#intended-use)
- [Performance Factors](#performance-factors)
- [Metrics](#metrics)
- [Training data](#training-data)
- [Evaluation data](#evaluation-data)
- [Ethical considerations](#ethical-considerations)
- [Caveats and recommendations](#caveats-and-recommendations)

## Model details

- Person or organization developing model: Originally trained by [Joe Meyer](https://www.linkedin.com/in/joe-meyer-25753951/).
- Model language: Yoloxóchitl Mixtec / / `xty`
- Model date: April 17, 2022
- Model type: `Speech-to-Text`
- Model version: `v0.1.0`
- Compatible with 🐸 STT version: `v1.0.0`
- License: CC BY-NC-SA 3.0
- Citation details: `@techreport{xty-stt, author = {Meyer,Joe}, title = {Yoloxóchitl Mixtec STT 0.1}, institution = {Coqui}, address = {\url{https://github.com/coqui-ai/STT-models}}, year = {2022}, month = {April}, number = {STT-SLR89-XTY-0.1} }`
- Where to send questions or comments about the model: You can leave an issue on [`STT-model` issues](https://github.com/coqui-ai/STT-models/issues), open a new discussion on [`STT-model` discussions](https://github.com/coqui-ai/STT-models/discussions), or chat with us on [Gitter](https://gitter.im/coqui-ai/).

## Intended use

Speech-to-Text for the [Yoloxóchitl Mixtec Language](https://en.wikipedia.org/wiki/Yolox%C3%B3chitl_Mixtec) on 16kHz, mono-channel audio.

## Performance Factors

Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data).

## Metrics

STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.

#### Transcription Accuracy

The following Word Error Rates and Character Error Rates are reported for a modified data set from OpenSLR [SLR89](https://www.openslr.org/89/). The official `validated.tsv` had rows that produced processing errors removed, and the data was re-processed by [Common Voice Utils](https://github.com/ftyers/commonvoice-utils) to convert to 16kHz mono-channel PCM .wav files.

|Test Corpus|WER|CER|
|-----------|---|---|
|OpenSLR|48.85\%|18.04\%|

#### Real-Time Factor

Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF.

Recorded average RTF on laptop CPU: ``

#### Model Size

`model.pbmm`: M
`model.tflite`: M

### Approaches to uncertainty and variability

Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.

## Training data

This model was trained on a modified data set from OpenSLR [SLR89](https://www.openslr.org/89/). The official `validated.tsv` had rows that produced processing errors removed, and the data was re-processed by [Common Voice Utils](https://github.com/ftyers/commonvoice-utils) to convert to 16kHz mono-channel PCM .wav files.

## Evaluation data

This model was evaluated on a modified data set from OpenSLR [SLR89](https://www.openslr.org/89/). The official `validated.tsv` had rows that produced processing errors removed, and the data was re-processed by [Common Voice Utils](https://github.com/ftyers/commonvoice-utils) to convert to 16kHz mono-channel PCM .wav files.

## Ethical considerations

Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

### Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

### Surveillance

Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

## Caveats and recommendations

Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data). In most applications, it is recommended that you [train your own language model](https://stt.readthedocs.io/en/latest/LANGUAGE_MODEL.html) to improve transcription accuracy on your speech data.
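As an illustration of the WER/CER metrics reported above, a small sketch using the `jiwer` package on made-up strings (not the actual SLR89 transcripts):

```python
# Hedged example: the reference and hypothesis strings are invented placeholders.
import jiwer

reference = "example reference transcript for one utterance"
hypothesis = "example reference transcript for utterance"
print("WER:", jiwer.wer(reference, hypothesis))  # word-level error rate
print("CER:", jiwer.cer(reference, hypothesis))  # character-level error rate
```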
moaiz237/wav2vec2-base-timit-demo-colab
moaiz237
2022-04-30T07:51:57Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-30T00:22:12Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4769 - Wer: 0.4305 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.2022 | 13.89 | 500 | 2.9267 | 0.9995 | | 0.834 | 27.78 | 1000 | 0.4769 | 0.4305 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
huggingtweets/itstomrobinson
huggingtweets
2022-04-30T07:06:15Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-04-30T06:45:28Z
--- language: en thumbnail: http://www.huggingtweets.com/itstomrobinson/1651302371165/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1388470365723168770/irz46Ykl_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tom Robinson</div> <div style="text-align: center; font-size: 14px;">@itstomrobinson</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Tom Robinson. | Data | Tom Robinson | | --- | --- | | Tweets downloaded | 733 | | Retweets | 40 | | Short tweets | 52 | | Tweets kept | 641 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bluc7sk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itstomrobinson's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ryc26oz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ryc26oz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/itstomrobinson') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
dropout05/t5-realnewslike-super-tiny
dropout05
2022-04-30T01:35:53Z
4
1
transformers
[ "transformers", "jax", "t5", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-14T01:34:38Z
---
license: apache-2.0
---

**Don't use this model for any applied task. It is too small to be practically useful. It is just a part of a weird research project.**

An extremely small version of T5 with these parameters

```python
"d_ff": 1024,
"d_kv": 64,
"d_model": 256,
"num_heads": 4,
"num_layers": 1,  # yes, just one layer
```

The model was pre-trained on the `realnewslike` subset of C4 for 1 epoch with sequence length `64`.

Corresponding WandB run: [click](https://wandb.ai/guitaricet/t5-lm/runs/2yvuxsfz?workspace=user-guitaricet).
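For a sense of scale, a minimal sketch (not from the original card) that instantiates an untrained model with the configuration above; the released checkpoint is a JAX/Flax artifact, so this PyTorch construction is illustrative only.

```python
# Hedged sketch: builds a fresh (random-weight) model with the listed config values.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(d_ff=1024, d_kv=64, d_model=256, num_heads=4, num_layers=1)
model = T5ForConditionalGeneration(config)
print(sum(p.numel() for p in model.parameters()))  # parameter count; embeddings dominate at this size
```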
tonydiana1/distilroberta-base-finetuned-wikitext2
tonydiana1
2022-04-30T01:23:18Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-04-30T01:01:59Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8347 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0853 | 1.0 | 2406 | 1.9214 | | 1.986 | 2.0 | 4812 | 1.8799 | | 1.9568 | 3.0 | 7218 | 1.8202 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
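A minimal fill-mask sketch for this checkpoint (not part of the original card); the example sentence is a made-up placeholder.

```python
# Hedged usage sketch: RoBERTa-style models use the <mask> token.
from transformers import pipeline

fill = pipeline("fill-mask", model="tonydiana1/distilroberta-base-finetuned-wikitext2")
for candidate in fill("The book was written by a famous <mask>."):
    print(candidate["token_str"], round(candidate["score"], 3))
```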
Siddhart/t5-small-finetuned-xsum
Siddhart
2022-04-30T00:04:50Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-29T23:51:32Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 23 | 2.7230 | 33.2094 | 14.0331 | 28.4433 | 29.4644 | 18.8947 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
stas/tiny-m2m_100
stas
2022-04-29T23:57:25Z
1,370
0
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "testing", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-29T23:50:29Z
--- language: - en thumbnail: tags: - testing license: apache-2.0 --- # Tiny M2M100 model This is a tiny model that is used in the `transformers` test suite. It doesn't do anything useful beyond functional testing. Do not try to use it for anything that requires quality. The model is indeed 4MB in size. You can see how it was created [here](https://huggingface.co/stas/tiny-m2m_100/blob/main/m2m-make-tiny-model.py) If you're looking for the real model, please go to [https://huggingface.co/facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M).
csikasote/xlsr-53-bemba-5hrs
csikasote
2022-04-29T23:40:17Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-04-29T21:24:54Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: xlsr-53-bemba-5hrs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlsr-53-bemba-5hrs This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3414 - Wer: 0.4867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.2701 | 2.16 | 400 | 0.4047 | 0.6230 | | 0.488 | 4.32 | 800 | 0.3002 | 0.4917 | | 0.2807 | 6.49 | 1200 | 0.3342 | 0.4802 | | 0.1696 | 8.65 | 1600 | 0.3414 | 0.4867 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
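One plausible way the hyperparameters listed above translate into `transformers.TrainingArguments` is sketched below; the original training script is not included in this card, so the output directory and the exact argument names used there are assumptions.

```python
# Hedged reconstruction of the reported settings (not the author's actual script).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlsr-53-bemba-5hrs",   # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # gives the reported total train batch size of 16
    warmup_steps=400,
    num_train_epochs=10,
    fp16=True,                         # "Native AMP" mixed precision
    seed=42,
)
```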
Percival/finetuning-sentiment-model-3000-samples
Percival
2022-04-29T22:52:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-04-29T22:34:49Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
doc2query/msmarco-vietnamese-mt5-base-v1
doc2query
2022-04-29T22:06:03Z
18
4
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "vi", "dataset:unicamp-dl/mmarco", "arxiv:1904.08375", "arxiv:2104.08663", "arxiv:2112.07577", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-04-29T22:05:47Z
---
language: vi
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (phát âm tiếng Anh: /ˈpaɪθɑːn/) là một ngôn ngữ lập trình bậc cao cho các mục đích lập trình đa năng, do Guido van Rossum tạo ra và lần đầu ra mắt vào năm 1991. Python được thiết kế với ưu điểm mạnh là dễ đọc, dễ học và dễ nhớ. Python là ngôn ngữ có hình thức rất sáng sủa, cấu trúc rõ ràng, thuận tiện cho người mới học lập trình và là ngôn ngữ lập trình dễ học; được dùng rộng rãi trong phát triển trí tuệ nhân tạo. Cấu trúc của Python còn cho phép người sử dụng viết mã lệnh với số lần gõ phím tối thiểu. Vào tháng 7 năm 2018, van Rossum đã từ chức lãnh đạo trong cộng đồng ngôn ngữ Python sau 30 năm làm việc."
license: apache-2.0
---

# doc2query/msmarco-vietnamese-mt5-base-v1

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).

It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generated queries contain synonyms. Further, it re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-vietnamese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Python (phát âm tiếng Anh: /ˈpaɪθɑːn/) là một ngôn ngữ lập trình bậc cao cho các mục đích lập trình đa năng, do Guido van Rossum tạo ra và lần đầu ra mắt vào năm 1991. Python được thiết kế với ưu điểm mạnh là dễ đọc, dễ học và dễ nhớ. Python là ngôn ngữ có hình thức rất sáng sủa, cấu trúc rõ ràng, thuận tiện cho người mới học lập trình và là ngôn ngữ lập trình dễ học; được dùng rộng rãi trong phát triển trí tuệ nhân tạo. Cấu trúc của Python còn cho phép người sử dụng viết mã lệnh với số lần gõ phím tối thiểu. Vào tháng 7 năm 2018, van Rossum đã từ chức lãnh đạo trong cộng đồng ngôn ngữ Python sau 30 năm làm việc."


def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')


create_queries(text)
```

**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.

## Training

This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.

The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.

This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
espnet/turkish_commonvoice_blstm
espnet
2022-04-29T21:33:48Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "tr", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-04-29T21:32:59Z
--- tags: - espnet - audio - automatic-speech-recognition language: tr datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/turkish_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/turkish_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sat Apr 16 17:16:06 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b` - Commit date: `Mon Apr 4 21:04:45 2022 -0400` ## asr_tr_50_epoch_lr_0.1 ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_tr|8339|43647|78.5|19.6|2.0|1.6|23.1|50.9| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_tr|8339|306849|94.3|3.2|2.5|1.1|6.8|50.9| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.ave/test_tr|8339|203431|91.0|5.8|3.2|1.3|10.3|50.6| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn_tr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_tr_50_epoch_lr_0.1 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 16 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_tr_bpe150_sp/train/speech_shape - exp/asr_stats_raw_tr_bpe150_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_tr_bpe150_sp/valid/speech_shape - exp/asr_stats_raw_tr_bpe150_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_tr_sp/wav.scp - speech - sound - - dump/raw/train_tr_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_tr/wav.scp - speech - sound - - 
dump/raw/dev_tr/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ▁ - R - K - E - . - I - N - L - ı - A - M - T - U - Y - S - Z - ş - ü - O - ▁A - ç - DI - MA - IN - ▁BU - LA - ',' - H - RA - LAR - ▁BIR - DE - ME - ö - '?' - Dı - DA - AN - ▁KA - LI - LER - F - LE - EN - P - B - V - DU - YE - UN - ▁G - TE - ▁BE - BI - YA - KI - Tı - BA - ▁OL - TI - ▁DE - ▁HA - ▁YA - ıN - AR - IM - Sı - D - Lı - ER - C - ▁S - NA - üN - IYOR - ▁NE - ▁I - ▁O - ▁SA - ▁" - ▁DA - SI - G - ▁P - TA - ▁SE - ▁VE - KA - '''' - UM - DEN - ▁GE - Dü - ." - ıYOR - ▁TA - '!' - CE - VA - ▁HE - UZ - GI - ıNDA - ıNı - ▁MI - LAN - ▁BAş - ▁ON - CA - İ - DAN - SIN - '...' - ▁DO - ▁GöR - ▁KO - ▁VAR - ACAK - ▁GEL - ▁YAP - ▁SON - ▁ET - ▁IKI - Ç - Ş - '"' - J - Ö - ':' - â - Ü - ; - '-' - W - X - ’ - ” - ‘ - î - ë - Q - ( - Â - û - “ - ) - ğ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/tr_token_list/bpe_unigram150/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_tr_bpe150_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
espnet/french_commonvoice_blstm
espnet
2022-04-29T21:22:54Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "fr", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-04-29T21:22:08Z
--- tags: - espnet - audio - automatic-speech-recognition language: fr datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/french_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/french_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Fri Apr 29 17:20:37 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `716eb8f92e19708acfd08ba3bd39d40890d3a84b` - Commit date: `Thu Apr 28 19:50:59 2022 -0400` ## asr_train_asr_rnn_raw_fr_bpe350_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_fr|15621|151227|75.1|22.6|2.3|2.3|27.2|81.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_fr|15621|952803|92.9|3.6|3.5|2.0|9.1|81.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_fr|15621|730898|89.9|6.5|3.6|1.9|12.0|81.0| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_rnn_raw_fr_bpe350_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 15 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 30 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_fr_bpe350_sp/train/speech_shape - exp/asr_stats_raw_fr_bpe350_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_fr_bpe350_sp/valid/speech_shape - exp/asr_stats_raw_fr_bpe350_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_fr_sp/wav.scp - speech - sound - - dump/raw/train_fr_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev_fr/wav.scp - 
speech - sound - - dump/raw/dev_fr/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - S - ▁ - E - I - T - A - U - O - . - L - R - é - P - C - V - 'ON' - M - ▁DE - ',' - N - ▁S - D - IN - '''' - OU - ▁D - G - IS - ▁P - ER - ▁C - ▁L - ▁LA - B - ▁" - ▁A - RE - AN - ." - ▁M - ▁F - '-' - F - ▁T - ES - ENT - ▁LE - EN - IT - LE - ▁N - è - H - ’ - Y - X - Z - K - J - ê - '?' - '!' - É - ç - W - à - ô - â - Q - î - À - '"' - œ - û - ù - ï - ':' - ; - — - È - « - » - Ç - Ê - ë - á - ü - í - ö - ó - ) - Î -  - ō - ä - – - Ô - ć - š - '&' - ñ - '=' - ł - č - Û - ú - ū - ø - ā - ã - ă - / - ń - _ - ș - å - æ - ° - ß - “ - ” - ž - ı - Œ - Ö - ř - Š - ý - Ō - ‘ - ş - · - o - ę - ÿ - Å - ą - ð - ī - ò - ż - ě - ś - '`' - Ë - ì - ē - ğ - İ - '*' - Í - ė - Ó - ő - đ - ʻ - Ü - õ - Ä - ņ - ṣ - '|' - ʾ - π - Ā - σ - '%' - ả - κ - ʼ - ň - Ú - ļ - ư - '1' - '2' - '}' - ĩ - Ҫ - ا - ầ - ⁄ - ṇ - þ - ǎ - ο - ′ - s - § - ľ - ǹ - Ʉ - ː - ̱ - γ - ν - ن - ạ - ễ - ộ - ≥ - 星 - ề - ṯ - τ - δ - Δ - Ț - Ș - Ū - Ř - ∆ - → - ệ - Г - ơ - ţ - Þ - Ñ - ± - ť - ŏ - € - „ - ʿ - Ć - £ - α - Ż - Ş - β - ź - Đ - Ø - Ś - Ž - Æ - $ - Ï - Ł - ț - Č - Á - ́ - Ù - Μ - ι - ρ - ό - И - з - 京 - 北 - ď - Ġ - Ṭ - − - ☉ - '~' - ® - Ì - Ò - Õ - × - ħ - ĺ - Ľ - ũ - ů - Ų - ǃ - ǔ - ̠ - ̲ - Κ - Π - ε - ζ - μ - ς - υ - ψ - І - Ј - А - Е - П - а - е - м - н - Գ - Զ - ب - د - ر - ل - و - ي - ወ - ደ - ḍ - ṅ - ṭ - ậ - ắ - ẵ - ị - ồ - ờ - ợ - ủ - ‐ - ― - † - ‹ - › - ₽ - ∈ - ∞ - ─ - い - う - た - つ - へ - ま - め - や - ゔ - 扬 - 术 - 美 - 貴 - 青 - 馆 - Ꝑ - ̐ - Ω - ử - ỳ - ∨ - 乃 - 杜 - ( - Ē - ǫ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/fr_token_list/bpe_unigram350/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_fr_bpe350_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex 
@misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
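For readers unfamiliar with the scoring tables above: the `Err` column is the sum of the substitution, deletion, and insertion percentages, and `Corr` plus `Sub` plus `Del` accounts for all reference words. A quick back-of-the-envelope check against the French WER row (not part of the ESPnet scoring scripts):

```python
# Err (%) = Sub (%) + Del (%) + Ins (%), checked against the test_fr WER row.
sub, dele, ins = 22.6, 2.3, 2.3
err = sub + dele + ins
print(round(err, 1))  # 27.2, matching the reported WER
```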
timhbach/Team_Gryffindor_NER
timhbach
2022-04-29T21:13:30Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-04-11T07:08:50Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: Team_Gryffindor_NER results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Team-Gryffindor-distilbert-base-finetuned-NER-creditcardcontract-100epoch This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the Credit card agreement dataset. It achieves the following results on the evaluation set: - Loss: 0.0470 - Precision: 0.7319 - Recall: 0.7064 - F1: 0.7190 - Accuracy: 0.9920 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 11 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0113 | 0.33 | 500 | 0.0443 | 0.6547 | 0.7028 | 0.6779 | 0.9908 | | 0.0118 | 0.67 | 1000 | 0.0435 | 0.7207 | 0.6440 | 0.6802 | 0.9916 | | 0.013 | 1.0 | 1500 | 0.0449 | 0.7113 | 0.6826 | 0.6966 | 0.9918 | | 0.0113 | 1.34 | 2000 | 0.0434 | 0.7213 | 0.6697 | 0.6946 | 0.9915 | | 0.0121 | 1.67 | 2500 | 0.0467 | 0.6955 | 0.6789 | 0.6871 | 0.9914 | | 0.0125 | 2.01 | 3000 | 0.0417 | 0.7095 | 0.6991 | 0.7043 | 0.9920 | | 0.0106 | 2.34 | 3500 | 0.0437 | 0.7191 | 0.6624 | 0.6896 | 0.9918 | | 0.0114 | 2.68 | 4000 | 0.0468 | 0.7165 | 0.6679 | 0.6914 | 0.9920 | | 0.0125 | 3.01 | 4500 | 0.0431 | 0.6888 | 0.6862 | 0.6875 | 0.9917 | | 0.0107 | 3.35 | 5000 | 0.0446 | 0.7184 | 0.6459 | 0.6802 | 0.9913 | | 0.0096 | 3.68 | 5500 | 0.0485 | 0.6926 | 0.6532 | 0.6723 | 0.9912 | | 0.013 | 4.02 | 6000 | 0.0448 | 0.6134 | 0.6697 | 0.6404 | 0.9907 | | 0.0102 | 4.35 | 6500 | 0.0497 | 0.6895 | 0.6642 | 0.6766 | 0.9913 | | 0.0112 | 4.69 | 7000 | 0.0464 | 0.6759 | 0.6697 | 0.6728 | 0.9910 | | 0.0117 | 5.02 | 7500 | 0.0484 | 0.7451 | 0.6275 | 0.6813 | 0.9916 | | 0.0114 | 5.36 | 8000 | 0.0411 | 0.7086 | 0.6826 | 0.6953 | 0.9919 | | 0.0108 | 5.69 | 8500 | 0.0443 | 0.7041 | 0.6679 | 0.6855 | 0.9916 | | 0.0109 | 6.03 | 9000 | 0.0470 | 0.7228 | 0.6697 | 0.6952 | 0.9916 | | 0.0099 | 6.36 | 9500 | 0.0471 | 0.7253 | 0.6881 | 0.7062 | 0.9913 | | 0.0103 | 6.7 | 10000 | 0.0430 | 0.6986 | 0.7101 | 0.7043 | 0.9914 | | 0.0117 | 7.03 | 10500 | 0.0462 | 0.7327 | 0.6991 | 0.7155 | 0.9918 | | 0.0098 | 7.37 | 11000 | 0.0483 | 0.6910 | 0.6771 | 0.6840 | 0.9914 | | 0.0107 | 7.7 | 11500 | 0.0468 | 0.7189 | 0.6899 | 0.7041 | 0.9916 | | 0.0119 | 8.04 | 12000 | 0.0434 | 0.6970 | 0.6881 | 0.6925 | 0.9918 | | 0.0112 | 8.37 | 12500 | 0.0469 | 0.7007 | 0.6917 | 0.6962 | 0.9918 | | 0.011 | 8.71 | 13000 | 0.0469 | 0.6736 | 0.6514 | 0.6623 | 0.9914 | | 0.0101 | 9.04 | 13500 | 0.0451 | 0.6691 | 0.6606 | 0.6648 | 0.9913 | | 0.0099 | 9.38 | 14000 | 0.0462 | 0.7006 | 0.6826 | 0.6914 | 0.9918 | | 0.0107 | 9.71 | 14500 | 0.0444 | 0.6840 | 0.6752 | 0.6796 | 0.9915 | | 0.0118 | 10.05 | 15000 | 0.0457 | 0.7015 | 0.6771 | 0.6891 | 0.9918 | | 0.0102 | 10.38 | 15500 | 0.0500 | 0.7413 | 0.6679 | 0.7027 | 0.9919 | | 0.0107 | 
10.72 | 16000 | 0.0470 | 0.7319 | 0.7064 | 0.7190 | 0.9920 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
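Since the card does not yet include a usage example, the sketch below shows how a fine-tuned DistilBERT token-classification checkpoint like this one is typically loaded with the `transformers` pipeline API. The example sentence is invented, and the entity labels returned depend on this model's NER tag set, which is not documented in the card.

```python
# Hedged usage sketch; the example sentence is invented and the label set
# returned depends on this model's (undocumented) NER tag scheme.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="timhbach/Team_Gryffindor_NER",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("The annual percentage rate for purchases is 19.99 percent."))
```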
espnet/german_commonvoice_blstm
espnet
2022-04-29T21:11:06Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "de", "dataset:commonvoice", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-04-05T01:07:06Z
--- tags: - espnet - audio - automatic-speech-recognition language: de datasets: - commonvoice license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/german_commonvoice_blstm` This model was trained by dzeinali using commonvoice recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b pip install -e . cd egs2/commonvoice/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/german_commonvoice_blstm ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Apr 4 16:41:54 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `fa1b865352475b744c37f70440de1cc6b257ba70` - Commit date: `Wed Feb 16 16:42:36 2022 -0500` ## asr_de_blstm_specaug_num_time_mask_2_lr_0.1 ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_de|15341|137512|80.0|18.0|2.0|2.5|22.5|69.9| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_de|15341|959619|94.6|3.0|2.3|1.5|6.8|69.9| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_rnn_asr_model_valid.acc.best/test_de|15341|974965|94.7|3.0|2.3|1.5|6.7|69.9| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_rnn.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_de_blstm_specaug_num_time_mask_2_lr_0.1 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 15 patience: 3 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - loss - min - - valid - loss - min - - train - acc - max - - valid - acc - max keep_nbest_models: - 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 30 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_de_bpe204_sp/train/speech_shape - exp/asr_stats_raw_de_bpe204_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_de_bpe204_sp/valid/speech_shape - exp/asr_stats_raw_de_bpe204_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_de_sp/wav.scp - speech - sound - - dump/raw/train_de_sp/text - text - text valid_data_path_and_name_and_type: - - 
dump/raw/dev_de/wav.scp - speech - sound - - dump/raw/dev_de/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adadelta optim_conf: lr: 0.1 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ▁ - T - S - E - I - R - M - A - N - L - U - D - . - O - H - B - G - F - Z - K - P - ü - W - ',' - ä - V - ö - J - '?' - ß - '-' - Y - C - '!' - '"' - X - Q - “ - Ä - Ö - '''' - ':' - ’ - – - é - ; - í - á - ó - ō - ã - š - » - « - ú - ‘ - ł - ş - ă - ř - ʻ - '&' - à - ø - č - ı - É - ý - â - ô - ū - ñ - ā - ë - ž - '@' - / - ʿ - ě - ī - ” - ə - å - ń - ′ - æ - ň - ś - ð - ą - ė - Œ - Ç - ( - ) - ò - đ - î - '=' - − - ů - Ú - и - ġ - а - ę - › - ṣ - '`' - ì - õ - ď - ť - ả - — - ‹ - œ - ő - û - ế - ф - р - о - м - е - в - С - Ḫ - ź - Î - Æ - Ż - Ś - ï - Ó - Ř - ğ - Ł - İ - Đ - Ž - Ş - ț - ê - Á - Ō - ́ - Š - Č - ć - ‚ - ș - „ - + - Ø - μ - ‐ - $ - '[' - ']' - ¡ -  - Í - Ô - ù - ē - Ħ - Ī - ņ - ŏ - ż - ǐ - О - Ш - к - ч - ш - ་ - ན - ṟ - ṭ - ạ - ắ - ễ - ộ - ‟ - ≡ - ⟨ - ⟩ - カ - 临 - 孙 - 尣 - 支 - 無 - 臣 - → - À - 道 - Ü - Þ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.5 use_preprocessor: true token_type: bpe bpemodel: data/de_token_list/bpe_unigram204/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_de_bpe204_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: vgg_rnn encoder_conf: rnn_type: lstm bidirectional: true use_projection: true num_layers: 4 hidden_size: 1024 output_size: 1024 postencoder: null postencoder_conf: {} decoder: rnn decoder_conf: num_layers: 2 hidden_size: 1024 sampling_probability: 0 att_conf: atype: location adim: 1024 aconv_chans: 10 aconv_filts: 100 required: - output_dir - token_list version: 0.10.6a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
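The `run.sh` demo above runs the recipe end to end. If only the packaged configuration and weights are needed, the `espnet_model_zoo` downloader can fetch them directly; the following is a sketch under the assumption that `espnet_model_zoo` is installed, and the returned file keys are indicative rather than guaranteed.

```python
# Fetch the packaged config and checkpoint without re-running the recipe
# (assumes the espnet_model_zoo package is installed).
from espnet_model_zoo.downloader import ModelDownloader

d = ModelDownloader()
files = d.download_and_unpack("espnet/german_commonvoice_blstm")
print(files)  # dict of local paths, e.g. the training config and model file
```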
umarkhalid96/t5-small-trainings
umarkhalid96
2022-04-29T18:36:13Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-04-29T18:27:40Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: t5-small-trainings results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-trainings This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2580 - Rouge1: 41.5251 - Rouge2: 19.8842 - Rougel: 36.4895 - Rougelsum: 37.2565 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 3.1338 | 1.0 | 51 | 2.5825 | 35.4169 | 15.379 | 30.8859 | 31.524 | | 2.5905 | 2.0 | 102 | 2.3975 | 38.4266 | 17.2571 | 33.5912 | 34.312 | | 2.3881 | 3.0 | 153 | 2.3329 | 39.8082 | 19.1925 | 34.8269 | 35.5295 | | 2.3167 | 4.0 | 204 | 2.2938 | 41.3488 | 20.1513 | 35.6879 | 36.5864 | | 2.2357 | 5.0 | 255 | 2.2727 | 41.2457 | 19.5358 | 36.0033 | 36.8405 | | 2.232 | 6.0 | 306 | 2.2645 | 41.2746 | 20.0345 | 35.9226 | 36.7001 | | 2.1986 | 7.0 | 357 | 2.2595 | 41.7542 | 19.9428 | 36.6819 | 37.4718 | | 2.1457 | 8.0 | 408 | 2.2580 | 41.5251 | 19.8842 | 36.4895 | 37.2565 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
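The card does not include a usage snippet, so here is a hedged sketch of running the checkpoint through the `transformers` summarization pipeline. The input paragraph is invented and the generation settings are illustrative, not taken from this card.

```python
# Illustrative only: the input text is invented and the generation settings
# are not taken from this card.
from transformers import pipeline

summarizer = pipeline("summarization", model="umarkhalid96/t5-small-trainings")

text = (
    "The meeting covered quarterly revenue, a delayed product launch, and a "
    "plan to hire two additional engineers before the end of the year."
)
print(summarizer(text, max_length=40, min_length=5, do_sample=False))
```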