Dataset schema (each record below lists these fields in order):

| Field | Type | Range / values |
|---|---|---|
| `modelId` | string | length 4–112 |
| `sha` | string | length 40–40 |
| `lastModified` | string | length 24–24 |
| `tags` | list | — |
| `pipeline_tag` | string (categorical) | 29 values |
| `private` | bool | 1 class |
| `author` | string | length 2–38 |
| `config` | null | — |
| `id` | string | length 4–112 |
| `downloads` | float64 | 0–36.8M |
| `likes` | float64 | 0–712 |
| `library_name` | string (categorical) | 17 values |
| `__index_level_0__` | int64 | 0–38.5k |
| `readme` | string | length 0–186k |
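As a rough sketch of how a dump like this can be explored programmatically — assuming the records below have been exported to a Parquet file (the file name `models.parquet` is hypothetical, not part of this dataset):

```python
import pandas as pd

# Hypothetical export of the records below; the file name is an assumption.
df = pd.read_parquet("models.parquet")

# Most-downloaded text-classification models by this snapshot's counts.
top = (
    df[df["pipeline_tag"] == "text-classification"]
    .sort_values("downloads", ascending=False)
    .head(10)[["modelId", "downloads", "likes", "library_name"]]
)
print(top)

# Share of rows whose README is just the "Entry not found" placeholder.
print((df["readme"] == "Entry not found").mean())
```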
TehranNLP-org/bert-base-uncased-avg-sst2-2e-5-63
ed71b5008fc38974652a46f32e52084423957079
2021-08-01T08:43:57.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/bert-base-uncased-avg-sst2-2e-5-63
5
null
transformers
16,300
Entry not found
TehranNLP-org/bert-base-uncased-cls-hatexplain
29993d30bb3c9d06f8e28ca8b56d5088f8290121
2022-05-02T14:26:26.000Z
[ "pytorch", "tf", "bert", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/bert-base-uncased-cls-hatexplain
5
null
transformers
16,301
Entry not found
TehranNLP-org/bert-base-uncased-qqp-2e-5-42
830e2f5dd4e063172c98668d951c74f9eb4d3eef
2021-08-20T05:11:28.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/bert-base-uncased-qqp-2e-5-42
5
null
transformers
16,302
Entry not found
TehranNLP-org/electra-base-avg-cola-2e-5-21
70c82c2b6168d8f233526e332d3ab9b79a533b63
2021-07-23T19:00:00.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/electra-base-avg-cola-2e-5-21
5
null
transformers
16,303
Entry not found
TehranNLP-org/electra-base-avg-cola-2e-5-42
4b0c88fee4f30d56380e1de0c24e417eff5cce92
2021-07-23T19:22:45.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/electra-base-avg-cola-2e-5-42
5
null
transformers
16,304
Entry not found
TehranNLP-org/electra-base-avg-cola
627b64187f985d6363a120deffdcf885db684a2d
2021-06-27T21:07:40.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/electra-base-avg-cola
5
null
transformers
16,305
The uploaded model is from epoch 9, with a Matthews correlation of 66.77.

Final trainer state: "best_metric": 0.667660908939119, "best_model_checkpoint": "/content/output_dir/checkpoint-2412", "epoch": 10.0, "global_step": 2680, "is_hyper_param_search": false, "is_local_process_zero": true, "is_world_process_zero": true, "max_steps": 2680, "num_train_epochs": 10, "total_flos": 7189983634007040.0, "trial_name": null, "trial_params": null

| epoch | eval_loss | eval_matthews_correlation | eval_runtime | eval_samples_per_second | eval_steps_per_second | step | learning_rate | loss |
|------:|----------:|--------------------------:|-------------:|------------------------:|----------------------:|-----:|--------------:|-----:|
| 1 | 0.5115634202957153 | 0.5385290213636863 | 7.985 | 130.62 | 16.406 | 268 | 0.00009280492497114274 | 0.4622 |
| 2 | 0.4201788902282715 | 0.6035894895952164 | 8.0283 | 129.916 | 16.317 | 536 | 0.00008249326664101577 | 0.2823 |
| 3 | 0.580650806427002 | 0.5574138665741355 | 8.1314 | 128.268 | 16.11 | 804 | 0.00007218160831088881 | 0.1804 |
| 4 | 0.4439031779766083 | 0.6557697896854868 | 8.1435 | 128.078 | 16.087 | 1072 | 0.00006186994998076183 | 0.1357 |
| 5 | 0.5736830830574036 | 0.6249925495853809 | 8.0533 | 129.512 | 16.267 | 1340 | 0.00005155829165063486 | 0.0913 |
| 6 | 0.7729296684265137 | 0.6188970025554703 | 8.081 | 129.068 | 16.211 | 1608 | 0.000041246633320507885 | 0.065 |
| 7 | 0.7351673245429993 | 0.6405767700619004 | 8.1372 | 128.176 | 16.099 | 1876 | 0.00003093497499038092 | 0.0433 |
| 8 | 0.7900031208992004 | 0.6565021466238845 | 8.1095 | 128.615 | 16.154 | 2144 | 0.000020623316660253942 | 0.0199 |
| 9 | 0.8539554476737976 | 0.667660908939119 | 8.1204 | 128.442 | 16.132 | 2412 | 0.000010311658330126971 | 0.0114 |
| 10 | 0.9261117577552795 | 0.660301076782038 | 8.0088 | 130.231 | 16.357 | 2680 | 0 | 0.0066 |
TehranNLP-org/electra-base-avg-mnli-2e-5-63
10d567724a17c7c077ecb11ae14f38dfa9381de3
2021-07-22T08:47:05.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/electra-base-avg-mnli-2e-5-63
5
null
transformers
16,306
Entry not found
TehranNLP-org/electra-base-avg-mnli-2e-5
d2d0b134a1f6193fa297ea6edfb9de5de1d65525
2021-07-09T13:14:53.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/electra-base-avg-mnli-2e-5
5
null
transformers
16,307
Entry not found
TehranNLP-org/electra-base-avg-sst2-2e-5-42
c43290699f62e6876bcf270546530f60b2ab2bb1
2021-07-31T15:00:19.000Z
[ "pytorch", "electra", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/electra-base-avg-sst2-2e-5-42
5
null
transformers
16,308
Entry not found
TehranNLP-org/roberta-base-mrpc-2e-5-42
74bf5db65a0d331399d123ae92c8183680f54b61
2021-08-18T18:39:16.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/roberta-base-mrpc-2e-5-42
5
null
transformers
16,309
Entry not found
TehranNLP-org/xlnet-base-cased-avg-cola-2e-5-42
708878ada9f3087628f8a158b0dc8e4573e90f23
2021-07-23T14:33:01.000Z
[ "pytorch", "xlnet", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/xlnet-base-cased-avg-cola-2e-5-42
5
null
transformers
16,310
Entry not found
TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5-21
8b7f89d80160e805d971c829898452b07dc9bc10
2021-07-21T18:25:47.000Z
[ "pytorch", "xlnet", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5-21
5
null
transformers
16,311
Entry not found
TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5-63
485315e8478910d09c77fbee2836a3df05d9e33e
2021-07-22T17:29:57.000Z
[ "pytorch", "xlnet", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5-63
5
null
transformers
16,312
Entry not found
TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5
2902ccdcdb0a9442f4a003265050091cae8cff7a
2021-07-09T12:17:44.000Z
[ "pytorch", "xlnet", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5
5
null
transformers
16,313
Entry not found
TehranNLP-org/xlnet-base-cased-avg-mnli
64d4525b96435622896a858e799e37653e5e8e06
2021-07-06T18:34:10.000Z
[ "pytorch", "xlnet", "text-classification", "transformers" ]
text-classification
false
TehranNLP-org
null
TehranNLP-org/xlnet-base-cased-avg-mnli
5
null
transformers
16,314
Entry not found
Tejas3/distillbert_base_uncased_80
84d682bf851a51585050a879a885c9d9a3362923
2021-07-06T12:26:22.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
Tejas3
null
Tejas3/distillbert_base_uncased_80
5
null
transformers
16,315
Entry not found
TomO/xlm-roberta-base-finetuned-marc-en
dd5b669d7079a87ba5ccc1df5fa147682f93a0e4
2021-12-16T14:31:13.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "text-classification", "dataset:amazon_reviews_multi", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
TomO
null
TomO/xlm-roberta-base-finetuned-marc-en
5
null
transformers
16,316
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9237 - Mae: 0.5122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1089 | 1.0 | 235 | 0.9380 | 0.4878 | | 0.9546 | 2.0 | 470 | 0.9237 | 0.5122 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
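The card above omits a usage example; a minimal inference sketch follows, assuming the model's output labels map to 1–5 star ratings as in the amazon_reviews_multi setup (that mapping is an assumption, not stated in the card):

```python
from transformers import pipeline

# Star-rating predictor fine-tuned on amazon_reviews_multi;
# the label-to-stars mapping is assumed, not confirmed by the card.
rater = pipeline(
    "text-classification",
    model="TomO/xlm-roberta-base-finetuned-marc-en",
)
print(rater("I absolutely love this product, works perfectly!"))
print(rater("Broke after two days. Very disappointed."))
```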
TransQuest/microtransquest-en_de-wiki
d3fd4a0ff7fe2fb1c3e5809428b6a3d5ae27d75b
2021-06-04T08:21:18.000Z
[ "pytorch", "xlm-roberta", "token-classification", "en-de", "transformers", "Quality Estimation", "microtransquest", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
TransQuest
null
TransQuest/microtransquest-en_de-wiki
5
null
transformers
16,317
--- language: en-de tags: - Quality Estimation - microtransquest license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or they can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment. - Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps. - Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with. - Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest). ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_de-wiki", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available()) source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]]) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Check out the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation. 3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs covering both sentence level and word level. 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest. ## Citations If you are using the word-level architecture, please consider citing this paper, which has been accepted to [ACL 2021](https://2021.aclweb.org/). ```bibtex @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bibtex @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bibtex @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TransQuest/monotransquest-hter-en_cs-pharmaceutical
8ff222e3b902defbe930651fd67dabb339162a2e
2021-06-04T08:01:17.000Z
[ "pytorch", "xlm-roberta", "text-classification", "en-cs", "transformers", "Quality Estimation", "monotransquest", "hter", "license:apache-2.0" ]
text-classification
false
TransQuest
null
TransQuest/monotransquest-hter-en_cs-pharmaceutical
5
null
transformers
16,318
--- language: en-cs tags: - Quality Estimation - monotransquest - hter license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or they can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment. - Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps. - Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with. - Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest). ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_cs-pharmaceutical", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Check out the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation. 3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs covering both sentence level and word level. 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest. ## Citations If you are using the word-level architecture, please consider citing this paper, which has been accepted to [ACL 2021](https://2021.aclweb.org/). ```bibtex @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bibtex @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bibtex @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TransQuest/monotransquest-hter-en_lv-it-smt
d2064110b7b7a907d79317583c88f65d9399044b
2021-06-04T08:05:12.000Z
[ "pytorch", "xlm-roberta", "text-classification", "en-lv", "transformers", "Quality Estimation", "monotransquest", "hter", "license:apache-2.0" ]
text-classification
false
TransQuest
null
TransQuest/monotransquest-hter-en_lv-it-smt
5
null
transformers
16,319
--- language: en-lv tags: - Quality Estimation - monotransquest - hter license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or they can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment. - Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps. - Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with. - Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest). ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_lv-it-smt", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Check out the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation. 3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs covering both sentence level and word level. 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest. ## Citations If you are using the word-level architecture, please consider citing this paper, which has been accepted to [ACL 2021](https://2021.aclweb.org/). ```bibtex @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bibtex @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bibtex @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TransQuest/siamesetransquest-da-multilingual
a623e018b04867b506588f93238156183e74a6b8
2021-06-04T11:15:44.000Z
[ "pytorch", "xlm-roberta", "feature-extraction", "multilingual-multilingual", "transformers", "Quality Estimation", "siamesetransquest", "da", "license:apache-2.0" ]
feature-extraction
false
TransQuest
null
TransQuest/siamesetransquest-da-multilingual
5
null
transformers
16,320
--- language: multilingual-multilingual tags: - Quality Estimation - siamesetransquest - da license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or they can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment. - Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps. - Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with. - Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest). ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-multilingual") predictions = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Check out the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation. 3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs covering both sentence level and word level. 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest. ## Citations If you are using the word-level architecture, please consider citing this paper, which has been accepted to [ACL 2021](https://2021.aclweb.org/). ```bibtex @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bibtex @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bibtex @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TransQuest/siamesetransquest-da-ne_en-wiki
ff2fca29a8fa21e3cffdec1f0eb374ebb855c361
2021-06-04T11:20:50.000Z
[ "pytorch", "xlm-roberta", "feature-extraction", "ne-en", "transformers", "Quality Estimation", "siamesetransquest", "da", "license:apache-2.0" ]
feature-extraction
false
TransQuest
null
TransQuest/siamesetransquest-da-ne_en-wiki
5
null
transformers
16,321
--- language: ne-en tags: - Quality Estimation - siamesetransquest - da license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or they can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment. - Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps. - Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with. - Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest). ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-ne_en-wiki") predictions = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Check out the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation. 3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs covering both sentence level and word level. 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest. ## Citations If you are using the word-level architecture, please consider citing this paper, which has been accepted to [ACL 2021](https://2021.aclweb.org/). ```bibtex @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bibtex @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bibtex @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TransQuest/siamesetransquest-da-ro_en-wiki
51e12e62da0498c08d6498c04c76e342a9ffd579
2021-06-04T08:14:24.000Z
[ "pytorch", "xlm-roberta", "feature-extraction", "ro-en", "transformers", "Quality Estimation", "siamesetransquest", "da", "license:apache-2.0" ]
feature-extraction
false
TransQuest
null
TransQuest/siamesetransquest-da-ro_en-wiki
5
null
transformers
16,322
--- language: ro-en tags: - Quality Estimation - siamesetransquest - da license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can be employed to select the best translation when several translation engines are available, or they can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment. - Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps. - Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with. - Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest). ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-ro_en-wiki") predictions = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Check out the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation. 3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs covering both sentence level and word level. 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest. ## Citations If you are using the word-level architecture, please consider citing this paper, which has been accepted to [ACL 2021](https://2021.aclweb.org/). ```bibtex @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bibtex @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bibtex @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
TuhinColumbia/dutchpoetrymany
b14e912dbd7e54a0a1e52f37234d2f8c7dcf8a6a
2021-09-06T17:18:27.000Z
[ "pytorch", "mbart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
TuhinColumbia
null
TuhinColumbia/dutchpoetrymany
5
null
transformers
16,323
Entry not found
Unbabel/XLM-R-18L
441c0cba3edef0e19bbdbc26124b3dcbc877d2be
2022-01-05T20:49:12.000Z
[ "pytorch", "xlm-roberta", "feature-extraction", "transformers" ]
feature-extraction
false
Unbabel
null
Unbabel/XLM-R-18L
5
null
transformers
16,324
Entry not found
Unbabel/XLM-R-4L
9a4a46f853952b85fc21feb13b926d02a75be992
2022-01-05T19:10:49.000Z
[ "pytorch", "xlm-roberta", "feature-extraction", "transformers" ]
feature-extraction
false
Unbabel
null
Unbabel/XLM-R-4L
5
null
transformers
16,325
Entry not found
V3RX2000/distilbert-base-uncased-finetuned-ner
ccf14ef0e43026d56c8721b2ac5f1dc6a3604286
2021-10-13T02:30:36.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
V3RX2000
null
V3RX2000/distilbert-base-uncased-finetuned-ner
5
null
transformers
16,326
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9272043367629162 - name: Recall type: recall value: 0.9375769101689228 - name: F1 type: f1 value: 0.932361775503393 - name: Accuracy type: accuracy value: 0.984193051297123 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0612 - Precision: 0.9272 - Recall: 0.9376 - F1: 0.9324 - Accuracy: 0.9842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2495 | 1.0 | 878 | 0.0701 | 0.9191 | 0.9229 | 0.9210 | 0.9815 | | 0.0526 | 2.0 | 1756 | 0.0613 | 0.9216 | 0.9350 | 0.9283 | 0.9832 | | 0.0312 | 3.0 | 2634 | 0.0612 | 0.9272 | 0.9376 | 0.9324 | 0.9842 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
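As with most conll2003 fine-tunes, the model above can be queried through the token-classification pipeline; `aggregation_strategy="simple"` merges word pieces into entity spans (a standard transformers option — the card itself shows no usage, so this is a hedged sketch):

```python
from transformers import pipeline

# CoNLL-2003 NER tagger; entity labels (PER/ORG/LOC/MISC) follow the dataset.
ner = pipeline(
    "token-classification",
    model="V3RX2000/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face was founded in New York by Clement Delangue."))
```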
VaibhS/quantized_model_update
d591c8d0a56b02a13f174bebb7965d0fa69a2000
2022-01-04T10:38:03.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
VaibhS
null
VaibhS/quantized_model_update
5
null
transformers
16,327
Entry not found
Vasudev/discharge_albert
4961d8f010d95d46663251577f95a76b264d2a52
2021-05-17T10:37:47.000Z
[ "pytorch", "albert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Vasudev
null
Vasudev/discharge_albert
5
null
transformers
16,328
Entry not found
Vibharkchauhan/distilbert-base-uncased-finetuned-ner
0cb1b54d9a1c0fb40997383505363c454c38f9ca
2022-01-24T10:30:44.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
Vibharkchauhan
null
Vibharkchauhan/distilbert-base-uncased-finetuned-ner
5
null
transformers
16,329
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9192622045504749 - name: Recall type: recall value: 0.9310884886452623 - name: F1 type: f1 value: 0.9251375534930251 - name: Accuracy type: accuracy value: 0.9823820039080496 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0626 - Precision: 0.9193 - Recall: 0.9311 - F1: 0.9251 - Accuracy: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2393 | 1.0 | 878 | 0.0732 | 0.9052 | 0.9207 | 0.9129 | 0.9801 | | 0.0569 | 2.0 | 1756 | 0.0626 | 0.9193 | 0.9311 | 0.9251 | 0.9824 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
Vilnius-Lithuania-iGEM/Albumin
b4ff0481da84a277b5a303cf5f04133690384da8
2021-09-13T18:15:57.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Vilnius-Lithuania-iGEM
null
Vilnius-Lithuania-iGEM/Albumin
5
null
transformers
16,330
# Albumin-15s ## Model description This is a version of [Albert-base-v2](https://huggingface.co/albert-base-v2) for comparing 15-mer aptamers to determine which of a pair has the higher affinity for the target protein Albumin. The ALBERT model was pretrained on English; natural language shares many similarities with proteins and aptamers, which is why we fine-tuned it to help the model learn embedded positioning for aptamers and distinguish sequences better. More information can be found in our [github]() and our iGEM [wiki](). ## Intended uses & limitations You can use the fine-tuned model to classify masked aptamer-pair sequences (predicting which of two aptamers has the higher affinity for the target protein Albumin), but it is mostly intended to be fine-tuned again on aptamers of a different length or on expanded datasets. #### How to use This model can be used to predict compared affinity with a dataset-preprocessing function that encodes the specific type of data (Sequence1, Sequence2, Label), where Label indicates in binary whether Sequence1 has the higher affinity for the target protein Albumin. ```python from transformers import AutoTokenizer, BertModel mname = "Vilnius-Lithuania-iGEM/Albumin" tokenizer = AutoTokenizer.from_pretrained(mname) model = BertModel.from_pretrained(mname) ``` To predict batches of sequences you have to employ the custom functions shown in [git/prediction.ipynb]() #### Limitations and bias It seems that the fine-tuned ALBERT model for this kind of task has a ceiling of about 90% accuracy in predicting which aptamer is more suitable for a target protein; ALBERT-large or an immense dataset of 15-mer aptamers could increase accuracy by a few percent. However, the extrapolation case is not studied, and we cannot confirm this model is state-of-the-art when one of the aptamers is exceptionally good (has almost maximum entropy with respect to Albumin). ## Eval results accuracy: 0.8601, precision: 0.8515, recall: 0.8725, f1: 0.8618, roc_auc: 0.9388. The scores were calculated using sklearn.metrics.
Wasabi42/Joker_Model
1624a5140c7289e1474842c381dd0ad5b7ad5115
2022-02-24T01:57:46.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
Wasabi42
null
Wasabi42/Joker_Model
5
null
transformers
16,331
Entry not found
Wataru/sentence-roberta-tiny
03fc6795eeab903c529dfbecd0d3e0a3aee641bb
2021-12-06T03:39:45.000Z
[ "pytorch", "feature-extraction", "transformers" ]
feature-extraction
false
Wataru
null
Wataru/sentence-roberta-tiny
5
null
transformers
16,332
Entry not found
WikinewsSum/t5-base-with-title-multi-de-wiki-news
1c49fa27ae4f9863b70bde4f12c417537eff833c
2021-06-23T10:43:35.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
WikinewsSum
null
WikinewsSum/t5-base-with-title-multi-de-wiki-news
5
null
transformers
16,333
Entry not found
WikinewsSum/t5-base-with-title-multi-fr-wiki-news
959d0282c317f87a9116612591928f9addecf700
2021-06-23T10:46:06.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
WikinewsSum
null
WikinewsSum/t5-base-with-title-multi-fr-wiki-news
5
null
transformers
16,334
Entry not found
Win-Win-option/RUT5-for-salaries
10d70cd17e34dd1923f1431ff878d30bc215ca96
2021-10-16T14:08:38.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Win-Win-option
null
Win-Win-option/RUT5-for-salaries
5
null
transformers
16,335
Entry not found
Yanjie/message-preamble
1ce1760fbde7eb67fe697e62b91dfb4fc771a928
2022-03-21T18:33:28.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
Yanjie
null
Yanjie/message-preamble
5
null
transformers
16,336
This is the concierge preamble model, fine-tuned from the DistilBERT uncased model.
Zane/Ricky
06a3764500d9d85866547850884b1f70f2ca5eb8
2021-07-29T14:20:26.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "license:mit" ]
conversational
false
Zane
null
Zane/Ricky
5
null
transformers
16,337
--- thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png tags: - conversational license: mit --- # DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) trained on a game character, Neku Sakuraba from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("Zane/Ricky") model = AutoModelWithLMHead.from_pretrained("Zane/Ricky") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a PyTorch tensor new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 200 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty-print the last output tokens from the bot print("NekuBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
aXhyra/demo_emotion_42
e59b2fcc8e0c41342c6123ebf725d3471c997f93
2021-12-13T18:13:57.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
aXhyra
null
aXhyra/demo_emotion_42
5
null
transformers
16,338
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: demo_emotion_42 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: emotion metrics: - name: F1 type: f1 value: 0.7348035780583043 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_emotion_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9818 - F1: 0.7348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.551070618629693e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.7431 | 0.6530 | | No log | 2.0 | 408 | 0.6943 | 0.7333 | | 0.5176 | 3.0 | 612 | 0.8456 | 0.7326 | | 0.5176 | 4.0 | 816 | 0.9818 | 0.7348 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
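### Example usage (illustrative) The generated card stops at training details, so here is a minimal inference sketch; it is an addition to the card, the sample tweet is invented, and the raw config may expose generic `LABEL_0`–`LABEL_3` ids rather than the tweet_eval emotion names.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and score one example tweet.
classifier = pipeline("text-classification", model="aXhyra/demo_emotion_42")
print(classifier("I can't believe we finally won the championship!"))
```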
aXhyra/demo_irony_42
480c9db07a03cf28662c0572c8b7cd9b063825b6
2021-12-13T17:51:38.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
aXhyra
null
aXhyra/demo_irony_42
5
null
transformers
16,339
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: demo_irony_42 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: irony metrics: - name: F1 type: f1 value: 0.685764300192161 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_irony_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.2905 - F1: 0.6858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7735294032820418e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 358 | 0.5872 | 0.6786 | | 0.5869 | 2.0 | 716 | 0.6884 | 0.6952 | | 0.3417 | 3.0 | 1074 | 0.9824 | 0.6995 | | 0.3417 | 4.0 | 1432 | 1.2905 | 0.6858 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
aXhyra/hate_trained_1234567
c9e6bbd1d744c149c9817637a231c53d4ce9c02f
2021-12-12T13:02:26.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
aXhyra
null
aXhyra/hate_trained_1234567
5
null
transformers
16,340
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: hate_trained_1234567 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: hate metrics: - name: F1 type: f1 value: 0.7750768993843997 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hate_trained_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.7912 - F1: 0.7751 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7272339744854407e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1234567 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4835 | 1.0 | 563 | 0.4881 | 0.7534 | | 0.3236 | 2.0 | 1126 | 0.5294 | 0.7610 | | 0.219 | 3.0 | 1689 | 0.6095 | 0.7717 | | 0.1409 | 4.0 | 2252 | 0.7912 | 0.7751 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
aXhyra/hate_trained_31415
9b522e9daec11d579cd9eb8c8a7f9846377a11e2
2021-12-12T12:57:50.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
aXhyra
null
aXhyra/hate_trained_31415
5
null
transformers
16,341
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: hate_trained_31415 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: hate metrics: - name: F1 type: f1 value: 0.7729447444817463 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hate_trained_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8568 - F1: 0.7729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7272339744854407e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 31415 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.482 | 1.0 | 563 | 0.4973 | 0.7672 | | 0.3316 | 2.0 | 1126 | 0.4931 | 0.7794 | | 0.2308 | 3.0 | 1689 | 0.7073 | 0.7593 | | 0.1444 | 4.0 | 2252 | 0.8568 | 0.7729 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
aXhyra/hate_trained_42
acca7d0a1b77fadec778000f6d796a0dc8228b98
2021-12-12T12:46:30.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
aXhyra
null
aXhyra/hate_trained_42
5
null
transformers
16,342
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: hate_trained_42 results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: hate metrics: - name: F1 type: f1 value: 0.7712319060633668 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hate_trained_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8994 - F1: 0.7712 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7272339744854407e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4835 | 1.0 | 563 | 0.4855 | 0.7556 | | 0.3277 | 2.0 | 1126 | 0.5354 | 0.7704 | | 0.2112 | 3.0 | 1689 | 0.6870 | 0.7751 | | 0.1384 | 4.0 | 2252 | 0.8994 | 0.7712 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
aXhyra/sentiment_temp
f876fc75711c809710e14b6f52f916ab8520336c
2021-12-11T01:36:38.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
aXhyra
null
aXhyra/sentiment_temp
5
null
transformers
16,343
Entry not found
aXhyra/test_irony_trained_test
f966877b19a954c35b2b02a61862c9e57e8711fe
2021-12-12T17:02:51.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:tweet_eval", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
aXhyra
null
aXhyra/test_irony_trained_test
5
null
transformers
16,344
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - f1 model-index: - name: test_irony_trained_test results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval args: irony metrics: - name: F1 type: f1 value: 0.6680395323922843 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_irony_trained_test This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.7674 - F1: 0.6680 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.207906329883037e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 358 | 0.6655 | 0.5924 | | 0.684 | 2.0 | 716 | 0.6889 | 0.6024 | | 0.5826 | 3.0 | 1074 | 0.7085 | 0.6488 | | 0.5826 | 4.0 | 1432 | 0.7674 | 0.6680 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
aapot/wav2vec2-xlsr-300m-finnish-lm
ee8bd1801a504ba85c3ef6c9216d55f326f38520
2022-03-28T17:22:08.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fi", "dataset:mozilla-foundation/common_voice_7_0", "arxiv:2111.09296", "transformers", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
aapot
null
aapot/wav2vec2-xlsr-300m-finnish-lm
5
null
transformers
16,345
--- license: apache-2.0 language: fi metrics: - wer - cer tags: - automatic-speech-recognition - fi - finnish - generated_from_trainer - hf-asr-leaderboard - robust-speech-event datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: wav2vec2-xlsr-300m-finnish-lm results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: fi metrics: - name: Test WER type: wer value: 8.16 - name: Test CER type: cer value: 1.97 --- # Wav2Vec2 XLS-R for Finnish ASR This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in [this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20). This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model. **Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm) model; it has simply been copied/moved to the `Finnish-NLP` Hugging Face organization. ## Model description Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107, using the wav2vec 2.0 objective, in 128 languages. You can read more about the pretrained model in [this blog post](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296). This model is a fine-tuned version of the pretrained model (300 million parameter variant) for Finnish ASR. ## Intended uses & limitations You can use this model for the Finnish ASR (speech-to-text) task. ### How to use Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-300m-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model. ### Limitations and bias This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audio of similar length. However, you can try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking). The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so the model may not generalize well to very different domains such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example. The Finnish KenLM language model used in the decoding phase was trained on text from the audio transcriptions and a subset of Finnish Wikipedia. Consequently, the decoder's language model may not generalize to very different language varieties, such as everyday spoken language with dialects (especially because Wikipedia contains mostly formal Finnish).
It may be beneficial to train your own KenLM language model for your domain language and use that in the decoding. ## Training data This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:

| Dataset | Hours | % of total hours |
|:--------|:-----:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |

Datasets were filtered to include only audio samples with a maximum length of 20 seconds. ## Training procedure This model was trained during the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud. The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets. For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM consisted of the text transcriptions of the audio training data and 100k random samples of the cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-04 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP The pretrained `facebook/wav2vec2-xls-r-300m` model was initialized with following hyperparameters: - attention_dropout: 0.094 - hidden_dropout: 0.047 - feat_proj_dropout: 0.04 - mask_time_prob: 0.082 - layerdrop: 0.041 - activation_dropout: 0.055 - ctc_loss_reduction: "mean" ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.973 | 0.17 | 500 | 0.5750 | 0.6844 | | 0.713 | 0.34 | 1000 | 0.3356 | 0.4518 | | 0.6563 | 0.5 | 1500 | 0.3007 | 0.4039 | | 0.642 | 0.67 | 2000 | 0.2619 | 0.3674 | | 0.6203 | 0.84 | 2500 | 0.2488 | 0.3558 | | 0.6016 | 1.01 | 3000 | 0.2795 | 0.3835 | | 0.5423 | 1.17 | 3500 | 0.2652 | 0.3310 | | 0.5639 | 1.34 | 4000 | 0.2479 | 0.3462 | | 0.586 | 1.51 | 4500 | 0.2409 | 0.3295 | | 0.5169 | 1.68 | 5000 | 0.2728 | 0.3352 | | 0.5176 | 1.84 | 5500 | 0.2254 | 0.3149 | | 0.4983 | 2.01 | 6000 | 0.2169 | 0.3009 | | 0.4982 | 2.18 | 6500 | 0.2215 | 0.3079 | | 0.4898 | 2.35 | 7000 | 0.2174 | 0.3023 | | 0.4922 | 2.51 | 7500 | 0.2217 | 0.3081 | | 0.5025 | 2.68 | 8000 | 0.2002 | 0.2710 | | 0.4745 | 2.85 | 8500 | 0.1935 | 0.2783 | | 0.4377 | 3.02 | 9000 | 0.1859 | 0.2742 | | 0.4511 | 3.18 | 9500 | 0.2038 | 0.2786 | | 0.4411 | 3.35 | 10000 | 0.1863 | 0.2651 | | 0.4501 | 3.52 | 10500 | 0.1948 | 0.2605 | | 0.4557 | 3.69 | 11000 | 0.1872 | 0.2695 | | 0.4493 | 3.85 | 11500 | 0.1888 | 0.2632 | | 0.4047 | 4.02 | 12000 | 0.1818 | 0.2559 | | 0.4319 | 4.19 | 12500 | 0.1896 | 0.2648 | | 0.4162 | 4.36 | 13000 | 0.1953 | 0.2595 | | 0.4046 | 4.52 | 13500 | 0.1864 | 0.2606 | | 0.4195 | 4.69 | 14000 | 0.1843 | 0.2467 | | 0.4146 | 4.86 | 14500 | 0.1686 | 0.2450 | | 0.378 | 5.03 | 15000 | 0.1731 | 0.2401 | | 0.3792 | 5.19 | 15500 | 0.1676 | 0.2325 | | 0.3855 | 5.36 | 16000 | 0.1740 | 0.2326 | | 0.4029 | 5.53 | 16500 | 0.1674 | 0.2345 | | 0.386 | 5.7 | 17000 | 0.1735 | 0.2280 | | 0.3811 | 5.86 | 17500 | 0.1692 | 0.2258 | | 0.3607 | 6.03 | 18000 | 0.1797 | 0.2279 | | 0.3604 | 6.2 | 18500 | 0.1651 | 0.2206 | | 0.3362 | 6.37 | 19000 | 0.1627 | 0.2199 | | 0.3611 | 6.53 | 19500 | 0.1652 | 0.2172 | | 0.3671 | 6.7 | 20000 | 0.1564 | 0.2140 | | 0.3769 | 6.87 | 20500 | 0.1525 | 0.2101 | | 0.3539 | 7.04 | 21000 | 0.1639 | 0.2096 | | 0.3225 | 7.21 | 21500 | 0.1611 | 0.2087 | | 0.3323 | 7.37 | 22000 | 0.1633 | 0.2008 | | 0.3327 | 7.54 | 22500 | 0.1692 | 0.1975 | | 0.3456 | 7.71 | 23000 | 0.1555 | 0.1991 | | 0.3058 | 7.88 | 23500 | 0.1590 | 0.1959 | | 0.3034 | 8.04 | 24000 | 0.1531 | 0.1973 | | 0.2925 | 8.21 | 24500 | 0.1583 | 0.1978 | | 0.2967 | 8.38 | 25000 | 0.1546 | 0.1906 | | 0.2974 | 8.55 | 25500 | 0.1540 | 0.1869 | | 0.3131 | 8.71 | 26000 | 0.1534 | 0.1850 | | 0.3306 | 8.88 | 26500 | 0.1482 | 0.1844 | | 0.2842 | 9.05 | 27000 | 0.1490 | 0.1854 | | 0.2879 | 9.22 | 27500 | 0.1463 | 0.1799 | | 0.27 | 9.38 | 28000 | 0.1454 | 0.1798 | | 0.2874 | 9.55 | 28500 | 0.1504 | 0.1787 | | 0.2757 | 9.72 | 29000 | 0.1512 | 0.1784 | | 0.3017 | 9.89 | 29500 | 0.1484 | 0.1800 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0 ## Evaluation results Evaluation was done with the [Common Voice 
7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). To evaluate this model, run the `eval.py` script in this repository: ```bash python3 eval.py --model_id aapot/wav2vec2-xlsr-300m-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test ``` This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models: | | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) | |-----------------------------------------|---------------|------------------|---------------|------------------| |aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** | |aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 | |aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 | ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
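### Example usage (illustrative) The notebook linked above is the authoritative reference; the sketch below is an addition that shows the plain `transformers` route. `audio.wav` is a placeholder for a 16 kHz mono recording, and LM-boosted decoding assumes `pyctcdecode` and `kenlm` are installed so the bundled KenLM decoder can be loaded.

```python
from transformers import pipeline

# The checkpoint ships a KenLM decoder; with pyctcdecode + kenlm installed,
# the ASR pipeline decodes with the language model automatically.
asr = pipeline("automatic-speech-recognition", model="aapot/wav2vec2-xlsr-300m-finnish-lm")

# chunk_length_s splits long recordings, per the chunking blog post cited above.
print(asr("audio.wav", chunk_length_s=20))
```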
aarnphm/finetune_emotion_distilroberta
28d85eb7127fb3e94f12dfcf80e59ed5d9986097
2022-02-23T01:58:51.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
aarnphm
null
aarnphm/finetune_emotion_distilroberta
5
null
transformers
16,346
Entry not found
abdelkader/pegasus-samsum
633d1e5d630ddb0a45f68f11adec57dd7630edaa
2022-01-14T21:30:36.000Z
[ "pytorch", "tensorboard", "pegasus", "text2text-generation", "dataset:samsum", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
abdelkader
null
abdelkader/pegasus-samsum
5
null
transformers
16,347
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6936 | 0.54 | 500 | 1.4844 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
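### Example usage (illustrative) The sketch below is an addition to the generated card; the dialogue is invented in the SAMSum style the model was fine-tuned on.

```python
from transformers import pipeline

# Summarize a short chat transcript with the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="abdelkader/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place?\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue)[0]["summary_text"])
```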
abhishek/autonlp-fred2-2682064
15eec60f5bf45ed1d9231622a0638d40802fcef7
2021-07-30T13:11:02.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:abhishek/autonlp-data-fred2", "transformers", "autonlp" ]
text-classification
false
abhishek
null
abhishek/autonlp-fred2-2682064
5
null
transformers
16,348
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - abhishek/autonlp-data-fred2 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 2682064 ## Validation Metrics - Loss: 0.4454168379306793 - Accuracy: 0.8188976377952756 - Precision: 0.8442028985507246 - Recall: 0.7103658536585366 - AUC: 0.8699702146791053 - F1: 0.771523178807947 ## Usage You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-fred2-2682064
```

Or Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-fred2-2682064", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-fred2-2682064", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
abhishek/autonlp-imdb-roberta-base-3662644
7a334902f5d8351db4baaf775183b7e9075817c0
2022-02-04T14:25:35.000Z
[ "pytorch", "roberta", "text-classification", "unk", "dataset:abhishek/autonlp-data-imdb-roberta-base", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
abhishek
null
abhishek/autonlp-imdb-roberta-base-3662644
5
null
transformers
16,349
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - abhishek/autonlp-data-imdb-roberta-base co2_eq_emissions: 25.894117734124272 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 3662644 - CO2 Emissions (in grams): 25.894117734124272 ## Validation Metrics - Loss: 0.20277436077594757 - Accuracy: 0.92604 - Precision: 0.9560674830864092 - Recall: 0.89312 - AUC: 0.9814625504000001 - F1: 0.9235223559581421 ## Usage You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb-roberta-base-3662644
```

Or Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers
5ebad34642f11f1a96caa2653458b2a23ee98b85
2019-12-25T17:08:38.000Z
[ "pytorch", "transformers" ]
null
false
adamlin
null
adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers
5
null
transformers
16,350
Entry not found
adamlin/ml999_wood
047289659e809a0d9590592ac070132c396841bf
2021-12-20T16:44:24.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
false
adamlin
null
adamlin/ml999_wood
5
null
transformers
16,351
Entry not found
addy88/T5-23-emotions-detections
a973aad4439dcded7964ef106e9c86154ce240a7
2022-01-17T12:08:03.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
addy88
null
addy88/T5-23-emotions-detections
5
null
transformers
16,352
### How to use Here is how to use this model in PyTorch:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

model = T5ForConditionalGeneration.from_pretrained("addy88/T5-23-emotions-detections").to(device)
tokenizer = T5Tokenizer.from_pretrained("addy88/T5-23-emotions-detections")

text_to_classify = "emotion: i don't like it this is nonsense."
input_ids = tokenizer.encode(text_to_classify, return_tensors="pt", add_special_tokens=True).to(device)

generated_ids = model.generate(
    input_ids=input_ids,
    num_beams=2,
    max_length=512,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True,
    top_p=0.95,
    top_k=50,
    num_return_sequences=1,
)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)
```
addy88/eli5-all-mpnet-base-v2
7f5cfab9bb2f77acac3ac71f7cbd15453cd7c771
2022-01-14T13:24:40.000Z
[ "pytorch", "roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
addy88
null
addy88/eli5-all-mpnet-base-v2
5
null
sentence-transformers
16,353
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Fine-tuned on [ELI5](https://huggingface.co/datasets/eli5) <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('addy88/eli5-all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('addy88/eli5-all-mpnet-base-v2')
model = AutoModel.from_pretrained('addy88/eli5-all-mpnet-base-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=addy88/eli5-all-mpnet-base-v2) ## Training The model was trained with the parameters: **DataLoader**: `sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 14393 with parameters:

```
{'batch_size': 16}
```

**Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit() method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1439,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```
addy88/hubert-base-timit-demo-colab
6d1b34405db267cbe77c24f21828aa77b6a4b0cd
2021-12-12T12:13:30.000Z
[ "pytorch", "tensorboard", "hubert", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
addy88
null
addy88/hubert-base-timit-demo-colab
5
null
transformers
16,354
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: hubert-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hubert-base-timit-demo-colab This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1092 - Wer: 0.1728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.4664 | 4.0 | 500 | 2.3026 | 0.9866 | | 0.8171 | 8.0 | 1000 | 0.0980 | 0.1885 | | 0.2983 | 12.0 | 1500 | 0.0943 | 0.1750 | | 0.1769 | 16.0 | 2000 | 0.0990 | 0.1737 | | 0.1823 | 20.0 | 2500 | 0.1068 | 0.1757 | | 0.0761 | 24.0 | 3000 | 0.1041 | 0.1719 | | 0.0993 | 28.0 | 3500 | 0.1092 | 0.1728 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
addy88/wav2vec2-urdu-stt
225f8cb920cbfa29d20f7a54e7968c6f8c3c7372
2021-12-19T15:47:47.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
false
addy88
null
addy88/wav2vec2-urdu-stt
5
null
transformers
16,355
## Usage The model can be used directly (without a language model) as follows:

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # load pretrained model
    processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-urdu-stt")
    model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-urdu-stt")

    # load audio
    audio_input, sample_rate = sf.read(wav_file)

    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

    # inference: retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)

    # transcribe
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)

parse_transcription("sample.wav")  # "sample.wav" is a placeholder for a 16 kHz recording
```
aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-roberta-base
39afb411f83e8de719fa93dc4c6e4e357883ec0a
2021-11-22T15:17:03.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-iitp_pdt_review-additionalpretrained-roberta-base
5
null
transformers
16,356
Entry not found
aditeyabaral/finetuned-iitp_pdt_review-distilbert-hinglish-small
e0cd4a5381930b70921b627e60986a1927ef5d93
2021-11-26T17:29:29.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-iitp_pdt_review-distilbert-hinglish-small
5
null
transformers
16,357
Entry not found
aditeyabaral/finetuned-iitp_pdt_review-roberta-base
d6064bf6b140bf5fa7f8d70678568844e60aa265
2021-11-25T20:49:26.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-iitp_pdt_review-roberta-base
5
null
transformers
16,358
Entry not found
aditeyabaral/finetuned-iitp_pdt_review-roberta-hinglish-big
c864b45ece278694dd789ef64776d4df2683693d
2021-11-26T18:05:10.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-iitp_pdt_review-roberta-hinglish-big
5
null
transformers
16,359
Entry not found
aditeyabaral/finetuned-iitpmovie-additionalpretrained-bert-base-cased
1ebbdfc5497dd1a9c78d9605a13c727ac44fab4a
2021-11-23T17:24:12.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-iitpmovie-additionalpretrained-bert-base-cased
5
null
transformers
16,360
Entry not found
aditeyabaral/finetuned-sail2017-additionalpretrained-bert-base-cased
a1e2dd903b7e963d1d9b96936dff0c02c165d0af
2021-11-14T15:51:34.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-sail2017-additionalpretrained-bert-base-cased
5
null
transformers
16,361
Entry not found
aditeyabaral/finetuned-sail2017-additionalpretrained-distilbert-base-cased
8f95adf562e1162bba2dfbe0ac156663ea421f59
2021-11-14T15:30:29.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-sail2017-additionalpretrained-distilbert-base-cased
5
null
transformers
16,362
Entry not found
aditeyabaral/finetuned-sail2017-additionalpretrained-indic-bert
b668d2b21d34e3daea16710926e79d6a4b463f2c
2021-11-14T16:18:10.000Z
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-sail2017-additionalpretrained-indic-bert
5
null
transformers
16,363
Entry not found
aditeyabaral/finetuned-sail2017-additionalpretrained-roberta-base
bac621e711127ecf382364adbbb1d2b0a3a7bf8a
2021-11-14T15:28:30.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-sail2017-additionalpretrained-roberta-base
5
null
transformers
16,364
Entry not found
aditeyabaral/finetuned-sail2017-additionalpretrained-xlm-roberta-base
31ec516cb63868ff07d94abbef5a3d59f3983daa
2021-11-14T15:37:12.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-sail2017-additionalpretrained-xlm-roberta-base
5
null
transformers
16,365
Entry not found
aditeyabaral/finetuned-sail2017-distilbert-base-cased
9ae7348221c0c63cae75c43c771b08f7046cc46b
2021-11-14T15:25:13.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-sail2017-distilbert-base-cased
5
null
transformers
16,366
Entry not found
aditeyabaral/finetuned-sail2017-roberta-base
ad2f5ce1679e437cb58b018baca938a859fc1a92
2021-11-14T15:23:20.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-sail2017-roberta-base
5
null
transformers
16,367
Entry not found
aditeyabaral/finetuned-sail2017-xlm-roberta-base
4684e229c1f56edb6a833968aee2e16f6e617d68
2021-11-14T15:47:32.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
false
aditeyabaral
null
aditeyabaral/finetuned-sail2017-xlm-roberta-base
5
null
transformers
16,368
Entry not found
aditi2222/t5_paraphrase_updated
147dc4dd84adb913a196fcbe8f846d8dba564d19
2021-11-30T07:57:43.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
aditi2222
null
aditi2222/t5_paraphrase_updated
5
null
transformers
16,369
Entry not found
adp12/cs410finetune1
20a716aa165a89275c5dc2e7852ae48e1e0fb563
2021-12-09T03:15:54.000Z
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
false
adp12
null
adp12/cs410finetune1
5
null
transformers
16,370
Entry not found
adriansyahdr/adrBert-base-p2
80378d49709b6baef526cd46d50e7f74ff3c1235
2021-05-18T23:11:14.000Z
[ "pytorch", "tf", "jax", "bert", "pretraining", "transformers" ]
null
false
adriansyahdr
null
adriansyahdr/adrBert-base-p2
5
null
transformers
16,371
Entry not found
aicast/bert_finetuning_test
43de0e55785c005164942cb716c2f029d961e94f
2021-05-18T23:17:12.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
aicast
null
aicast/bert_finetuning_test
5
null
transformers
16,372
Entry not found
aidj/distilbert-base-uncased-finetuned-ner
03ad6ebbbf48a70a5ea51b5788928911ef7aa3d3
2022-02-07T07:19:58.000Z
[ "pytorch", "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
aidj
null
aidj/distilbert-base-uncased-finetuned-ner
5
null
transformers
16,373
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9260322366968425 - name: Recall type: recall value: 0.9383599955252265 - name: F1 type: f1 value: 0.9321553592265377 - name: Accuracy type: accuracy value: 0.9834146186474335 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0607 - Precision: 0.9260 - Recall: 0.9384 - F1: 0.9322 - Accuracy: 0.9834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2545 | 1.0 | 878 | 0.0711 | 0.9096 | 0.9214 | 0.9154 | 0.9800 | | 0.0555 | 2.0 | 1756 | 0.0593 | 0.9185 | 0.9356 | 0.9270 | 0.9827 | | 0.0297 | 3.0 | 2634 | 0.0607 | 0.9260 | 0.9384 | 0.9322 | 0.9834 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
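### Example usage (illustrative) An addition to the generated card: the sentence is invented, and entity labels follow the CoNLL-2003 scheme (PER, ORG, LOC, MISC) the model was fine-tuned on.

```python
from transformers import pipeline

# Group word-piece predictions into whole entities.
ner = pipeline(
    "ner",
    model="aidj/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```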
airKlizz/mt5-base-wikinewssum-spanish
ef63dd1e25ba9d328bb4fad0ae8d86577212fab3
2021-12-25T23:19:15.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "summarization", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
airKlizz
null
airKlizz/mt5-base-wikinewssum-spanish
5
null
transformers
16,374
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-base-wikinewssum-spanish results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-wikinewssum-spanish This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2394 - Rouge1: 7.9732 - Rouge2: 3.5041 - Rougel: 6.6713 - Rougelsum: 7.5229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 528 | 2.3707 | 6.687 | 2.9169 | 5.6793 | 6.2978 | | No log | 2.0 | 1056 | 2.3140 | 7.9518 | 3.4529 | 6.7265 | 7.4984 | | No log | 3.0 | 1584 | 2.2848 | 7.9708 | 3.5344 | 6.7272 | 7.534 | | No log | 4.0 | 2112 | 2.2668 | 8.0252 | 3.5323 | 6.7319 | 7.5819 | | 3.2944 | 5.0 | 2640 | 2.2532 | 8.0143 | 3.534 | 6.7155 | 7.582 | | 3.2944 | 6.0 | 3168 | 2.2399 | 7.9525 | 3.4849 | 6.6716 | 7.5155 | | 3.2944 | 7.0 | 3696 | 2.2376 | 7.9405 | 3.4661 | 6.6559 | 7.5043 | | 3.2944 | 8.0 | 4224 | 2.2394 | 7.9732 | 3.5041 | 6.6713 | 7.5229 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.1 - Datasets 1.16.1 - Tokenizers 0.10.3
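### Example usage (illustrative) An addition to the generated card; the Spanish input is invented, and the generation settings are reasonable defaults rather than the values used during training.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("airKlizz/mt5-base-wikinewssum-spanish")
model = AutoModelForSeq2SeqLM.from_pretrained("airKlizz/mt5-base-wikinewssum-spanish")

text = "El gobierno anunció hoy un paquete de medidas económicas para frenar la inflación."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Beam search with a modest length cap; tune both for your inputs.
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```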
airKlizz/t5-base-with-title-multi-en-wiki-news
d6ee01f5be4af0baa32c33b3ce7fdc48d7c8e6f8
2021-06-23T10:59:29.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
airKlizz
null
airKlizz/t5-base-with-title-multi-en-wiki-news
5
null
transformers
16,375
Entry not found
alexbrandsen/ArcheoBERTje
67f9329ca61ab9756dcb1255c37085caa6fce9b7
2021-05-18T23:22:51.000Z
[ "pytorch", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
alexbrandsen
null
alexbrandsen/ArcheoBERTje
5
null
transformers
16,376
# ArcheoBERTje A Dutch BERT model for the archaeology domain. This model is based on the Dutch BERTje model by wietsedv (https://github.com/wietsedv/bertje). We further fine-tuned BERTje on a corpus of roughly 60k Dutch excavation reports (~650 million tokens) from the DANS data archive (https://easy.dans.knaw.nl/ui/home).
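Example usage (an illustrative addition, not from the original description; the Dutch sentence about an excavation find is invented):

```python
from transformers import pipeline

# Top predictions for the masked token in a Dutch archaeology sentence.
fill = pipeline("fill-mask", model="alexbrandsen/ArcheoBERTje")
for pred in fill("In de opgraving werd een [MASK] uit de Romeinse tijd gevonden."):
    print(pred["token_str"], round(pred["score"], 3))
```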
ali2066/distilbert-base-uncased-finetuned-sst-2-english-finetuned-argmining
5f134309fcd55bb011757a3dc3bd65895d498aaf
2022-02-25T20:27:49.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "transformers" ]
text-classification
false
ali2066
null
ali2066/distilbert-base-uncased-finetuned-sst-2-english-finetuned-argmining
5
null
transformers
16,377
Entry not found
alireza7/ARMAN-SH-persian-base-parsinlu-textual-entailment
50e180aae86396406525ca508ac22ab7a7172226
2021-09-29T19:19:02.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
alireza7
null
alireza7/ARMAN-SH-persian-base-parsinlu-textual-entailment
5
null
transformers
16,378
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
alireza7/ARMAN-SS-100-persian-base-perkey-summary
70022911b9bb0ca5882725b1c267bb0cf9e78d07
2021-09-29T19:21:12.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
alireza7
null
alireza7/ARMAN-SS-100-persian-base-perkey-summary
5
null
transformers
16,379
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
alireza7/ARMAN-SS-80-persian-base-parsinlu-textual-entailment
586c3428fa12df99920421c0aa63178b80adc796
2021-09-29T19:23:20.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
alireza7
null
alireza7/ARMAN-SS-80-persian-base-parsinlu-textual-entailment
5
null
transformers
16,380
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
alireza7/PEGASUS-persian-base-parsinlu-qqp
b95d38988987f76c5fce5ca97390b7a2f7f843c5
2021-09-29T19:25:17.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
alireza7
null
alireza7/PEGASUS-persian-base-parsinlu-qqp
5
null
transformers
16,381
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
alireza7/TRANSFORMER-persian-base-voa-title
73cf128592be0055624ac5a65494c42c5cb5500b
2021-09-29T19:26:59.000Z
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
alireza7
null
alireza7/TRANSFORMER-persian-base-voa-title
5
null
transformers
16,382
More information about the models is available [here](https://github.com/alirezasalemi7/ARMAN).
allenai/dsp_roberta_base_dapt_news_tapt_ag_115K
45442e6edbc6c5cdc5fbd13f70a44cd6c40e3db9
2021-05-20T13:10:52.000Z
[ "pytorch", "jax", "roberta", "transformers" ]
null
false
allenai
null
allenai/dsp_roberta_base_dapt_news_tapt_ag_115K
5
null
transformers
16,383
Entry not found
allenai/dsp_roberta_base_dapt_reviews_tapt_imdb_20000
e4c43c9dd34f5983b649bcae673c5b15df5453e8
2021-05-20T13:15:59.000Z
[ "pytorch", "jax", "roberta", "transformers" ]
null
false
allenai
null
allenai/dsp_roberta_base_dapt_reviews_tapt_imdb_20000
5
null
transformers
16,384
Entry not found
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_dropout_0.0001_8
3f199022468646fe157f3e5ff604f426fdd5be9b
2021-11-24T13:10:27.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "en", "transformers", "ami", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
ami-wav2vec2
null
ami-wav2vec2/wav2vec2-large-lv60-ami_multi-tune_dropout_0.0001_8
5
null
transformers
16,385
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - ami - generated_from_trainer model-index: - name: wav2vec2-large-lv60-ami_multi-tune_dropout_0.0001_8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-lv60-ami_multi-tune_dropout_0.0001_8 This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the AMI-IHM dataset. It achieves the following results on the evaluation set: - Loss: 1.4880 - Wer: 0.4295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.8417 | 0.86 | 1000 | 2.6883 | 0.9997 | | 1.5626 | 1.72 | 2000 | 1.4253 | 0.4517 | | 1.4476 | 2.59 | 3000 | 1.3356 | 0.4157 | | 1.3874 | 3.45 | 4000 | 1.2814 | 0.4073 | | 1.3391 | 4.31 | 5000 | 1.2700 | 0.4044 | | 1.2983 | 5.17 | 6000 | 1.2423 | 0.3967 | | 1.2618 | 6.03 | 7000 | 1.2429 | 0.3879 | | 1.2414 | 6.9 | 8000 | 1.2290 | 0.3878 | | 1.2286 | 7.76 | 9000 | 1.2301 | 0.3882 | | 1.2254 | 8.62 | 10000 | 1.2140 | 0.3885 | | 1.2257 | 9.48 | 11000 | 1.2154 | 0.3840 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1 - Datasets 1.12.2.dev0 - Tokenizers 0.10.3
anantoj/wav2vec2-adult-child-cls
d4268c509d83d15b1bde05efd00b13ea17d12b4c
2022-02-23T14:29:03.000Z
[ "pytorch", "tensorboard", "wav2vec2", "audio-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
audio-classification
false
anantoj
null
anantoj/wav2vec2-adult-child-cls
5
null
transformers
16,386
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: wav2vec2-adult-child-cls results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-adult-child-cls This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1713 - Accuracy: 0.9460 - F1: 0.9509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.323 | 1.0 | 96 | 0.2699 | 0.9026 | 0.9085 | | 0.2003 | 2.0 | 192 | 0.2005 | 0.9234 | 0.9300 | | 0.1808 | 3.0 | 288 | 0.1780 | 0.9377 | 0.9438 | | 0.1537 | 4.0 | 384 | 0.1673 | 0.9441 | 0.9488 | | 0.1135 | 5.0 | 480 | 0.1713 | 0.9460 | 0.9509 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
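### Example usage (illustrative) An addition to the generated card; `speech.wav` is a placeholder for a 16 kHz recording, and the exact label names depend on the checkpoint's config.

```python
from transformers import pipeline

# Score an utterance as adult vs. child speech.
clf = pipeline("audio-classification", model="anantoj/wav2vec2-adult-child-cls")
print(clf("speech.wav"))
```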
anas-awadalla/bert-medium-pretrained-finetuned-squad
9cfffda54a415e7e3b134cf9dee2d0820823468c
2022-01-27T06:07:11.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/bert-medium-pretrained-finetuned-squad
5
null
transformers
16,387
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: bert_medium_pretrain_squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_medium_pretrain_squad This model is a fine-tuned version of [anas-awadalla/bert-medium-pretrained-on-squad](https://huggingface.co/anas-awadalla/bert-medium-pretrained-on-squad) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 0.0973 - Exact match: 77.95648060548723 - F1: 85.85300366384631 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
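### Example usage (illustrative) An addition to the generated card; the question/context pair is invented.

```python
from transformers import pipeline

# Extractive question answering with the SQuAD-finetuned checkpoint.
qa = pipeline("question-answering", model="anas-awadalla/bert-medium-pretrained-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```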
anas-awadalla/bert-small-finetuned-squad
7aba00b039646623c5f50e03c284f761b657dc0b
2022-01-24T19:25:29.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/bert-small-finetuned-squad
5
null
transformers
16,388
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-small-finetuned-squad
  results: []
---

# bert-small-finetuned-squad

This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3138
- eval_runtime: 46.6577
- eval_samples_per_second: 231.13
- eval_steps_per_second: 14.446
- epoch: 4.0
- step: 22132
- exact_match: 71.05960264900662
- f1: 80.8260245470904

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
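## Example usage

For completeness, the same extractive-QA step can be done without the pipeline; this sketch decodes the highest-scoring answer span (the question and context strings are illustrative):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "anas-awadalla/bert-small-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "What architecture is the model based on?"        # illustrative
context = "bert-small is a compact BERT encoder released by prajjwal1."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span selection: take the argmax start and end token positions
# and decode everything in between.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```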
anas-awadalla/bert-small-pretrained-finetuned-squad
82a5c28c328a7ea82d957bf639e38222b753da1e
2022-01-27T06:09:41.000Z
[ "pytorch", "bert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
question-answering
false
anas-awadalla
null
anas-awadalla/bert-small-pretrained-finetuned-squad
5
null
transformers
16,389
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-small-pretrained-finetuned-squad
  results: []
---

# bert-small-pretrained-finetuned-squad

This model is a fine-tuned version of [anas-awadalla/bert-small-pretrained-on-squad](https://huggingface.co/anas-awadalla/bert-small-pretrained-on-squad) on the squad dataset.
It achieves the following results on the evaluation set:
- exact_match: 72.20435193945127
- f1: 81.31832229156294

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
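## Example usage

The exact-match and F1 figures above follow the standard SQuAD metric; a sketch of how such numbers are computed with the Datasets 1.17-era `load_metric` API (the single prediction/reference pair is made up for illustration):

```python
from datasets import load_metric  # deprecated in later Datasets releases in favor of `evaluate`

metric = load_metric("squad")

# One made-up example in the SQuAD prediction/reference format.
predictions = [{"id": "0", "prediction_text": "Denver Broncos"}]
references = [{"id": "0", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]

print(metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```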
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat
ba2d357fb43d754b2cb950907dbd81f35ca3a825
2021-09-20T15:46:27.000Z
[ "pytorch", "bert", "question-answering", "en", "dataset:squad_v2", "dataset:mit_restaurant", "transformers", "generated_from_trainer", "license:cc-by-4.0", "autotrain_compatible" ]
question-answering
false
andi611
null
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat
5
null
transformers
16,390
---
language:
- en
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
- mit_restaurant
model-index:
- name: bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: squad_v2
      type: squad_v2
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: mit_restaurant
      type: mit_restaurant
---

# bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat

This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the mit_restaurant datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
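## Example usage

Because the model was tuned on squad_v2-style data with negative (unanswerable) examples, the pipeline can be asked to return an empty answer when nothing in the context fits; the question and context below are illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat",
)

result = qa(
    question="What time does the kitchen close?",  # illustrative restaurant-domain query
    context="The bistro on 5th Street serves vegan ramen and is popular on weekends.",
    handle_impossible_answer=True,  # lets the model abstain, SQuAD v2 style
)
print(result)  # an empty answer string signals "no answer found in the context"
```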
anditya/xlm-roberta-base-finetuned-marc-en
25301a51800344df2e251ac73aed639f8f88a70e
2021-10-22T11:18:11.000Z
[ "pytorch", "tensorboard", "xlm-roberta", "text-classification", "dataset:amazon_reviews_multi", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
anditya
null
anditya/xlm-roberta-base-finetuned-marc-en
5
null
transformers
16,391
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
  results: []
---

# xlm-roberta-base-finetuned-marc-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8885
- Mae: 0.4390

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1089        | 1.0   | 235  | 0.9027          | 0.4756 |
| 0.9674        | 2.0   | 470  | 0.8885          | 0.4390 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
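## Example usage

The MAE metric above implies the model predicts a review's star rating; a minimal scoring sketch (the review text is illustrative, and the exact label names come from the model's config rather than the card):

```python
from transformers import pipeline

rate = pipeline("text-classification", model="anditya/xlm-roberta-base-finetuned-marc-en")

# Illustrative review; the returned label encodes a star rating per the model config.
print(rate("The product arrived broken and customer support never replied."))
```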
anel/autonlp-cml-412010597
ff4d72074c7ba21147ef783b134426acdb49fe5e
2021-12-13T03:11:37.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:anel/autonlp-data-cml", "transformers", "autonlp", "co2_eq_emissions" ]
text-classification
false
anel
null
anel/autonlp-cml-412010597
5
null
transformers
16,392
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- anel/autonlp-data-cml
co2_eq_emissions: 10.411685187181709
---

# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 412010597
- CO2 Emissions (in grams): 10.411685187181709

## Validation Metrics

- Loss: 0.12585781514644623
- Accuracy: 0.9475446428571429
- Precision: 0.9454660748256183
- Recall: 0.964424320827943
- AUC: 0.990229573862156
- F1: 0.9548511047070125

## Usage

You can use cURL to access this model:

```bash
curl -X POST -H "Authorization: Bearer YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{"inputs": "I love AutoNLP"}' \
     https://api-inference.huggingface.co/models/anel/autonlp-cml-412010597
```

Or the Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer; use_auth_token is kept from
# the original card, since AutoNLP repos may be private.
model = AutoModelForSequenceClassification.from_pretrained("anel/autonlp-cml-412010597", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anel/autonlp-cml-412010597", use_auth_token=True)

# Tokenize one input and run a forward pass; outputs.logits holds the class scores.
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
anirudh21/albert-large-v2-finetuned-mrpc
9b0636de195a7ae2612195a2406e3906e1a12aa3
2022-01-28T04:21:22.000Z
[ "pytorch", "tensorboard", "albert", "text-classification", "transformers" ]
text-classification
false
anirudh21
null
anirudh21/albert-large-v2-finetuned-mrpc
5
null
transformers
16,393
Entry not found
anirudh21/distilbert-base-uncased-finetuned-cola
677e73d4925d713d7eb70fe03aa7e3e2783c1055
2022-01-12T07:24:56.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
anirudh21
null
anirudh21/distilbert-base-uncased-finetuned-cola
5
null
transformers
16,394
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.5224154837835395
---

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8623
- Matthews Correlation: 0.5224

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5278        | 1.0   | 535  | 0.5223          | 0.4007               |
| 0.3515        | 2.0   | 1070 | 0.5150          | 0.4993               |
| 0.2391        | 3.0   | 1605 | 0.6471          | 0.5103               |
| 0.1841        | 4.0   | 2140 | 0.7640          | 0.5153               |
| 0.1312        | 5.0   | 2675 | 0.8623          | 0.5224               |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
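## Example usage

A minimal acceptability-check sketch; the input sentence is illustrative, and many CoLA fine-tunes expose generic `LABEL_0`/`LABEL_1` names unless the config maps them:

```python
from transformers import pipeline

cola = pipeline("text-classification", model="anirudh21/distilbert-base-uncased-finetuned-cola")

# Illustrative (ungrammatical) input; the score reflects the predicted acceptability class.
print(cola("The boys is walking to school."))
```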
anirudh21/distilbert-base-uncased-finetuned-sst2
e8ee7b975881293dc9d25b4e07561b972b9b2ac2
2022-01-12T14:17:06.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
anirudh21
null
anirudh21/distilbert-base-uncased-finetuned-sst2
5
null
transformers
16,395
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: sst2
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.908256880733945
---

# distilbert-base-uncased-finetuned-sst2

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4028
- Accuracy: 0.9083

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.188         | 1.0   | 4210  | 0.3127          | 0.9037   |
| 0.1299        | 2.0   | 8420  | 0.3887          | 0.9048   |
| 0.0845        | 3.0   | 12630 | 0.4028          | 0.9083   |
| 0.0691        | 4.0   | 16840 | 0.3924          | 0.9071   |
| 0.052         | 5.0   | 21050 | 0.5047          | 0.9002   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
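## Example usage

A minimal batched-sentiment sketch; the review strings are illustrative:

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="anirudh21/distilbert-base-uncased-finetuned-sst2")

reviews = ["a gripping, beautifully shot film", "a tedious mess of clichés"]  # illustrative
for review, pred in zip(reviews, sentiment(reviews)):
    print(f"{review!r} -> {pred['label']} ({pred['score']:.3f})")
```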
anirudh21/electra-base-discriminator-finetuned-wnli
8885ec7ae5e6598f1ef9e118a72dca768a7b0713
2022-01-25T04:41:03.000Z
[ "pytorch", "tensorboard", "electra", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
anirudh21
null
anirudh21/electra-base-discriminator-finetuned-wnli
5
null
transformers
16,396
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: electra-base-discriminator-finetuned-wnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: wnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5633802816901409
---

# electra-base-discriminator-finetuned-wnli

This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6893
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 40   | 0.6893          | 0.5634   |
| No log        | 2.0   | 80   | 0.7042          | 0.4225   |
| No log        | 3.0   | 120  | 0.7008          | 0.3803   |
| No log        | 4.0   | 160  | 0.6998          | 0.5634   |
| No log        | 5.0   | 200  | 0.7016          | 0.5352   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
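## Example usage

WNLI is a sentence-pair task, so the two sentences should be encoded together rather than concatenated by hand; a sketch (the pair is illustrative, and class meaning follows the label mapping stored in the model config):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "anirudh21/electra-base-discriminator-finetuned-wnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

sentence1 = "The trophy didn't fit in the suitcase because it was too big."  # illustrative
sentence2 = "The trophy was too big."

# Encoding the sentences as a pair inserts the separator token between them.
inputs = tokenizer(sentence1, sentence2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # interpret indices via model.config.id2label
```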
anthonymirand/haha_2019_primary_task
aa18314ec77bd9be2dd769c914553bfca3591fe3
2021-05-18T23:42:53.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
anthonymirand
null
anthonymirand/haha_2019_primary_task
5
null
transformers
16,397
Entry not found
anton-l/wav2vec2-base-finetuned-ks
4bfb6063bc982ffaadd75a7c2957e0ecb2912330
2021-10-21T11:04:30.000Z
[ "pytorch", "tensorboard", "wav2vec2", "audio-classification", "dataset:superb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
audio-classification
false
anton-l
null
anton-l/wav2vec2-base-finetuned-ks
5
null
transformers
16,398
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
  results: []
---

# wav2vec2-base-finetuned-ks

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0952
- Accuracy: 0.9823

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7908        | 1.0   | 399  | 0.6776          | 0.9009   |
| 0.3202        | 2.0   | 798  | 0.2061          | 0.9763   |
| 0.221         | 3.0   | 1197 | 0.1257          | 0.9785   |
| 0.1773        | 4.0   | 1596 | 0.0990          | 0.9813   |
| 0.1729        | 5.0   | 1995 | 0.0952          | 0.9823   |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
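## Example usage

A keyword-spotting sketch; pulling a clip from the `superb` "ks" split is an assumption for illustration (any 16 kHz mono array or wav path would do):

```python
from datasets import load_dataset
from transformers import pipeline

ks = pipeline("audio-classification", model="anton-l/wav2vec2-base-finetuned-ks")

# Assumed for illustration: the SUPERB keyword-spotting test split is accessible locally.
sample = load_dataset("superb", "ks", split="test[:1]")[0]
print(ks(sample["audio"]["array"]))  # the array is already 16 kHz mono in this dataset
```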
aoryabinin/aoryabinin_gpt_ai_dungeon_ru
80543ad509ac8b4d8b96cc4ffc804628527061d2
2021-06-02T17:08:12.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
aoryabinin
null
aoryabinin/aoryabinin_gpt_ai_dungeon_ru
5
null
transformers
16,399
Entry not found