modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M) | likes (float64, 0-712) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
phjhk/hklegal-xlm-r-base-t | ea62e879cda44004b4b335b9d9b15debcd8d4d09 | 2022-07-29T14:53:09.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"transformers",
"autotrain_compatible"
] | fill-mask | false | phjhk | null | phjhk/hklegal-xlm-r-base-t | 4 | null | transformers | 20,500 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model fine-tunes XLM-RoBERTa on Hong Kong legal text from HKLII (see below).
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP) or Countries (images):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in English
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
The Hong Kong Legal Information Institute [HKLII](https://www.hklii.hk/eng/) is a free, independent, non-profit document database providing the public with legal information relating to Hong Kong. We fine-tune XLM-RoBERTa on the HKLII dataset, which contains Hong Kong legal documents.
# Uses
The model is a pretrained, fine-tuned language model. It can be used for document classification and Named Entity Recognition (NER), especially in the legal domain.
```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("phjhk/hklegal-xlm-r-base-t")
>>> model = AutoModelForTokenClassification.from_pretrained("phjhk/hklegal-xlm-r-base-t")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Alya told Jasmine that Andrew could pay with cash.")
```
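Since the checkpoint is tagged `fill-mask`, it can also be queried directly for masked-token prediction. A minimal sketch follows; the `<mask>` sentence is illustrative and not taken from the original card:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline("fill-mask", model="phjhk/hklegal-xlm-r-base-t")
>>> unmasker("The defendant was ordered to pay <mask> to the plaintiff.")  # returns top candidate tokens with scores
```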
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
``` |
jhonparra18/distilbert-base-uncased-cv-studio_name-pooler | afd19b012fd770b2da65b0c941bd26e9ae5c693f | 2022-07-26T22:10:58.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jhonparra18 | null | jhonparra18/distilbert-base-uncased-cv-studio_name-pooler | 4 | null | transformers | 20,501 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-cv-studio_name-pooler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-cv-studio_name-pooler
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2209
- Accuracy: 0.6957
- F1 Micro: 0.6957
- F1 Macro: 0.4760
- Precision Micro: 0.6957
- Recall Micro: 0.6957
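The card does not include an inference example. Below is a minimal sketch using the `transformers` text-classification pipeline; the example CV-style sentence is an assumption, and the meaning of the predicted labels is not documented here (they come from the model's config):

```python
from transformers import pipeline

# Minimal usage sketch; label semantics are taken from the checkpoint's config.
classifier = pipeline(
    "text-classification",
    model="jhonparra18/distilbert-base-uncased-cv-studio_name-pooler",
)
print(classifier("Software engineer with 5 years of experience in Python and SQL."))
```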
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | F1 Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|:---------------:|:------------:|
| 1.6809 | 1.19 | 1000 | 1.4366 | 0.5676 | 0.5676 | 0.2308 | 0.5676 | 0.5676 |
| 1.0632 | 2.39 | 2000 | 1.1178 | 0.6925 | 0.6925 | 0.3878 | 0.6925 | 0.6925 |
| 0.7931 | 3.58 | 3000 | 1.0779 | 0.7072 | 0.7072 | 0.4395 | 0.7072 | 0.7072 |
| 0.6308 | 4.77 | 4000 | 1.0938 | 0.7180 | 0.7180 | 0.4593 | 0.7180 | 0.7180 |
| 0.523 | 5.97 | 5000 | 1.1659 | 0.7192 | 0.7192 | 0.4622 | 0.7192 | 0.7192 |
| 0.3739 | 7.16 | 6000 | 1.2831 | 0.7132 | 0.7132 | 0.4559 | 0.7132 | 0.7132 |
| 0.2687 | 8.35 | 7000 | 1.4216 | 0.7160 | 0.7160 | 0.4662 | 0.7160 | 0.7160 |
| 0.1893 | 9.55 | 8000 | 1.5747 | 0.7096 | 0.7096 | 0.4712 | 0.7096 | 0.7096 |
| 0.1375 | 10.74 | 9000 | 1.7016 | 0.7045 | 0.7045 | 0.4801 | 0.7045 | 0.7045 |
| 0.123 | 11.93 | 10000 | 1.8164 | 0.7001 | 0.7001 | 0.4792 | 0.7001 | 0.7001 |
| 0.0952 | 13.13 | 11000 | 1.9634 | 0.6949 | 0.6949 | 0.4772 | 0.6949 | 0.6949 |
| 0.071 | 14.32 | 12000 | 2.0327 | 0.6981 | 0.6981 | 0.4781 | 0.6981 | 0.6981 |
| 0.0494 | 15.51 | 13000 | 2.0931 | 0.6989 | 0.6989 | 0.4814 | 0.6989 | 0.6989 |
| 0.0417 | 16.71 | 14000 | 2.1644 | 0.6965 | 0.6965 | 0.4771 | 0.6965 | 0.6965 |
| 0.0444 | 17.9 | 15000 | 2.2030 | 0.6953 | 0.6953 | 0.4756 | 0.6953 | 0.6953 |
| 0.0368 | 19.09 | 16000 | 2.2209 | 0.6957 | 0.6957 | 0.4760 | 0.6957 | 0.6957 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.1+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
AustinCarthy/distilbert-base-uncased-finetuned-emotion | 646d2254861c94b438f9d343964ac8ca6b028d91 | 2022-07-26T21:35:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | AustinCarthy | null | AustinCarthy/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 20,502 | Entry not found |
jhonparra18/roberta-base-cv-studio_name-pooler | ff93646754f412095ccfdedc112de3db680a6cc6 | 2022-07-27T00:19:33.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | jhonparra18 | null | jhonparra18/roberta-base-cv-studio_name-pooler | 4 | null | transformers | 20,503 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-cv-studio_name-pooler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-cv-studio_name-pooler
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2635
- Accuracy: 0.6997
- F1 Micro: 0.6997
- F1 Macro: 0.4350
- Precision Micro: 0.6997
- Recall Micro: 0.6997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | F1 Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|:---------------:|:------------:|
| 2.3756 | 1.19 | 1000 | 2.3336 | 0.2379 | 0.2379 | 0.0183 | 0.2379 | 0.2379 |
| 1.9046 | 2.39 | 2000 | 1.7667 | 0.4240 | 0.4240 | 0.1103 | 0.4240 | 0.4240 |
| 1.4765 | 3.58 | 3000 | 1.4257 | 0.5764 | 0.5764 | 0.2429 | 0.5764 | 0.5764 |
| 1.282 | 4.77 | 4000 | 1.2953 | 0.6412 | 0.6412 | 0.3192 | 0.6412 | 0.6412 |
| 1.1767 | 5.97 | 5000 | 1.2349 | 0.6551 | 0.6551 | 0.3443 | 0.6551 | 0.6551 |
| 1.0694 | 7.16 | 6000 | 1.1885 | 0.6746 | 0.6746 | 0.3730 | 0.6746 | 0.6746 |
| 0.9443 | 8.35 | 7000 | 1.1674 | 0.6822 | 0.6822 | 0.3921 | 0.6822 | 0.6822 |
| 0.9065 | 9.55 | 8000 | 1.1788 | 0.6854 | 0.6854 | 0.4026 | 0.6854 | 0.6854 |
| 0.845 | 10.74 | 9000 | 1.1722 | 0.6929 | 0.6929 | 0.4174 | 0.6929 | 0.6929 |
| 0.828 | 11.93 | 10000 | 1.1918 | 0.6925 | 0.6925 | 0.4167 | 0.6925 | 0.6925 |
| 0.769 | 13.13 | 11000 | 1.2059 | 0.6953 | 0.6953 | 0.4233 | 0.6953 | 0.6953 |
| 0.7482 | 14.32 | 12000 | 1.2178 | 0.6965 | 0.6965 | 0.4260 | 0.6965 | 0.6965 |
| 0.6897 | 15.51 | 13000 | 1.2290 | 0.7013 | 0.7013 | 0.4338 | 0.7013 | 0.7013 |
| 0.6675 | 16.71 | 14000 | 1.2460 | 0.7013 | 0.7013 | 0.4369 | 0.7013 | 0.7013 |
| 0.6454 | 17.9 | 15000 | 1.2498 | 0.6969 | 0.6969 | 0.4348 | 0.6969 | 0.6969 |
| 0.6279 | 19.09 | 16000 | 1.2635 | 0.6997 | 0.6997 | 0.4350 | 0.6997 | 0.6997 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.1+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
helliun/multapro-beta-1 | 8d959f81b825de786d62307902d74562308265fd | 2022-07-27T03:11:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | helliun | null | helliun/multapro-beta-1 | 4 | null | transformers | 20,504 | Entry not found |
Evelyn18/roberta-base-spanish-squades-becasIncentivos2 | 62e5527ae21f61733d377aba7aae0645f3ac3c6c | 2022-07-27T04:02:27.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-becasIncentivos2 | 4 | null | transformers | 20,505 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becasIncentivos2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becasIncentivos2
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7033
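The card does not include an inference example. A minimal question-answering sketch is shown below; the Spanish question/context pair is illustrative and not taken from the becasv2 dataset:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/roberta-base-spanish-squades-becasIncentivos2",
)
result = qa(
    question="¿Quién puede solicitar la beca?",
    context="Los estudiantes matriculados a tiempo completo pueden solicitar la beca de incentivos.",
)
print(result["answer"], result["score"])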
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 1.6939 |
| No log | 2.0 | 14 | 1.7033 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Evelyn18/roberta-base-spanish-squades-becasIncentivos3 | 0437f65d1cac3ba827217453c3a8b94b6bf34af9 | 2022-07-27T04:22:25.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-becasIncentivos3 | 4 | null | transformers | 20,506 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becasIncentivos3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becasIncentivos3
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 1.7346 |
| No log | 2.0 | 18 | 1.7701 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/interiordesign | f478f3ebe2db06856e3f114d2312d5f6208ac1b9 | 2022-07-27T15:30:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/interiordesign | 4 | null | transformers | 20,507 | ---
language: en
thumbnail: http://www.huggingtweets.com/interiordesign/1658935819881/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1544346507578589184/x9URB7Yy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Interior Design</div>
<div style="text-align: center; font-size: 14px;">@interiordesign</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Interior Design.
| Data | Interior Design |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 97 |
| Short tweets | 2 |
| Tweets kept | 3151 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/vl5m9w7s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @interiordesign's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36lgkxh5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36lgkxh5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/interiordesign')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cjdentra/distilbert-base-uncased-finetuned-emotion | 852cc79bd0e09a62095e7be18c2411fd0ceb45a2 | 2022-07-27T20:38:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | cjdentra | null | cjdentra/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 20,508 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jenwvwmabskvwh/DialoGPT-small-josh445 | 4528ae0c008eb3ff2de237b86319525670787e4c | 2022-07-28T00:49:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Jenwvwmabskvwh | null | Jenwvwmabskvwh/DialoGPT-small-josh445 | 4 | null | transformers | 20,509 | ---
tags:
- conversational
---
# Josh DialoGPT Model |
mesolitica/t5-small-finetuned-noisy-en-ms | b9043ceec5433824b7f0480bab02964167cd30bf | 2022-07-28T18:49:38.000Z | [
"pytorch",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mesolitica | null | mesolitica/t5-small-finetuned-noisy-en-ms | 4 | null | transformers | 20,510 | ---
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-finetuned-noisy-en-ms
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-noisy-en-ms
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.6.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Lvxue/finetuned-mt5-small | e787c219f81dfc25d2db6b00ac4d7984c792b5a5 | 2022-07-29T11:08:43.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"en",
"ro",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Lvxue | null | Lvxue/finetuned-mt5-small | 4 | null | transformers | 20,511 | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: finetuned-mt5-small
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 23.6759
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mt5-small
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 23.6759
- Gen Len: 43.6993
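A minimal inference sketch for the fine-tuned checkpoint follows. The translation direction (assumed here to be English→Romanian) and the absence of a task prefix are assumptions based on the wmt16 ro-en configuration named above:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Lvxue/finetuned-mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("Lvxue/finetuned-mt5-small")

# Encode an English sentence and generate the (assumed) Romanian translation.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```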
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
oMateos2020/pegasus-newsroom-cnn1_50k | 203ad0e9a757e4c2709439dc041b287639639912 | 2022-07-29T04:30:35.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | oMateos2020 | null | oMateos2020/pegasus-newsroom-cnn1_50k | 4 | null | transformers | 20,512 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-newsroom-cnn1_50k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-cnn1_50k
This model is a fine-tuned version of [oMateos2020/pegasus-newsroom-cnn1_50k](https://huggingface.co/oMateos2020/pegasus-newsroom-cnn1_50k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1267
- Rouge1: 38.0081
- Rouge2: 16.5536
- Rougel: 26.4916
- Rougelsum: 35.1349
- Gen Len: 59.4912
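The card gives no inference example. A minimal summarization sketch is shown below; the input placeholder and the generation length settings are assumptions:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="oMateos2020/pegasus-newsroom-cnn1_50k",
)
article = "..."  # replace with a CNN/DailyMail-style news article
print(summarizer(article, max_length=60, min_length=20)[0]["summary_text"])
```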
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.144 | 0.26 | 100 | 3.0323 | 38.3168 | 16.7528 | 26.2646 | 35.2447 | 66.2372 |
| 3.0556 | 0.51 | 200 | 3.0351 | 38.39 | 16.8027 | 26.3412 | 35.37 | 67.4676 |
| 3.0701 | 0.77 | 300 | 3.0345 | 38.5742 | 16.922 | 26.3568 | 35.51 | 68.662 |
| 3.1679 | 1.03 | 400 | 3.0321 | 38.5319 | 16.8049 | 26.4933 | 35.4775 | 65.976 |
| 3.1041 | 1.28 | 500 | 3.0246 | 38.1381 | 16.63 | 26.2484 | 35.0999 | 64.6896 |
| 3.0352 | 1.54 | 600 | 3.0206 | 38.9063 | 17.0281 | 27.0288 | 35.9175 | 59.0668 |
| 3.0894 | 1.79 | 700 | 3.0251 | 38.4461 | 16.7732 | 26.4394 | 35.4807 | 63.2792 |
| 3.0529 | 2.05 | 800 | 3.0400 | 38.5088 | 16.8921 | 26.5526 | 35.5236 | 64.294 |
| 3.0002 | 2.31 | 900 | 3.0394 | 38.6899 | 16.8703 | 26.6771 | 35.6207 | 62.8004 |
| 3.0167 | 2.56 | 1000 | 3.0394 | 38.3532 | 16.6176 | 26.5433 | 35.3282 | 61.63 |
| 3.0168 | 2.82 | 1100 | 3.0421 | 38.7613 | 17.0107 | 26.8424 | 35.7508 | 62.67 |
| 3.0412 | 3.08 | 1200 | 3.0491 | 38.6132 | 16.8046 | 26.61 | 35.6002 | 61.7924 |
| 3.1273 | 3.33 | 1300 | 3.0823 | 38.5498 | 16.795 | 26.5569 | 35.613 | 60.6872 |
| 3.0634 | 3.59 | 1400 | 3.1010 | 38.0927 | 16.4367 | 26.2315 | 35.1311 | 59.252 |
| 3.097 | 3.84 | 1500 | 3.1147 | 37.7644 | 16.3156 | 26.2674 | 34.8315 | 59.7592 |
| 3.1287 | 4.1 | 1600 | 3.1267 | 38.0081 | 16.5536 | 26.4916 | 35.1349 | 59.4912 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
HMHMlee/biobert-base-cased-v1.2-finetuned-ner | 54c9cd95511ed81bd647528dcbabd2e7dc925c17 | 2022-07-28T07:15:45.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | HMHMlee | null | HMHMlee/biobert-base-cased-v1.2-finetuned-ner | 4 | null | transformers | 20,513 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1495
- Precision: 0.8561
- Recall: 0.9063
- F1: 0.8805
- Accuracy: 0.9585
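No usage example is included in the card. A minimal token-classification sketch follows; the biomedical sentence is illustrative and the entity label set is not documented here:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="HMHMlee/biobert-base-cased-v1.2-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("The patient was treated with metformin for type 2 diabetes."))
```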
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.043 | 1.0 | 201 | 0.1611 | 0.8050 | 0.8799 | 0.8408 | 0.9470 |
| 0.175 | 2.0 | 402 | 0.1442 | 0.8244 | 0.8869 | 0.8545 | 0.9530 |
| 0.1655 | 3.0 | 603 | 0.1439 | 0.8379 | 0.9030 | 0.8692 | 0.9563 |
| 0.0797 | 4.0 | 804 | 0.1443 | 0.8520 | 0.8938 | 0.8724 | 0.9580 |
| 0.026 | 5.0 | 1005 | 0.1495 | 0.8561 | 0.9063 | 0.8805 | 0.9585 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
victorcosta/bert-finetuned-ner-accelerate | ed5f284f836da5c468bf1c9343888675c3c3b642 | 2022-07-28T11:40:45.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | victorcosta | null | victorcosta/bert-finetuned-ner-accelerate | 4 | null | transformers | 20,514 | Entry not found |
asparius/even-mixed | 5ed12799aa123016c5cfd80e9d6a5809b27712ea | 2022-07-28T14:20:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | asparius | null | asparius/even-mixed | 4 | null | transformers | 20,515 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: even-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# even-mixed
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2145
- Accuracy: 0.9534
- F1: 0.9534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
espejelomar/vit-base-beans | 5989392ce9e863268aebcf60a8d9f724c1ea09c0 | 2022-07-28T17:23:53.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | espejelomar | null | espejelomar/vit-base-beans | 4 | null | transformers | 20,516 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0637
- Accuracy: 0.9850
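A minimal inference sketch for the fine-tuned classifier is shown below; pulling a sample image from the `beans` dataset is an illustrative assumption:

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("image-classification", model="espejelomar/vit-base-beans")

# Classify one validation image from the beans dataset.
image = load_dataset("beans", split="validation")[0]["image"]
print(classifier(image))  # list of {'label': ..., 'score': ...} predictions
```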
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1387 | 3.85 | 500 | 0.0637 | 0.9850 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
carblacac/xlm-roberta-base-finetuned-panx-de | d0d717585dc7792dcfbdf96f72754c189e8cf39a | 2022-07-28T18:47:01.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | carblacac | null | carblacac/xlm-roberta-base-finetuned-panx-de | 4 | null | transformers | 20,517 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yanaiela/roberta-base-epoch_0 | 5ed4a3cedd6be79ee0866f202b188b114b3508f5 | 2022-07-29T22:38:30.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_0",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_0 | 4 | null | transformers | 20,518 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_0
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 0
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_0.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_0', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
    Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
liujxing/distilbert-base-uncased-finetuned-emotion | b22ae65a88b6229a45d8793f1f7c1411bfed6fbd | 2022-07-28T20:51:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | liujxing | null | liujxing/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 20,519 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9355
- name: F1
type: f1
value: 0.93589910332286
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1484
- Accuracy: 0.9355
- F1: 0.9359
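Below is a minimal sketch of using the checkpoint without the pipeline helper, mapping logits to the label names stored in the model config; the example sentence is illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "liujxing/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't wait to see you again!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
# Report the most likely emotion label and its probability.
print(model.config.id2label[int(probs.argmax())], float(probs.max()))
```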
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1386 | 1.0 | 250 | 0.1705 | 0.9355 | 0.9353 |
| 0.0928 | 2.0 | 500 | 0.1484 | 0.9355 | 0.9359 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-large-finetuned-DAGPap22-synthetic-all-overfit | e33cf3cb1612960a5cbc9a1cb2c3c91c2d6a0a1e | 2022-07-30T09:31:40.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers"
] | text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-finetuned-DAGPap22-synthetic-all-overfit | 4 | null | transformers | 20,520 | Entry not found |
affahrizain/roberta-base-finetuned-jigsaw-toxic | 14694879319f6b36fa788d6cad244984b890a265 | 2022-07-29T07:45:51.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | affahrizain | null | affahrizain/roberta-base-finetuned-jigsaw-toxic | 4 | null | transformers | 20,521 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-jigsaw-toxic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-jigsaw-toxic
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0412
- F1: 0.7908
- Roc Auc: 0.9048
- Accuracy: 0.9257
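The Jigsaw toxic-comment task is usually multi-label, so the sketch below applies a sigmoid over the logits rather than a softmax; the multi-label assumption, the 0.5 threshold, and the example sentence are not stated in the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "affahrizain/roberta-base-finetuned-jigsaw-toxic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("You are a wonderful person.", return_tensors="pt")
with torch.no_grad():
    scores = torch.sigmoid(model(**inputs).logits)[0]
# Keep labels whose independent probability exceeds the (assumed) 0.5 threshold.
labels = [model.config.id2label[i] for i, s in enumerate(scores) if s > 0.5]
print(labels or ["non-toxic"])
```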
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.0524 | 1.0 | 2774 | 0.0432 | 0.7805 | 0.8940 | 0.9254 |
| 0.0348 | 2.0 | 5548 | 0.0412 | 0.7908 | 0.9048 | 0.9257 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mariolinml/roberta_large-chunking_0728_v2 | 328a8d34b9278340fff2caaa06117130c60ea62d | 2022-07-29T05:10:42.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | mariolinml | null | mariolinml/roberta_large-chunking_0728_v2 | 4 | null | transformers | 20,522 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta_large-chunking_0728_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_large-chunking_0728_v2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5270
- Precision: 0.6228
- Recall: 0.6467
- F1: 0.6345
- Accuracy: 0.8153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 125 | 0.5667 | 0.4931 | 0.5415 | 0.5162 | 0.7397 |
| No log | 2.0 | 250 | 0.4839 | 0.5484 | 0.5894 | 0.5682 | 0.7874 |
| No log | 3.0 | 375 | 0.4822 | 0.5997 | 0.6341 | 0.6164 | 0.8085 |
| 0.4673 | 4.0 | 500 | 0.5117 | 0.6023 | 0.6373 | 0.6193 | 0.8120 |
| 0.4673 | 5.0 | 625 | 0.5270 | 0.6228 | 0.6467 | 0.6345 | 0.8153 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
commanderstrife/distilBERT_bio_pv_superset | ac587c27354431c837fd439d3ab67e7a1a72ef22 | 2022-07-29T08:36:40.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | commanderstrife | null | commanderstrife/distilBERT_bio_pv_superset | 4 | null | transformers | 20,523 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_bio_pv_superset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_bio_pv_superset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2328
- Precision: 0.5462
- Recall: 0.5325
- F1: 0.5393
- Accuracy: 0.9495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0964 | 1.0 | 5467 | 0.1593 | 0.4625 | 0.3682 | 0.4100 | 0.9416 |
| 0.1918 | 2.0 | 10934 | 0.1541 | 0.4796 | 0.4658 | 0.4726 | 0.9436 |
| 0.0394 | 3.0 | 16401 | 0.1508 | 0.5349 | 0.4744 | 0.5028 | 0.9482 |
| 0.1207 | 4.0 | 21868 | 0.1615 | 0.5422 | 0.4953 | 0.5177 | 0.9490 |
| 0.0221 | 5.0 | 27335 | 0.1827 | 0.5377 | 0.5018 | 0.5191 | 0.9487 |
| 0.0629 | 6.0 | 32802 | 0.1874 | 0.5479 | 0.5130 | 0.5299 | 0.9493 |
| 0.0173 | 7.0 | 38269 | 0.2025 | 0.5388 | 0.5323 | 0.5356 | 0.9488 |
| 0.2603 | 8.0 | 43736 | 0.2148 | 0.5437 | 0.5397 | 0.5417 | 0.9493 |
| 0.0378 | 9.0 | 49203 | 0.2323 | 0.5430 | 0.5194 | 0.5310 | 0.9489 |
| 0.031 | 10.0 | 54670 | 0.2328 | 0.5462 | 0.5325 | 0.5393 | 0.9495 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SummerChiam/pond_image_classification_5 | 3c8fc7d61ec95fda512d0acb92fbf77b717778ac | 2022-07-29T07:41:30.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | SummerChiam | null | SummerChiam/pond_image_classification_5 | 4 | null | transformers | 20,524 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_5
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9477040767669678
---
# pond_image_classification_5
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain
 |
HCKLab/BiBert-linear | 62d87d230aec1254a1c6d9320e12f8acedcee5ef | 2022-07-29T08:59:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | HCKLab | null | HCKLab/BiBert-linear | 4 | null | transformers | 20,525 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BiBert-linear
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiBert-linear
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6267
- Mse: 1.6267
- Mae: 0.9824
- R2: 0.3044
- Accuracy: 0.3076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.9353 | 1.0 | 625 | 0.7304 | 0.7304 | 0.6695 | 0.3590 | 0.466 |
| 0.6766 | 2.0 | 1250 | 0.7746 | 0.7746 | 0.6779 | 0.3202 | 0.472 |
| 0.5886 | 3.0 | 1875 | 0.7745 | 0.7745 | 0.6712 | 0.3202 | 0.478 |
| 0.377 | 4.0 | 2500 | 0.7687 | 0.7687 | 0.6700 | 0.3254 | 0.472 |
| 0.3075 | 5.0 | 3125 | 0.7973 | 0.7973 | 0.6836 | 0.3003 | 0.467 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
IlyaGusev/roberta-base-informal-tagger | 83664ce01bc1c950956c982ec1ef398cbac2834e | 2022-07-29T13:23:24.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | IlyaGusev | null | IlyaGusev/roberta-base-informal-tagger | 4 | null | transformers | 20,526 | ---
license: apache-2.0
---
|
catasaurus/bart_paraphraser | 407aa958d2804273a14aad1d4b711b14ddc1611d | 2022-07-29T21:17:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | catasaurus | null | catasaurus/bart_paraphraser | 4 | null | transformers | 20,527 | ---
license: apache-2.0
---
|
1757968399/tinybert_4_312_1200 | a096076c46d4781f03ec1d6c1ec2c37e88091648 | 2020-07-27T07:25:03.000Z | [
"pytorch",
"transformers"
] | null | false | 1757968399 | null | 1757968399/tinybert_4_312_1200 | 3 | null | transformers | 20,528 | Entry not found |
ATGdev/DialoGPT-small-harrypotter | 2657935d4bb1c929ea53121b50b35786e10e610c | 2021-10-23T04:38:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ATGdev | null | ATGdev/DialoGPT-small-harrypotter | 3 | null | transformers | 20,529 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
AVeryRealHuman/DialoGPT-small-TonyStark | 58f3a7114d51dfc283d71221fff75563d8eb7444 | 2021-10-08T08:27:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AVeryRealHuman | null | AVeryRealHuman/DialoGPT-small-TonyStark | 3 | null | transformers | 20,530 | ---
tags:
- conversational
---
# Tony Stark DialoGPT model |
Aero/Tsubomi-Haruno | 4addf3eff55db676e4d299df43ffed770d60bf4d | 2021-06-14T22:21:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | Aero | null | Aero/Tsubomi-Haruno | 3 | null | transformers | 20,531 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Aero/Tsubomi-Haruno")
model = AutoModelForCausalLM.from_pretrained("Aero/Tsubomi-Haruno")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("Tsubomi: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_10 | 4e6166bbb295df51cfb2103d78d62cc591499c6e | 2021-08-04T21:27:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | AethiQs-Max | null | AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_10 | 3 | null | transformers | 20,532 | Entry not found |
Akashpb13/Kabyle_xlsr | 2f17ea3f466eada406e3c5e6d1cedce59bf71162 | 2022-03-24T11:54:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"kab",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sw",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Akashpb13 | null | Akashpb13/Kabyle_xlsr | 3 | null | transformers | 20,533 | ---
language:
- kab
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- sw
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Akashpb13/Kabyle_xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: kab
metrics:
- name: Test WER
type: wer
value: 0.3188425282720088
- name: Test CER
type: cer
value: 0.09443079928558358
---
# Akashpb13/Kabyle_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - kab dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets):
- Loss: 0.159032
- Wer: 0.187934
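The card provides only an evaluation command (see below). A minimal transcription sketch with the standard Wav2Vec2 CTC interface follows; the audio file path, the use of librosa for loading, and the 16 kHz mono input are assumptions:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Akashpb13/Kabyle_xlsr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a (hypothetical) Kabyle recording as 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16000, mono=True)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```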
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common voice Kabyle train.tsv. Only 50,000 records were sampled randomly and trained due to huge size of dataset.
Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0
## Training procedure
For creating the training dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 4
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 500 | 7.199800 | 3.130564 | 1.000000 |
| 1000 | 1.570200 | 0.718097 | 0.734682 |
| 1500 | 0.850800 | 0.524227 | 0.640532 |
| 2000 | 0.712200 | 0.468694 | 0.603454 |
| 2500 | 0.651200 | 0.413833 | 0.573025 |
| 3000 | 0.603100 | 0.403680 | 0.552847 |
| 3500 | 0.553300 | 0.372638 | 0.541719 |
| 4000 | 0.537200 | 0.353759 | 0.531191 |
| 4500 | 0.506300 | 0.359109 | 0.519601 |
| 5000 | 0.479600 | 0.343937 | 0.511336 |
| 5500 | 0.479800 | 0.338214 | 0.503948 |
| 6000 | 0.449500 | 0.332600 | 0.495221 |
| 6500 | 0.439200 | 0.323905 | 0.492635 |
| 7000 | 0.434900 | 0.310417 | 0.484555 |
| 7500 | 0.403200 | 0.311247 | 0.483262 |
| 8000 | 0.401500 | 0.295637 | 0.476566 |
| 8500 | 0.397000 | 0.301321 | 0.471672 |
| 9000 | 0.371600 | 0.295639 | 0.468440 |
| 9500 | 0.370700 | 0.294039 | 0.468902 |
| 10000 | 0.364900 | 0.291195 | 0.468440 |
| 10500 | 0.348300 | 0.284898 | 0.461098 |
| 11000 | 0.350100 | 0.281764 | 0.459805 |
| 11500 | 0.336900 | 0.291022 | 0.461606 |
| 12000 | 0.330700 | 0.280467 | 0.455234 |
| 12500 | 0.322500 | 0.271714 | 0.452694 |
| 13000 | 0.307400 | 0.289519 | 0.455465 |
| 13500 | 0.309300 | 0.281922 | 0.451217 |
| 14000 | 0.304800 | 0.271514 | 0.452186 |
| 14500 | 0.288100 | 0.286801 | 0.446830 |
| 15000 | 0.293200 | 0.276309 | 0.445399 |
| 15500 | 0.289800 | 0.287188 | 0.446230 |
| 16000 | 0.274800 | 0.286406 | 0.441243 |
| 16500 | 0.271700 | 0.284754 | 0.441520 |
| 17000 | 0.262500 | 0.275431 | 0.442167 |
| 17500 | 0.255500 | 0.276575 | 0.439858 |
| 18000 | 0.260200 | 0.269911 | 0.435425 |
| 18500 | 0.250600 | 0.270519 | 0.434686 |
| 19000 | 0.243300 | 0.267655 | 0.437826 |
| 19500 | 0.240600 | 0.277109 | 0.431731 |
| 20000 | 0.237200 | 0.266622 | 0.433994 |
| 20500 | 0.231300 | 0.273015 | 0.428868 |
| 21000 | 0.227200 | 0.263024 | 0.430161 |
| 21500 | 0.220400 | 0.272880 | 0.429607 |
| 22000 | 0.218600 | 0.272340 | 0.426883 |
| 22500 | 0.213100 | 0.277066 | 0.428407 |
| 23000 | 0.205000 | 0.278404 | 0.424020 |
| 23500 | 0.200900 | 0.270877 | 0.418987 |
| 24000 | 0.199000 | 0.289120 | 0.425821 |
| 24500 | 0.196100 | 0.275831 | 0.424066 |
| 25000 | 0.191100 | 0.282822 | 0.421850 |
| 25500 | 0.190100 | 0.275820 | 0.418248 |
| 26000 | 0.178800 | 0.279208 | 0.419125 |
| 26500 | 0.183100 | 0.271464 | 0.419218 |
| 27000 | 0.177400 | 0.280869 | 0.419680 |
| 27500 | 0.171800 | 0.279593 | 0.414924 |
| 28000 | 0.172900 | 0.276949 | 0.417648 |
| 28500 | 0.164900 | 0.283491 | 0.417786 |
| 29000 | 0.164800 | 0.283122 | 0.416078 |
| 29500 | 0.165500 | 0.281969 | 0.415801 |
| 30000 | 0.163800 | 0.283319 | 0.412753 |
| 30500 | 0.153500 | 0.285702 | 0.414046 |
| 31000 | 0.156500 | 0.285041 | 0.412615 |
| 31500 | 0.150900 | 0.284336 | 0.413723 |
| 32000 | 0.151800 | 0.285922 | 0.412292 |
| 32500 | 0.149200 | 0.289461 | 0.412153 |
| 33000 | 0.145400 | 0.291322 | 0.409567 |
| 33500 | 0.145600 | 0.294361 | 0.409614 |
| 34000 | 0.144200 | 0.290686 | 0.409059 |
| 34500 | 0.143400 | 0.289474 | 0.409844 |
| 35000 | 0.143500 | 0.290340 | 0.408367 |
| 35500 | 0.143200 | 0.289581 | 0.407351 |
| 36000 | 0.138400 | 0.292782 | 0.408736 |
| 36500 | 0.137900 | 0.289108 | 0.408044 |
| 37000 | 0.138200 | 0.292127 | 0.407166 |
| 37500 | 0.134600 | 0.291797 | 0.408413 |
| 38000 | 0.139800 | 0.290056 | 0.408090 |
| 38500 | 0.136500 | 0.291198 | 0.408090 |
| 39000 | 0.137700 | 0.289696 | 0.408044 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Kabyle_xlsr --dataset mozilla-foundation/common_voice_8_0 --config kab --split test
```
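For plain inference rather than scripted evaluation, the checkpoint referenced in the command above can presumably be loaded with the standard ASR pipeline; a minimal sketch, assuming a 16 kHz mono audio file (the path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Akashpb13/Kabyle_xlsr")
print(asr("sample_kabyle.wav")["text"])
```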
|
AkshaySg/langid | a7f26a4d95b41d12803f508fe61cee92d5b691b6 | 2021-11-04T12:38:18.000Z | [
"multilingual",
"dataset:VoxLingua107",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"license:apache-2.0"
] | audio-classification | false | AkshaySg | null | AkshaySg/langid | 3 | 1 | speechbrain | 20,534 | ---
language: multilingual
thumbnail:
tags:
- audio-classification
- speechbrain
- embeddings
- Language
- Identification
- pytorch
- ECAPA-TDNN
- TDNN
- VoxLingua107
license: "apache-2.0"
datasets:
- VoxLingua107
metrics:
- Accuracy
widget:
- example_title: English Sample
src: https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac
---
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[0.3210, 0.3751, 0.3680, 0.3939, 0.4026, 0.3644, 0.3689, 0.3597, 0.3508,
0.3666, 0.3895, 0.3978, 0.3848, 0.3957, 0.3949, 0.3586, 0.4360, 0.3997,
0.4106, 0.3886, 0.4177, 0.3870, 0.3764, 0.3763, 0.3672, 0.4000, 0.4256,
0.4091, 0.3563, 0.3695, 0.3320, 0.3838, 0.3850, 0.3867, 0.3878, 0.3944,
0.3924, 0.4063, 0.3803, 0.3830, 0.2996, 0.4187, 0.3976, 0.3651, 0.3950,
0.3744, 0.4295, 0.3807, 0.3613, 0.4710, 0.3530, 0.4156, 0.3651, 0.3777,
0.3813, 0.6063, 0.3708, 0.3886, 0.3766, 0.4023, 0.3785, 0.3612, 0.4193,
0.3720, 0.4406, 0.3243, 0.3866, 0.3866, 0.4104, 0.4294, 0.4175, 0.3364,
0.3595, 0.3443, 0.3565, 0.3776, 0.3985, 0.3778, 0.2382, 0.4115, 0.4017,
0.4070, 0.3266, 0.3648, 0.3888, 0.3907, 0.3755, 0.3631, 0.4460, 0.3464,
0.3898, 0.3661, 0.3883, 0.3772, 0.9289, 0.3687, 0.4298, 0.4211, 0.3838,
0.3521, 0.3515, 0.3465, 0.4772, 0.4043, 0.3844, 0.3973, 0.4343]]), tensor([0.9289]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as cosine scores between
# the languages and the given utterance (i.e., the larger the better)
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
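#### Building a custom language ID model on the embeddings
To follow the second intended use above (creating a dedicated language ID model on your own data), the utterance embeddings can be fed to any lightweight classifier. A minimal sketch, assuming you already have labelled audio files; the file names and the scikit-learn classifier are illustrative and not part of the original recipe:
```python
from sklearn.linear_model import LogisticRegression
from speechbrain.pretrained import EncoderClassifier

language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp")

# Hypothetical labelled data: (audio file, language label) pairs.
files = [("clip_et_01.wav", "et"), ("clip_fi_01.wav", "fi")]

embeddings, labels = [], []
for path, lang in files:
    signal = language_id.load_audio(path)
    emb = language_id.encode_batch(signal)          # shape: [1, 1, 256]
    embeddings.append(emb.squeeze().cpu().numpy())  # 256-dim utterance vector
    labels.append(lang)

clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
print(clf.predict([embeddings[0]]))
```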
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Its accuracy on smaller languages is probably quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
|
AlekseyKorshuk/comedy-scripts | a3d83cd48b9651ae224485387c09b32f1baa8277 | 2022-02-11T14:58:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/comedy-scripts | 3 | null | transformers | 20,535 | Entry not found |
AlekseyKorshuk/horror-scripts | d81e1c0202fead6525d986f41c86e3802cf42027 | 2022-02-11T16:31:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/horror-scripts | 3 | null | transformers | 20,536 | Entry not found |
AlexN/xls-r-300m-pt | f787452db83cfb074e70189ce068f493ec970692 | 2022-03-24T11:56:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AlexN | null | AlexN/xls-r-300m-pt | 3 | null | transformers | 20,537 | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-300m-pt
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0 fr
type: mozilla-foundation/common_voice_8_0
args: fr
metrics:
- name: Test WER
type: wer
value: 19.361
- name: Test CER
type: cer
value: 5.533
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: fr
metrics:
- name: Validation WER
type: wer
value: 47.812
- name: Validation CER
type: cer
value: 18.805
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: pt
metrics:
- name: Test WER
type: wer
value: 19.36
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 48.01
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 49.21
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2290
- Wer: 0.2382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0952 | 0.64 | 500 | 3.0982 | 1.0 |
| 1.7975 | 1.29 | 1000 | 0.7887 | 0.5651 |
| 1.4138 | 1.93 | 1500 | 0.5238 | 0.4389 |
| 1.344 | 2.57 | 2000 | 0.4775 | 0.4318 |
| 1.2737 | 3.21 | 2500 | 0.4648 | 0.4075 |
| 1.2554 | 3.86 | 3000 | 0.4069 | 0.3678 |
| 1.1996 | 4.5 | 3500 | 0.3914 | 0.3668 |
| 1.1427 | 5.14 | 4000 | 0.3694 | 0.3572 |
| 1.1372 | 5.78 | 4500 | 0.3568 | 0.3501 |
| 1.0831 | 6.43 | 5000 | 0.3331 | 0.3253 |
| 1.1074 | 7.07 | 5500 | 0.3332 | 0.3352 |
| 1.0536 | 7.71 | 6000 | 0.3131 | 0.3152 |
| 1.0248 | 8.35 | 6500 | 0.3024 | 0.3023 |
| 1.0075 | 9.0 | 7000 | 0.2948 | 0.3028 |
| 0.979 | 9.64 | 7500 | 0.2796 | 0.2853 |
| 0.9594 | 10.28 | 8000 | 0.2719 | 0.2789 |
| 0.9172 | 10.93 | 8500 | 0.2620 | 0.2695 |
| 0.9047 | 11.57 | 9000 | 0.2537 | 0.2596 |
| 0.8777 | 12.21 | 9500 | 0.2438 | 0.2525 |
| 0.8629 | 12.85 | 10000 | 0.2409 | 0.2493 |
| 0.8575 | 13.5 | 10500 | 0.2366 | 0.2440 |
| 0.8361 | 14.14 | 11000 | 0.2317 | 0.2385 |
| 0.8126 | 14.78 | 11500 | 0.2290 | 0.2382 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
AlgoveraAI/dcgan | 1388d6c35e73398e189b5eb5967022399e39804f | 2022-03-31T18:31:10.000Z | [
"pytorch",
"transformers"
] | null | false | AlgoveraAI | null | AlgoveraAI/dcgan | 3 | 1 | transformers | 20,538 | |
Alireza1044/michael_bert_lm | 9711c0726453982106d91dbd5e8319e70b45fbd9 | 2021-07-08T16:48:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Alireza1044 | null | Alireza1044/michael_bert_lm | 3 | null | transformers | 20,539 | Entry not found |
Aloka/mbart50-ft-si-en | 1c9a9b49487da24bde843d634f8ad81409a8cc20 | 2021-08-29T13:11:14.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | false | Aloka | null | Aloka/mbart50-ft-si-en | 3 | null | transformers | 20,540 | ---
tags:
- generated_from_trainer
model_index:
- name: mbart50-ft-si-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart50-ft-si-en
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.98 | 30 | 5.6367 |
| No log | 1.98 | 60 | 4.1221 |
| No log | 2.98 | 90 | 3.1880 |
| No log | 3.98 | 120 | 3.1175 |
| No log | 4.98 | 150 | 3.3575 |
| No log | 5.98 | 180 | 3.7855 |
| No log | 6.98 | 210 | 4.3530 |
| No log | 7.98 | 240 | 4.7216 |
| No log | 8.98 | 270 | 4.9202 |
| No log | 9.98 | 300 | 5.0476 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.6.0
- Datasets 1.11.0
- Tokenizers 0.10.3
|
AndrewMcDowell/wav2vec2-xls-r-1B-german | 6192b9cde45e2aac4ae91d8fba971ef0c94cdb47 | 2022-03-24T11:54:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AndrewMcDowell | null | AndrewMcDowell/wav2vec2-xls-r-1B-german | 3 | null | transformers | 20,541 | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- de
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - German
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: de
metrics:
- name: Test WER
type: wer
value: 15.25
- name: Test CER
type: cer
value: 3.78
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: de
metrics:
- name: Test WER
type: wer
value: 35.29
- name: Test CER
type: cer
value: 13.83
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: de
metrics:
- name: Test WER
type: wer
value: 36.2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1355
- Wer: 0.1532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 2.5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0826 | 0.07 | 1000 | 0.4637 | 0.4654 |
| 1.118 | 0.15 | 2000 | 0.2595 | 0.2687 |
| 1.1268 | 0.22 | 3000 | 0.2635 | 0.2661 |
| 1.0919 | 0.29 | 4000 | 0.2417 | 0.2566 |
| 1.1013 | 0.37 | 5000 | 0.2414 | 0.2567 |
| 1.0898 | 0.44 | 6000 | 0.2546 | 0.2731 |
| 1.0808 | 0.51 | 7000 | 0.2399 | 0.2535 |
| 1.0719 | 0.59 | 8000 | 0.2353 | 0.2528 |
| 1.0446 | 0.66 | 9000 | 0.2427 | 0.2545 |
| 1.0347 | 0.73 | 10000 | 0.2266 | 0.2402 |
| 1.0457 | 0.81 | 11000 | 0.2290 | 0.2448 |
| 1.0124 | 0.88 | 12000 | 0.2295 | 0.2448 |
| 1.025 | 0.95 | 13000 | 0.2138 | 0.2345 |
| 1.0107 | 1.03 | 14000 | 0.2108 | 0.2294 |
| 0.9758 | 1.1 | 15000 | 0.2019 | 0.2204 |
| 0.9547 | 1.17 | 16000 | 0.2000 | 0.2178 |
| 0.986 | 1.25 | 17000 | 0.2018 | 0.2200 |
| 0.9588 | 1.32 | 18000 | 0.1992 | 0.2138 |
| 0.9413 | 1.39 | 19000 | 0.1898 | 0.2049 |
| 0.9339 | 1.47 | 20000 | 0.1874 | 0.2056 |
| 0.9268 | 1.54 | 21000 | 0.1797 | 0.1976 |
| 0.9194 | 1.61 | 22000 | 0.1743 | 0.1905 |
| 0.8987 | 1.69 | 23000 | 0.1738 | 0.1932 |
| 0.8884 | 1.76 | 24000 | 0.1703 | 0.1873 |
| 0.8939 | 1.83 | 25000 | 0.1633 | 0.1831 |
| 0.8629 | 1.91 | 26000 | 0.1549 | 0.1750 |
| 0.8607 | 1.98 | 27000 | 0.1550 | 0.1738 |
| 0.8316 | 2.05 | 28000 | 0.1512 | 0.1709 |
| 0.8321 | 2.13 | 29000 | 0.1481 | 0.1657 |
| 0.825 | 2.2 | 30000 | 0.1446 | 0.1627 |
| 0.8115 | 2.27 | 31000 | 0.1396 | 0.1583 |
| 0.7959 | 2.35 | 32000 | 0.1389 | 0.1569 |
| 0.7835 | 2.42 | 33000 | 0.1362 | 0.1545 |
| 0.7959 | 2.49 | 34000 | 0.1355 | 0.1531 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset mozilla-foundation/common_voice_8_0 --config de --split test --log_outputs
```
2. To evaluate on test dev data
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` |
Andrija/SRoBERTa-L-NER | cca1159f1ff648df6a6ea209783336be7566e8d4 | 2021-08-10T11:33:31.000Z | [
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"dataset:hr500k",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Andrija | null | Andrija/SRoBERTa-L-NER | 3 | null | transformers | 20,542 | ---
datasets:
- hr500k
language:
- hr
- sr
widget:
- text: "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"
license: apache-2.0
---
Named Entity Recognition (Token Classification Head) for the Serbian / Croatian languages.
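A minimal usage sketch, assuming the standard token-classification pipeline from `transformers`; the entity labels the model emits are listed in the table below.
```python
from transformers import pipeline

ner = pipeline("ner", model="Andrija/SRoBERTa-L-NER", aggregation_strategy="simple")
print(ner("Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"))
# e.g. [{'entity_group': 'PER', 'word': 'Aleksandar', ...}, {'entity_group': 'LOC', 'word': 'Beogradu', ...}]
```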
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person's name right after another person's name
B-DERIV-PER| Beginning of a derivative that describes a relation to a person
I-PER |Person's name
B-ORG |Beginning of an organization right after another organization
I-ORG |Organization
B-LOC |Beginning of a location right after another location
I-LOC |Location |
Andrija/SRoBERTa-NER | b0c89a32ec26904ebf4f2d3c3b5b9ea2727927ac | 2021-08-10T11:36:14.000Z | [
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"dataset:hr500k",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Andrija | null | Andrija/SRoBERTa-NER | 3 | null | transformers | 20,543 | ---
datasets:
- hr500k
language:
- hr
- sr
widget:
- text: "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"
license: apache-2.0
---
Named Entity Recognition (Token Classification Head) for the Serbian / Croatian languages.
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person's name right after another person's name
B-DERIV-PER| Beginning of a derivative that describes a relation to a person
I-PER |Person's name
B-ORG |Beginning of an organization right after another organization
I-ORG |Organization
B-LOC |Beginning of a location right after another location
I-LOC |Location |
Andrija/SRoBERTa-XL-NER | 1ac5a1da731f3da934df007bdc3d403e883f973c | 2021-10-02T20:06:53.000Z | [
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"dataset:hr500k",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Andrija | null | Andrija/SRoBERTa-XL-NER | 3 | null | transformers | 20,544 | ---
datasets:
- hr500k
language:
- hr
- sr
widget:
- text: "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"
license: apache-2.0
---
Named Entity Recognition (Token Classification Head) for the Serbian / Croatian languages.
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person's name right after another person's name
B-DERIV-PER| Beginning of a derivative that describes a relation to a person
I-PER |Person's name
B-ORG |Beginning of an organization right after another organization
I-ORG |organization
B-LOC |Beginning of a location right after another location
I-LOC |Location |
Andrija/SRoBERTa-base-NER | a94450b3d1b9ed8d4fcc9083e51bd40c106cebfe | 2021-08-10T11:34:53.000Z | [
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"dataset:hr500k",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | Andrija | null | Andrija/SRoBERTa-base-NER | 3 | null | transformers | 20,545 | ---
datasets:
- hr500k
language:
- hr
- sr
widget:
- text: "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"
license: apache-2.0
---
Named Entity Recognition (Token Classification Head) for the Serbian / Croatian languages.
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person's name right after another person's name
B-DERIV-PER| Beginning of a derivative that describes a relation to a person
I-PER |Person's name
B-ORG |Beginning of an organization right after another organization
I-ORG |Organization
B-LOC |Beginning of a location right after another location
I-LOC |Location |
AndyJ/clinicalBERT | c58a66c0c4c0de193fb4deabebc3f86a4e641d90 | 2022-01-30T10:10:47.000Z | [
"pytorch",
"transformers"
] | null | false | AndyJ | null | AndyJ/clinicalBERT | 3 | null | transformers | 20,546 | Entry not found |
AnonymousNLP/pretrained-model-1 | 1e6c5d46ad6b582ec5721cfc9c7d1aa82863ca12 | 2021-05-21T09:27:54.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | AnonymousNLP | null | AnonymousNLP/pretrained-model-1 | 3 | null | transformers | 20,547 | Entry not found |
AnonymousNLP/pretrained-model-2 | 9283e412d04f72d598dbf5d976dbe8c75108d74c | 2021-05-21T09:28:24.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | AnonymousNLP | null | AnonymousNLP/pretrained-model-2 | 3 | null | transformers | 20,548 | Entry not found |
AnonymousSub/AR_rule_based_only_classfn_epochs_1_shard_1 | b724746a40834c06050f53a7c659453551e14192 | 2022-01-11T00:40:06.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_only_classfn_epochs_1_shard_1 | 3 | null | transformers | 20,549 | Entry not found |
AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_10 | 0a5c959b893e48529075c34f23c37233f219dcfc | 2022-01-06T09:43:21.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_10 | 3 | null | transformers | 20,550 | Entry not found |
AnonymousSub/AR_rule_based_roberta_hier_triplet_epochs_1_shard_10 | 026d9359bfe0301a77025082b4c0cbfdfe8e4b49 | 2022-01-06T20:49:11.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_roberta_hier_triplet_epochs_1_shard_10 | 3 | null | transformers | 20,551 | Entry not found |
AnonymousSub/AR_rule_based_twostage_quadruplet_epochs_1_shard_1 | f18032e8cef903f3f2ea424825423a30fe5e48db | 2022-01-11T01:15:40.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/AR_rule_based_twostage_quadruplet_epochs_1_shard_1 | 3 | null | transformers | 20,552 | Entry not found |
AnonymousSub/SR_bert-base-uncased | e4f2dfbbbd87d8809e71e410fded1d51b6fd3390 | 2022-01-12T11:16:10.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_bert-base-uncased | 3 | null | transformers | 20,553 | Entry not found |
AnonymousSub/SR_rule_based_bert_quadruplet_epochs_1_shard_1 | 7a2744a5d90cd71e7c4309a5e34d44f18ff058ba | 2022-01-10T22:50:22.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_bert_quadruplet_epochs_1_shard_1 | 3 | null | transformers | 20,554 | Entry not found |
AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_10 | 43536dd0619444526619db17c7659f05c295c5a7 | 2022-01-06T08:38:38.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_10 | 3 | null | transformers | 20,555 | Entry not found |
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | d99e3ad881452861c2f68f4d4f7719e59443672a | 2022-01-12T08:52:23.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | 3 | null | transformers | 20,556 | Entry not found |
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1 | 356e4f56161ba16b94ad86e59a8b7c569d503667 | 2022-01-06T06:29:56.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1 | 3 | null | transformers | 20,557 | Entry not found |
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | bfd85dcf2de24d9ecfc316e06a63de31272f45f4 | 2022-01-06T05:19:50.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | 3 | null | transformers | 20,558 | Entry not found |
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | 5dc51cfafdd1ca0b2caa7000d597197856d3cded | 2022-01-06T08:23:38.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | 3 | null | transformers | 20,559 | Entry not found |
AnonymousSub/SR_rule_based_twostagequadruplet_hier_epochs_1_shard_1 | 3c863e43f9d95cb710a50e9246a621bde6c89be9 | 2022-01-11T02:09:01.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/SR_rule_based_twostagequadruplet_hier_epochs_1_shard_1 | 3 | null | transformers | 20,560 | Entry not found |
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_10 | 6822165cdff217e646689f63884e6a9a7033c95e | 2022-01-04T08:13:26.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_10 | 3 | null | transformers | 20,561 | Entry not found |
AnonymousSub/bert_triplet_epochs_1_shard_1 | c7effc08a352fd832c9412cd54565aef2bb2601c | 2021-12-22T16:55:05.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/bert_triplet_epochs_1_shard_1 | 3 | null | transformers | 20,562 | Entry not found |
AnonymousSub/cline-emanuals-techqa | 9422325cc785f432d42ae654a91b4d5fcd7cae26 | 2021-09-30T18:59:08.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/cline-emanuals-techqa | 3 | null | transformers | 20,563 | Entry not found |
AnonymousSub/cline_squad2.0 | 0043ed62e5c685293b40341f6abffe3d7ca617a1 | 2022-01-17T20:36:27.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/cline_squad2.0 | 3 | null | transformers | 20,564 | Entry not found |
AnonymousSub/declutr-techqa | 7f79f661cd6a931cb45653914e8fb580b0362bef | 2021-09-30T06:26:37.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/declutr-techqa | 3 | null | transformers | 20,565 | Entry not found |
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa_copy | e8b69e9eb168792ed316faa4815ec9725679b9d2 | 2022-01-23T17:35:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa_copy | 3 | null | transformers | 20,566 | Entry not found |
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_10 | 9e4d1186bae21d972ef79dcf572e05c1edf3a940 | 2022-01-04T08:19:43.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_10 | 3 | null | transformers | 20,567 | Entry not found |
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | e66aef1b80f2177866c26ec95b44c41fbc092693 | 2022-01-19T00:05:51.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | 3 | null | transformers | 20,568 | Entry not found |
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1 | 275595713d71dd1bd88bb36a4bb40959cd3a5ab5 | 2022-01-10T21:09:04.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1 | 3 | null | transformers | 20,569 | Entry not found |
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1 | d2f2d0b923ff2f413759715e8b0b80d47343fbc4 | 2022-01-04T22:04:59.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1 | 3 | null | transformers | 20,570 | Entry not found |
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1_squad2.0 | ae4bde01d4c7ee4d660cbe98288861aaf78d8252 | 2022-01-18T05:22:51.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1_squad2.0 | 3 | null | transformers | 20,571 | Entry not found |
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | db135e350545eae658cb3c03386f609393954335 | 2022-01-05T10:18:46.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1 | 3 | null | transformers | 20,572 | Entry not found |
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1 | 3dbb4a5ea0d111541008d74dac3a18056e150f4c | 2022-01-10T21:11:03.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1 | 3 | null | transformers | 20,573 | Entry not found |
AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1 | 0c5538f8f8663e51012e93aa656a8c48e5723454 | 2022-01-10T21:09:40.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_twostagequadruplet_hier_epochs_1_shard_1 | 3 | null | transformers | 20,574 | Entry not found |
AnonymousSub/specter-emanuals-model | 4a738fb45f8e337b7e3c26bcf1ef230cf2c34430 | 2021-11-05T10:43:52.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/specter-emanuals-model | 3 | null | transformers | 20,575 | Entry not found |
AnonymousSub/unsup-consert-emanuals | 100c5ff5a40b9a0b815b022fc9762a55fb8241ff | 2021-10-14T11:46:45.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/unsup-consert-emanuals | 3 | null | transformers | 20,576 | Entry not found |
AnonymousSub/unsup-consert-papers-bert | 7b474ccc8b9ae9af94fb10e389f6672779375680 | 2021-10-24T20:46:22.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/unsup-consert-papers-bert | 3 | null | transformers | 20,577 | Entry not found |
AriakimTaiyo/DialoGPT-small-Kumiko | 11dd5bb922a6946ecf0296b5e52759bd5ea43bb0 | 2022-02-02T23:09:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AriakimTaiyo | null | AriakimTaiyo/DialoGPT-small-Kumiko | 3 | null | transformers | 20,578 | ---
tags:
- conversational
---
# Kumiko DialoGPT Model |
Aspect11/DialoGPT-Medium-LiSBot | 83aef079efcd1411dc551533c834e42d28d615e0 | 2021-07-24T11:44:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Aspect11 | null | Aspect11/DialoGPT-Medium-LiSBot | 3 | null | transformers | 20,579 | ---
tags:
- conversational
---
A discord chatbot trained on the whole LiS script to simulate character speech |
Atampy26/GPT-Glacier | 075c5fd3d4e61b4aa5d381de76bbffa3efd1c3f4 | 2021-06-26T02:35:30.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | Atampy26 | null | Atampy26/GPT-Glacier | 3 | null | transformers | 20,580 | GPT-Glacier, a GPT-Neo 125M model finetuned on the Glacier2 Modding Discord server. |
Ayran/DialoGPT-small-harry-potter-1-through-3 | 975baef39279c5f9762cf53532ae76f25708fec5 | 2021-10-12T12:14:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ayran | null | Ayran/DialoGPT-small-harry-potter-1-through-3 | 3 | null | transformers | 20,581 | ---
tags:
- conversational
---
# Harry Potter DialoGPT small Model (Movies 1 through 3) |
AyushPJ/ai-club-inductions-21-nlp-XLNet | d9026ad0159253bbc2d95378009e6a629a007960 | 2021-10-20T23:09:21.000Z | [
"pytorch",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | AyushPJ | null | AyushPJ/ai-club-inductions-21-nlp-XLNet | 3 | null | transformers | 20,582 | ---
tags:
- generated_from_trainer
model-index:
- name: ai-club-inductions-21-nlp-XLNet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-club-inductions-21-nlp-XLNet
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Azaghast/GPT2-SCP-Miscellaneous | e6c1d52af5b7207a1dfa94a8c800f478ee158a80 | 2021-08-25T08:59:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Azaghast | null | Azaghast/GPT2-SCP-Miscellaneous | 3 | null | transformers | 20,583 | Entry not found |
BSen/wav2vec2-large-xls-r-300m-turkish-colab | adfce08108c85fa24dc978dfba32d2c2c5085303 | 2021-12-01T10:18:53.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | BSen | null | BSen/wav2vec2-large-xls-r-300m-turkish-colab | 3 | null | transformers | 20,584 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Baybars/wav2vec2-xls-r-1b-turkish | c05579f443d94ba1a0e0b03e202fdaba3ab83eb8 | 2022-02-03T10:09:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Baybars | null | Baybars/wav2vec2-xls-r-1b-turkish | 3 | null | transformers | 20,585 | ---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-10500](https://huggingface.co/./checkpoint-10500) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7540
- Wer: 0.4647
- Cer: 0.1318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.999,0.9999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 120.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:------:|:---------------:|:------:|
| 1.0779 | 4.59 | 500 | 0.2354 | 0.8260 | 0.7395 |
| 0.7573 | 9.17 | 1000 | 0.2100 | 0.7544 | 0.6960 |
| 0.8225 | 13.76 | 1500 | 0.2021 | 0.6867 | 0.6672 |
| 0.621 | 18.35 | 2000 | 0.1874 | 0.6824 | 0.6209 |
| 0.6362 | 22.94 | 2500 | 0.1904 | 0.6712 | 0.6286 |
| 0.624 | 27.52 | 3000 | 0.1820 | 0.6940 | 0.6116 |
| 0.4781 | 32.11 | 3500 | 0.1735 | 0.6966 | 0.5989 |
| 0.5685 | 36.7 | 4000 | 0.1769 | 0.6742 | 0.5971 |
| 0.4384 | 41.28 | 4500 | 0.1767 | 0.6904 | 0.5999 |
| 0.5509 | 45.87 | 5000 | 0.1692 | 0.6734 | 0.5641 |
| 0.3665 | 50.46 | 5500 | 0.1680 | 0.7018 | 0.5662 |
| 0.3914 | 55.05 | 6000 | 0.1631 | 0.7121 | 0.5552 |
| 0.2467 | 59.63 | 6500 | 0.1563 | 0.6657 | 0.5374 |
| 0.2576 | 64.22 | 7000 | 0.1554 | 0.6920 | 0.5316 |
| 0.2711 | 68.81 | 7500 | 0.1495 | 0.6900 | 0.5176 |
| 0.2626 | 73.39 | 8000 | 0.1454 | 0.6843 | 0.5043 |
| 0.1377 | 77.98 | 8500 | 0.1470 | 0.7383 | 0.5101 |
| 0.2005 | 82.57 | 9000 | 0.1430 | 0.7228 | 0.5045 |
| 0.1355 | 87.16 | 9500 | 0.1375 | 0.7231 | 0.4869 |
| 0.0431 | 91.74 | 10000 | 0.1350 | 0.7397 | 0.4749 |
| 0.0586 | 96.33 | 10500 | 0.1339 | 0.7360 | 0.4754 |
| 0.0896 | 100.92 | 11000 | 0.1398 | 0.7187 | 0.4885 |
| 0.183 | 105.5 | 11500 | 0.1392 | 0.7310 | 0.4838 |
| 0.0963 | 110.09 | 12000 | 0.1362 | 0.7643 | 0.4759 |
| 0.0437 | 114.68 | 12500 | 0.1328 | 0.7525 | 0.4641 |
| 0.1122 | 119.27 | 13000 | 0.1317 | 0.7535 | 0.4651 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
BigSalmon/DaBlank | c3027c5e580c2f2fc8c336212e2e392f82ea781d | 2021-06-23T02:17:21.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BigSalmon | null | BigSalmon/DaBlank | 3 | null | transformers | 20,586 | Entry not found |
BigSalmon/TS3 | 21b813aec1e3e4755c4fbe314732ebe6c906b52f | 2021-11-18T04:32:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BigSalmon | null | BigSalmon/TS3 | 3 | null | transformers | 20,587 | Entry not found |
Bimal/my_bot_model | a6137380fd825721c4187a8201a0e03a1cf0c8d2 | 2021-08-28T08:42:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Bimal | null | Bimal/my_bot_model | 3 | null | transformers | 20,588 | ---
tags:
- conversational
---
# Neku from Twewy |
Biniam/en_ti_translate | c32905b9fa1a9cd96735dcd1c6cf48656be5d45b | 2021-08-27T18:25:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
] | translation | false | Biniam | null | Biniam/en_ti_translate | 3 | 2 | transformers | 20,589 | ---
tags:
- translation
---
### en_ti_translate
* source languages: en
* target languages: ti
* model: Hugging Face transformer seq2seq
* base model: opus-mt-en-ti
* pre-processing: normalization + SentencePiece
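### example usage
A minimal sketch, assuming the checkpoint follows the MarianMT layout of its opus-mt-en-ti base model:
```python
from transformers import pipeline

translator = pipeline("translation", model="Biniam/en_ti_translate")
print(translator("Good morning, how are you?"))
```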
### documentation
https://tigrinyanlp.github.io/
|
CenIA/albert-tiny-spanish-finetuned-pos | 1e497720e9bbaff9567ccbe2973d16dd5ff54f8d | 2021-12-17T17:56:55.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-tiny-spanish-finetuned-pos | 3 | null | transformers | 20,590 | Entry not found |
CenIA/albert-large-spanish | 8740aef10a23ff833c36ed311068bd03adf9ef28 | 2022-04-28T19:55:20.000Z | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | false | CenIA | null | CenIA/albert-large-spanish | 3 | null | transformers | 20,591 | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT Large Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on [big Spanish corpora](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.000625
- Batch Size: 512
- Warmup ratio: 0.003125
- Warmup steps: 12500
- Goal steps: 4000000
- Total steps: 1450000
- Total training time (approx.): 42 days.
## Training loss

|
CenIA/albert-xxlarge-spanish | 65c3d0fcea1a779c827af41032ba1af696ad4a4f | 2022-04-28T19:56:15.000Z | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | false | CenIA | null | CenIA/albert-xxlarge-spanish | 3 | null | transformers | 20,592 | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT XXLarge Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on [big Spanish corpora](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 3125
- Goal steps: 4000000
- Total steps: 1650000
- Total training time (approx.): 70.7 days.
## Training loss
 |
CenIA/bert-base-spanish-wwm-uncased-finetuned-qa-mlqa | 343a6918c168f2c79e2e792717ded1880fad310e | 2022-01-21T03:16:45.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/bert-base-spanish-wwm-uncased-finetuned-qa-mlqa | 3 | null | transformers | 20,593 | Entry not found |
CennetOguz/distilbert-base-uncased-finetuned-recipe | 0eaaf2ddd5253543fb8495fb1ebb4f260bb45c95 | 2022-02-17T21:17:44.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | CennetOguz | null | CennetOguz/distilbert-base-uncased-finetuned-recipe | 3 | null | transformers | 20,594 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-recipe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-recipe
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.2689 |
| No log | 2.0 | 6 | 3.0913 |
| No log | 3.0 | 9 | 3.0641 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Chaewon/mmnt_decoder_en | 5ddfb421c5c654eee2eaa6b19c030f073824ee6f | 2021-12-10T14:41:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Chaewon | null | Chaewon/mmnt_decoder_en | 3 | null | transformers | 20,595 | Entry not found |
Chakita/KROBERT | f625f449a193f80d4c5fc0863b1a012cfc472481 | 2021-09-18T07:55:38.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible"
] | fill-mask | false | Chakita | null | Chakita/KROBERT | 3 | null | transformers | 20,596 | ---
tags:
- masked-lm
- fill-in-the-blanks
---
RoBERTa model trained on Kannada news corpus. |
ComCom/gpt2-medium | 30a7125c51ff3872369d7c0fd06815830d8bd4fa | 2021-11-15T07:08:26.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | false | ComCom | null | ComCom/gpt2-medium | 3 | null | transformers | 20,597 | This model was taken from [this page](https://huggingface.co/gpt2-medium).
This model is used by the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
|
Contrastive-Tension/BERT-Base-NLI-CT | 643434b3007f7b101bf30e353dacd939daa58a0c | 2021-05-18T17:50:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Base-NLI-CT | 3 | null | transformers | 20,598 | Entry not found |
Contrastive-Tension/BERT-Distil-CT | b7e385b9af9f1814a16f1c616864ae2bb2d626ec | 2021-02-10T19:01:42.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Distil-CT | 3 | null | transformers | 20,599 | Entry not found |