modelId (string) | sha (string) | lastModified (string) | tags (sequence) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
peerapongch/baikal-sentiment | 366e63f2bf9052592a87fe78e4126f3865552ff7 | 2022-04-05T06:08:28.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | peerapongch | null | peerapongch/baikal-sentiment | 2 | null | transformers | 25,400 | Entry not found |
birgermoell/psst-common-voice-new | 83a47933ba9bbbc1b2ddb0899599c9887bc1de25 | 2022-04-05T08:56:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-common-voice-new | 2 | null | transformers | 25,401 | Entry not found |
ramnika003/autotrain-sentiment_analysis_project-705021428 | af31d1c7e777ebf76e872dfb2762163d6a8774dc | 2022-04-05T09:23:07.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"unk",
"dataset:ramnika003/autotrain-data-sentiment_analysis_project",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | ramnika003 | null | ramnika003/autotrain-sentiment_analysis_project-705021428 | 2 | null | transformers | 25,402 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ramnika003/autotrain-data-sentiment_analysis_project
co2_eq_emissions: 10.03748863138583
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 705021428
- CO2 Emissions (in grams): 10.03748863138583
## Validation Metrics
- Loss: 0.5534441471099854
- Accuracy: 0.768964665184087
- Macro F1: 0.7629008163259284
- Micro F1: 0.768964665184087
- Weighted F1: 0.7685397042536148
- Macro Precision: 0.7658234531650739
- Micro Precision: 0.768964665184087
- Weighted Precision: 0.7684017544026074
- Macro Recall: 0.7603505092881394
- Micro Recall: 0.768964665184087
- Weighted Recall: 0.768964665184087
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ramnika003/autotrain-sentiment_analysis_project-705021428
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ramnika003/autotrain-sentiment_analysis_project-705021428", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ramnika003/autotrain-sentiment_analysis_project-705021428", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
impawankr/distilbert-base-uncased-finetuned-imdb | 468f5ee5f777005873b1ab3bcf0b7eb49ab82aef | 2022-04-05T17:09:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | impawankr | null | impawankr/distilbert-base-uncased-finetuned-imdb | 2 | null | transformers | 25,403 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5756 | 2.0 | 314 | 2.4230 |
| 2.5395 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
birgermoell/psst-common-voice-2 | 60f1c173ef1fd450d2ae3b4bc6c191532b0e3de6 | 2022-04-05T13:46:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-common-voice-2 | 2 | null | transformers | 25,404 | Entry not found |
AnonymousSub/fpdm_triplet_roberta_FT_new_newsqa | 9d1564d8d0efe221d5fbc9963f1ae91ba9d32828 | 2022-04-05T14:45:35.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_triplet_roberta_FT_new_newsqa | 2 | null | transformers | 25,405 | Entry not found |
AnonymousSub/news_pretrain_roberta_FT_new_newsqa | 4d6a0b201a597433c7e7362943f3f4281047e371 | 2022-04-05T14:53:47.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/news_pretrain_roberta_FT_new_newsqa | 2 | null | transformers | 25,406 | Entry not found |
AnonymousSub/news_pretrain_bert_FT_new_newsqa | fbd389274c80a24d125021f3c6aa59d44d87a124 | 2022-04-05T14:56:57.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/news_pretrain_bert_FT_new_newsqa | 2 | null | transformers | 25,407 | Entry not found |
AnonymousSub/fpdm_hier_bert_FT_new_newsqa | cc43beea65983a536ea62f348c37c00e0c41a29f | 2022-04-05T15:07:30.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_hier_bert_FT_new_newsqa | 2 | null | transformers | 25,408 | Entry not found |
Harsit/bert-finetuned-squad | 3024e55f46cf7aa0a6a740be6af5b483a2740cda | 2022-04-05T17:57:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Harsit | null | Harsit/bert-finetuned-squad | 2 | 1 | transformers | 25,409 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
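These settings correspond roughly to the `TrainingArguments` sketch below; it only restates the values listed above and is not the exact training script (the output directory name is a placeholder, and fp16 requires a CUDA device).
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # mixed_precision_training: Native AMP (needs a GPU)
)
```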
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
CenIA/albert-large-spanish-finetuned-qa-tar | b9548715ff4a2d8f9c6a34c69e1b4e84ae9527d5 | 2022-04-05T16:27:30.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-large-spanish-finetuned-qa-tar | 2 | null | transformers | 25,410 | Entry not found |
moshew/bert-tiny-sst2-distilled | 489495d7e6f86fa03a0af3347187ff402dfee969 | 2022-04-06T04:46:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | moshew | null | moshew/bert-tiny-sst2-distilled | 2 | null | transformers | 25,411 | Entry not found |
deepspeechvision/wav2vec2hindiasr2 | cd41d37fff04d414157911799220475a5e274ba2 | 2022-04-06T17:41:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | deepspeechvision | null | deepspeechvision/wav2vec2hindiasr2 | 2 | null | transformers | 25,412 | Entry not found |
rowan1224/albert-slp | 9412239a1572ad1424b999df06f76b06f14e6734 | 2022-04-05T16:52:26.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rowan1224 | null | rowan1224/albert-slp | 2 | null | transformers | 25,413 | Entry not found |
Bistolero/nl_ge_alltr | 6b612b744f015fbc99485c50e061b88cd2895b3f | 2022-04-05T22:05:38.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/nl_ge_alltr | 2 | null | transformers | 25,414 | Entry not found |
suey2580/distilbert-base-uncased-finetuned-cola | 7be0b3abb3c1e65b7a27cbfcb1624614b97cca64 | 2022-04-06T04:30:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | suey2580 | null | suey2580/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 25,415 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5238347808517775
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0766
- Matthews Correlation: 0.5238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.403175733231667e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4954 | 1.0 | 1069 | 0.4770 | 0.4589 |
| 0.3627 | 2.0 | 2138 | 0.5464 | 0.4998 |
| 0.2576 | 3.0 | 3207 | 0.8439 | 0.4933 |
| 0.1488 | 4.0 | 4276 | 1.0184 | 0.5035 |
| 0.1031 | 5.0 | 5345 | 1.0766 | 0.5238 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
unjustify/autotrain-Create_Question_Model-708521506 | 0761689272d2fda05974ac1de5bcc830ac5fc833 | 2022-04-06T20:45:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:unjustify/autotrain-data-Create_Question_Model",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | unjustify | null | unjustify/autotrain-Create_Question_Model-708521506 | 2 | null | transformers | 25,416 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- unjustify/autotrain-data-Create_Question_Model
co2_eq_emissions: 7.419693550936528
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 708521506
- CO2 Emissions (in grams): 7.419693550936528
## Validation Metrics
- Loss: 1.4744563102722168
- Rouge1: 30.0761
- Rouge2: 10.142
- RougeL: 27.2745
- RougeLsum: 27.2831
- Gen Len: 13.8746
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/unjustify/autotrain-Create_Question_Model-708521506
``` |
hou/opus-tatoeba-en-tr-finetuned-en-to-ug | 81ea97284d0194b1e7fb6be6aa085fb41ba611e5 | 2022-04-06T09:37:14.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hou | null | hou/opus-tatoeba-en-tr-finetuned-en-to-ug | 2 | null | transformers | 25,417 | Entry not found |
birgermoell/psst-fairseq-pitch-shift | 58f28f04c4f02229def6ab4912aed00fb396373d | 2022-04-06T09:11:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | birgermoell | null | birgermoell/psst-fairseq-pitch-shift | 2 | null | transformers | 25,418 | Entry not found |
pinecone/distiluse-podcast-nq | ff08658506569c424e829f64844e6de2d3b97066 | 2022-05-09T22:47:45.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | pinecone | null | pinecone/distiluse-podcast-nq | 2 | 1 | sentence-transformers | 25,419 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# DistilUSE Podcast Natural Questions
This is a [sentence-transformers](https://www.SBERT.net) model built for asymmetric semantic search of podcast episodes. It replicates the fine-tuning process of Spotify's podcast search model, as [described here](https://www.pinecone.io/learn/spotify-podcast-search/).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["podcast about climate change", "how to make money on the internet"]
model = SentenceTransformer('pinecone/distiluse-podcast-nq')
embeddings = model.encode(sentences)
```
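Because the model is tuned for asymmetric search, short queries are meant to be matched against longer episode descriptions. A minimal ranking sketch (the query and episode texts below are made-up examples):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('pinecone/distiluse-podcast-nq')

query = "podcast about climate change"
episodes = [
    "We talk to a climate scientist about rising sea levels and what coastal cities can do.",
    "A deep dive into index funds, savings rates and early retirement.",
]

# Encode the short query and the longer descriptions, then rank by cosine similarity.
query_emb = model.encode(query, convert_to_tensor=True)
episode_emb = model.encode(episodes, convert_to_tensor=True)
scores = util.cos_sim(query_emb, episode_emb)[0]
print(episodes[int(scores.argmax())])
```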
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 3748 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.RerankingEvaluator.RerankingEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 374,
"weight_decay": 0.01
}
```
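As a rough illustration, these parameters map onto a `fit()` call like the sketch below. The base checkpoint and the training pairs are placeholders (the card does not state which DistilUSE checkpoint or data files were used), so treat this as a sketch of the setup rather than the actual script.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Placeholder base model and (query, episode-description) pairs; not the actual training data.
model = SentenceTransformer('distiluse-base-multilingual-cased-v2')
train_examples = [
    InputExample(texts=[f"placeholder query {i}", f"placeholder episode description {i}"])
    for i in range(256)
]

train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=374,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-5},
)
```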
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
James Briggs, [How Spotify Uses Semantic Search for Podcasts](https://www.pinecone.io/learn/spotify-podcast-search/), Pinecone
|
Ramansh/RoBERTa-fake-news-detection | de2eeb8d3febe4fa6d5bd74f21b629fff1f6f9a1 | 2022-04-06T16:37:32.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | Ramansh | null | Ramansh/RoBERTa-fake-news-detection | 2 | null | transformers | 25,420 | ---
license: cc-by-nc-sa-4.0
---
A simple fake news detector that utilizes RoBERTa. <br/>
It was fine-tuned on [clmentbisaillon/fake-and-real-news-dataset](https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset) |
millawell/QuBERTa-finetuned-pos | f242718b454c16b15c275051a3f822a42cadc106 | 2022-04-06T16:25:57.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | millawell | null | millawell/QuBERTa-finetuned-pos | 2 | null | transformers | 25,421 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: QuBERTa-finetuned-pos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QuBERTa-finetuned-pos
This model is a fine-tuned version of [Llamacha/QuBERTa](https://huggingface.co/Llamacha/QuBERTa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4249
- Precision: 0.8372
- Recall: 0.8702
- F1: 0.8534
- Accuracy: 0.8623
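Since this is a token-classification (POS) head on top of QuBERTa, it can be loaded with the standard pipeline. A minimal sketch (the input string is a placeholder to replace with a Quechua sentence):
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned tagger with the token-classification pipeline.
pos_tagger = pipeline(
    "token-classification",
    model="millawell/QuBERTa-finetuned-pos",
    aggregation_strategy="simple",
)

text = "..."  # placeholder: replace with a Quechua sentence
print(pos_tagger(text))
```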
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 152 | 0.6146 | 0.6876 | 0.7482 | 0.7167 | 0.7360 |
| No log | 2.0 | 304 | 0.4937 | 0.7554 | 0.8041 | 0.7790 | 0.7932 |
| No log | 3.0 | 456 | 0.4525 | 0.7920 | 0.8238 | 0.8076 | 0.8200 |
| 0.5624 | 4.0 | 608 | 0.4294 | 0.8144 | 0.8426 | 0.8283 | 0.8391 |
| 0.5624 | 5.0 | 760 | 0.4245 | 0.8192 | 0.8521 | 0.8353 | 0.8445 |
| 0.5624 | 6.0 | 912 | 0.4357 | 0.8201 | 0.8607 | 0.8399 | 0.8480 |
| 0.3064 | 7.0 | 1064 | 0.4240 | 0.8308 | 0.8694 | 0.8497 | 0.8582 |
| 0.3064 | 8.0 | 1216 | 0.4231 | 0.8406 | 0.8757 | 0.8578 | 0.8653 |
| 0.3064 | 9.0 | 1368 | 0.4202 | 0.8389 | 0.8686 | 0.8535 | 0.8617 |
| 0.2227 | 10.0 | 1520 | 0.4249 | 0.8372 | 0.8702 | 0.8534 | 0.8623 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
neal49/distilbert-sst2-runglue | 0842c0e3bc719136753361c8e6cee5794c492d2e | 2022-04-07T05:05:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | neal49 | null | neal49/distilbert-sst2-runglue | 2 | null | transformers | 25,422 | Entry not found |
srmukundb/bert-base-uncased-finetuned-squad | d71544debf33f0dcc037ca80801122c14651a52b | 2022-05-03T13:54:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | srmukundb | null | srmukundb/bert-base-uncased-finetuned-squad | 2 | null | transformers | 25,423 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0462 | 1.0 | 8235 | 1.0822 |
| 0.7579 | 2.0 | 16470 | 1.1160 |
| 0.5734 | 3.0 | 24705 | 1.2582 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DaeLim/wav2vec2-large-xls-r-300m-turkish-colab | a64a03b9a338af5adf9aeb5c487b9da32e5679fd | 2022-04-07T11:12:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DaeLim | null | DaeLim/wav2vec2-large-xls-r-300m-turkish-colab | 2 | null | transformers | 25,424 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3779
- Wer: 0.3712
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0027 | 3.67 | 400 | 0.7121 | 0.7332 |
| 0.4068 | 7.34 | 800 | 0.4146 | 0.4599 |
| 0.1915 | 11.01 | 1200 | 0.4276 | 0.4489 |
| 0.1348 | 14.68 | 1600 | 0.4462 | 0.4388 |
| 0.1057 | 18.35 | 2000 | 0.4153 | 0.4291 |
| 0.0862 | 22.02 | 2400 | 0.3820 | 0.3965 |
| 0.0662 | 25.69 | 2800 | 0.3809 | 0.3792 |
| 0.0548 | 29.36 | 3200 | 0.3779 | 0.3712 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Fredvv/marian-finetuned-kde4-en-to-fr | cc1fa12e056bcd67a4a5ab572d39f5bf223244f9 | 2022-04-07T13:02:53.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | Fredvv | null | Fredvv/marian-finetuned-kde4-en-to-fr | 2 | null | transformers | 25,425 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 53.57438381688707
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8388
- Bleu: 53.5744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
medhabi/distilbert-base-uncased-score-pred | 95d3604511f2556ef9f851ba459872b14e8d0c25 | 2022-04-08T12:41:05.000Z | [
"pytorch",
"text-to-rating",
"transformers"
] | null | false | medhabi | null | medhabi/distilbert-base-uncased-score-pred | 2 | null | transformers | 25,426 | Entry not found |
jfealko/wav2vec2-large-xls-r-300m-russian-colab-beam_search_test | bbdb25b5b3e404e9f6bd5f2e79905cac352d4f65 | 2022-04-07T18:09:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jfealko | null | jfealko/wav2vec2-large-xls-r-300m-russian-colab-beam_search_test | 2 | null | transformers | 25,427 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-russian-colab-beam_search_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-russian-colab-beam_search_test
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7619
- Wer: 0.4680
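A minimal transcription sketch for the resulting checkpoint (the audio path is a placeholder; the model expects 16 kHz mono input):
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "jfealko/wav2vec2-large-xls-r-300m-russian-colab-beam_search_test"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

# Load a 16 kHz mono waveform; "audio.wav" is a placeholder path.
speech, _ = librosa.load("audio.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```
This sketch uses greedy CTC decoding; beam-search decoding, which the repository name refers to, would additionally require an external (e.g. language-model-based) decoder.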
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.0158 | 4.16 | 100 | 5.4134 | 1.0 |
| 4.0394 | 8.33 | 200 | 3.4304 | 1.0 |
| 3.2721 | 12.49 | 300 | 3.2273 | 1.0 |
| 3.1277 | 16.66 | 400 | 2.8023 | 0.9984 |
| 1.3791 | 20.82 | 500 | 0.9888 | 0.8546 |
| 0.3659 | 24.99 | 600 | 0.7602 | 0.6304 |
| 0.1858 | 29.16 | 700 | 0.7965 | 0.6156 |
| 0.1403 | 33.33 | 800 | 0.7998 | 0.5839 |
| 0.1173 | 37.49 | 900 | 0.8353 | 0.5941 |
| 0.0917 | 41.66 | 1000 | 0.8272 | 0.5522 |
| 0.0743 | 45.82 | 1100 | 0.8342 | 0.5471 |
| 0.063 | 49.99 | 1200 | 0.7988 | 0.5352 |
| 0.0528 | 54.16 | 1300 | 0.7740 | 0.5201 |
| 0.0456 | 58.33 | 1400 | 0.7636 | 0.5165 |
| 0.0389 | 62.49 | 1500 | 0.7922 | 0.5161 |
| 0.0329 | 66.66 | 1600 | 0.8035 | 0.5158 |
| 0.0283 | 70.82 | 1700 | 0.7873 | 0.4832 |
| 0.0255 | 74.99 | 1800 | 0.7853 | 0.4870 |
| 0.0236 | 79.16 | 1900 | 0.8236 | 0.5045 |
| 0.0202 | 83.33 | 2000 | 0.7661 | 0.4796 |
| 0.0165 | 87.49 | 2100 | 0.7584 | 0.4680 |
| 0.0156 | 91.66 | 2200 | 0.7685 | 0.4772 |
| 0.0149 | 95.82 | 2300 | 0.7519 | 0.4696 |
| 0.0126 | 99.99 | 2400 | 0.7619 | 0.4680 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
philschmid/bert-large-cased-whole-word-masking-sst2 | 5c930994422145867b5768ba7113a2aa330b19d2 | 2022-04-07T14:27:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | philschmid | null | philschmid/bert-large-cased-whole-word-masking-sst2 | 2 | null | transformers | 25,428 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-large-cased-whole-word-masking-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9438073394495413
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-whole-word-masking-sst2
This model is a fine-tuned version of [bert-large-cased-whole-word-masking](https://huggingface.co/bert-large-cased-whole-word-masking) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1725
- Accuracy: 0.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
johnpaulbin/skript-1m-gpt-neo125m | c50b71cf7058012a927a5079a10e39a51ce0ee37 | 2022-04-07T14:31:22.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | johnpaulbin | null | johnpaulbin/skript-1m-gpt-neo125m | 2 | null | transformers | 25,429 | Entry not found |
sgugger/test-bert-sharded | f0e37cd64f42e16f9c24532cd0d53bd2ed1c9f1a | 2022-04-07T17:08:12.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | sgugger | null | sgugger/test-bert-sharded | 2 | null | transformers | 25,430 | Entry not found |
mp6kv/ACTS_feedback1 | b08c01aef134475f9f49436e0cdee4ee72ade0fa | 2022-04-07T19:30:39.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mp6kv | null | mp6kv/ACTS_feedback1 | 2 | null | transformers | 25,431 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ACTS_feedback1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ACTS_feedback1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2357
- Accuracy: 0.8936
- Balanced accuracy: 0.8897
- Precision: 0.8951
- Recall: 0.8936
- F1: 0.8915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:---------:|:------:|:------:|
| 1.0881 | 1.0 | 12 | 1.0513 | 0.5532 | 0.5119 | 0.4004 | 0.5532 | 0.4645 |
| 0.9933 | 2.0 | 24 | 0.9257 | 0.5319 | 0.4952 | 0.3852 | 0.5319 | 0.4463 |
| 0.8065 | 3.0 | 36 | 0.7059 | 0.7234 | 0.7295 | 0.7607 | 0.7234 | 0.7184 |
| 0.5504 | 4.0 | 48 | 0.4259 | 0.8511 | 0.8474 | 0.8486 | 0.8511 | 0.8472 |
| 0.3262 | 5.0 | 60 | 0.3703 | 0.8511 | 0.8654 | 0.8624 | 0.8511 | 0.8499 |
| 0.1877 | 6.0 | 72 | 0.2518 | 0.8723 | 0.8731 | 0.8719 | 0.8723 | 0.8703 |
| 0.1094 | 7.0 | 84 | 0.2283 | 0.9362 | 0.9410 | 0.9415 | 0.9362 | 0.9365 |
| 0.0721 | 8.0 | 96 | 0.2246 | 0.9149 | 0.9244 | 0.9233 | 0.9149 | 0.9149 |
| 0.0521 | 9.0 | 108 | 0.2215 | 0.8936 | 0.8897 | 0.8951 | 0.8936 | 0.8915 |
| 0.0455 | 10.0 | 120 | 0.2357 | 0.8936 | 0.8897 | 0.8951 | 0.8936 | 0.8915 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ali-issa/wav2vec2-Arabizi-100-epoch | 3f6c64797edf8c7dd2a9a9a05a04919a27f52d65 | 2022-04-07T23:10:48.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ali-issa | null | ali-issa/wav2vec2-Arabizi-100-epoch | 2 | null | transformers | 25,432 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-Arabizi-100-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-Arabizi-100-epoch
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4189
- Wer: 0.8732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.8326 | 19.97 | 300 | 1.7665 | 0.9971 |
| 0.3734 | 39.97 | 600 | 2.0734 | 0.9193 |
| 0.1832 | 59.97 | 900 | 2.2837 | 0.9049 |
| 0.1116 | 79.97 | 1200 | 2.3697 | 0.8818 |
| 0.063 | 99.97 | 1500 | 2.4189 | 0.8732 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Splend1dchan/canine-s-squad | f59594f9476ad15513d6a78af4b3153f4613e5df | 2022-04-12T12:07:01.000Z | [
"pytorch",
"canine",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Splend1dchan | null | Splend1dchan/canine-s-squad | 2 | null | transformers | 25,433 | python run_squad.py
--model_name_or_path google/canine-s
--do_train
--do_eval
--per_gpu_train_batch_size 1
--per_gpu_eval_batch_size 1
--gradient_accumulation_steps 128
--learning_rate 3e-5
--num_train_epochs 3
--max_seq_length 1024
--doc_stride 128
--max_answer_length 240
--output_dir canine-s-squad
--model_type bert
{
"_name_or_path": "google/canine-s",
"architectures": [
"CanineForQuestionAnswering"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 57344,
"downsampling_rate": 4,
"eos_token_id": 57345,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"local_transformer_stride": 128,
"max_position_embeddings": 16384,
"model_type": "canine",
"num_attention_heads": 12,
"num_hash_buckets": 16384,
"num_hash_functions": 8,
"num_hidden_layers": 12,
"pad_token_id": 0,
"torch_dtype": "float32",
"transformers_version": "4.19.0.dev0",
"type_vocab_size": 16,
"upsampling_kernel_size": 4,
"use_cache": true
}
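The resulting checkpoint can be queried with the standard auto classes; a minimal inference sketch (the question/context pair is a made-up example), with the reported SQuAD scores following below:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Splend1dchan/canine-s-squad")
model = AutoModelForQuestionAnswering.from_pretrained("Splend1dchan/canine-s-squad")

question = "What does CANINE operate on?"
context = "CANINE is a tokenization-free encoder that operates directly on Unicode characters."

# CANINE works at the character level; pad to the 1024-character length used in training.
inputs = tokenizer(question, context, padding="max_length", max_length=1024, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```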
{'exact': 64.70198675496688, 'f1': 76.57594921776277} |
cj-mills/xlm-roberta-base-finetuned-panx-fr | 6e126102d5d0bc69a611fd01380ea7b5139efa3c | 2022-04-08T01:49:11.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | cj-mills | null | cj-mills/xlm-roberta-base-finetuned-panx-fr | 2 | null | transformers | 25,434 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8293418187908222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2719
- F1: 0.8293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8541 | 1.0 | 72 | 0.3529 | 0.7826 |
| 0.3069 | 2.0 | 144 | 0.2807 | 0.8154 |
| 0.2262 | 3.0 | 216 | 0.2719 | 0.8293 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cj-mills/xlm-roberta-base-finetuned-panx-it | 2d1217c5de60b43ec82d3cea94bda7efd724fed1 | 2022-04-08T01:56:39.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | cj-mills | null | cj-mills/xlm-roberta-base-finetuned-panx-it | 2 | null | transformers | 25,435 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.7730210016155089
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2928
- F1: 0.7730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4548 | 1.0 | 27 | 0.6522 | 0.5457 |
| 0.5214 | 2.0 | 54 | 0.3476 | 0.7404 |
| 0.3186 | 3.0 | 81 | 0.2928 | 0.7730 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
junnyu/flash_base_wwm_cluecorpussmall | 1be1b50823aee96ec03d50efa36bc970e88baf80 | 2022-04-08T04:11:25.000Z | [
"pytorch",
"flash",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/flash_base_wwm_cluecorpussmall | 2 | null | transformers | 25,436 | ---
license: mit
inference: False
---
# PS: The results are not great; this is only worth trying out of curiosity. The final wwm-MLM accuracy is around 55.5.
# cluener NER experiments (the GlobalPointer results are about the same; the softmax results are much worse)
```python
# flash base + globalpointer
04/08/2022 10:53:34 - INFO - __main__ - ADDRESS = Score(f1=0.607703, precision=0.64939, recall=0.571046, tp=213, pred=328, gold=373)
04/08/2022 10:53:34 - INFO - __main__ - BOOK = Score(f1=0.8125, precision=0.873134, recall=0.75974, tp=117, pred=134, gold=154)
04/08/2022 10:53:34 - INFO - __main__ - COMPANY = Score(f1=0.818304, precision=0.832877, recall=0.804233, tp=304, pred=365, gold=378)
04/08/2022 10:53:34 - INFO - __main__ - GAME = Score(f1=0.854305, precision=0.834951, recall=0.874576, tp=258, pred=309, gold=295)
04/08/2022 10:53:34 - INFO - __main__ - GOVERNMENT = Score(f1=0.823529, precision=0.775, recall=0.878543, tp=217, pred=280, gold=247)
04/08/2022 10:53:34 - INFO - __main__ - MOVIE = Score(f1=0.810997, precision=0.842857, recall=0.781457, tp=118, pred=140, gold=151)
04/08/2022 10:53:34 - INFO - __main__ - NAME = Score(f1=0.874042, precision=0.890625, recall=0.858065, tp=399, pred=448, gold=465)
04/08/2022 10:53:34 - INFO - __main__ - ORGANIZATION = Score(f1=0.813986, precision=0.836207, recall=0.792916, tp=291, pred=348, gold=367)
04/08/2022 10:53:34 - INFO - __main__ - POSITION = Score(f1=0.78478, precision=0.808824, recall=0.762125, tp=330, pred=408, gold=433)
04/08/2022 10:53:34 - INFO - __main__ - SCENE = Score(f1=0.683805, precision=0.738889, recall=0.636364, tp=133, pred=180, gold=209)
04/08/2022 10:53:34 - INFO - __main__ - micro_f1 = Score(f1=0.79175, precision=0.809524, recall=0.77474, tp=2380, pred=2940, gold=3072)
04/08/2022 10:53:34 - INFO - __main__ - macro_f1 = Score(f1=0.788395, precision=0.808275, recall=0.771906, tp=0, pred=0, gold=0)
04/08/2022 10:53:34 - INFO - __main__ - mean_f1 = 0.790072
# flash base + softmax
04/08/2022 11:10:44 - INFO - __main__ - ADDRESS = Score(f1=0.568987, precision=0.522422, recall=0.624665, tp=233, pred=446, gold=373)
04/08/2022 11:10:44 - INFO - __main__ - BOOK = Score(f1=0.750789, precision=0.730061, recall=0.772727, tp=119, pred=163, gold=154)
04/08/2022 11:10:44 - INFO - __main__ - COMPANY = Score(f1=0.75528, precision=0.711944, recall=0.804233, tp=304, pred=427, gold=378)
04/08/2022 11:10:44 - INFO - __main__ - GAME = Score(f1=0.811502, precision=0.767372, recall=0.861017, tp=254, pred=331, gold=295)
04/08/2022 11:10:44 - INFO - __main__ - GOVERNMENT = Score(f1=0.738636, precision=0.69395, recall=0.789474, tp=195, pred=281, gold=247)
04/08/2022 11:10:44 - INFO - __main__ - MOVIE = Score(f1=0.74359, precision=0.720497, recall=0.768212, tp=116, pred=161, gold=151)
04/08/2022 11:10:44 - INFO - __main__ - NAME = Score(f1=0.831967, precision=0.794521, recall=0.873118, tp=406, pred=511, gold=465)
04/08/2022 11:10:44 - INFO - __main__ - ORGANIZATION = Score(f1=0.754054, precision=0.747989, recall=0.760218, tp=279, pred=373, gold=367)
04/08/2022 11:10:44 - INFO - __main__ - POSITION = Score(f1=0.742729, precision=0.720174, recall=0.766744, tp=332, pred=461, gold=433)
04/08/2022 11:10:44 - INFO - __main__ - SCENE = Score(f1=0.628842, precision=0.621495, recall=0.636364, tp=133, pred=214, gold=209)
04/08/2022 11:10:44 - INFO - __main__ - micro_f1 = Score(f1=0.736335, precision=0.703979, recall=0.77181, tp=2371, pred=3368, gold=3072)
04/08/2022 11:10:44 - INFO - __main__ - macro_f1 = Score(f1=0.732638, precision=0.703043, recall=0.765677, tp=0, pred=0, gold=0)
04/08/2022 11:10:44 - INFO - __main__ - mean_f1 = 0.734486
# bert base + globalpointer
04/08/2022 11:22:48 - INFO - __main__ - ADDRESS = Score(f1=0.641558, precision=0.622166, recall=0.662198, tp=247, pred=397, gold=373)
04/08/2022 11:22:48 - INFO - __main__ - BOOK = Score(f1=0.813115, precision=0.821192, recall=0.805195, tp=124, pred=151, gold=154)
04/08/2022 11:22:48 - INFO - __main__ - COMPANY = Score(f1=0.823684, precision=0.819372, recall=0.828042, tp=313, pred=382, gold=378)
04/08/2022 11:22:48 - INFO - __main__ - GAME = Score(f1=0.841762, precision=0.811321, recall=0.874576, tp=258, pred=318, gold=295)
04/08/2022 11:22:48 - INFO - __main__ - GOVERNMENT = Score(f1=0.827324, precision=0.778571, recall=0.882591, tp=218, pred=280, gold=247)
04/08/2022 11:22:48 - INFO - __main__ - MOVIE = Score(f1=0.82392, precision=0.826667, recall=0.821192, tp=124, pred=150, gold=151)
04/08/2022 11:22:48 - INFO - __main__ - NAME = Score(f1=0.861345, precision=0.840164, recall=0.883621, tp=410, pred=488, gold=464)
04/08/2022 11:22:48 - INFO - __main__ - ORGANIZATION = Score(f1=0.804911, precision=0.806011, recall=0.803815, tp=295, pred=366, gold=367)
04/08/2022 11:22:48 - INFO - __main__ - POSITION = Score(f1=0.805046, precision=0.799544, recall=0.810624, tp=351, pred=439, gold=433)
04/08/2022 11:22:48 - INFO - __main__ - SCENE = Score(f1=0.702703, precision=0.722222, recall=0.684211, tp=143, pred=198, gold=209)
04/08/2022 11:22:48 - INFO - __main__ - micro_f1 = Score(f1=0.795833, precision=0.783528, recall=0.808531, tp=2483, pred=3169, gold=3071)
04/08/2022 11:22:48 - INFO - __main__ - macro_f1 = Score(f1=0.794537, precision=0.784723, recall=0.805606, tp=0, pred=0, gold=0)
04/08/2022 11:22:48 - INFO - __main__ - mean_f1 = 0.795185
```
# cmeee + globalpointer
```python
04/08/2022 11:50:41 - INFO - __main__ - bod = Score(f1=0.639522, precision=0.642318, recall=0.63675, tp=3746, pred=5832, gold=5883)
04/08/2022 11:50:41 - INFO - __main__ - dep = Score(f1=0.473988, precision=0.650794, recall=0.372727, tp=41, pred=63, gold=110)
04/08/2022 11:50:41 - INFO - __main__ - dis = Score(f1=0.716959, precision=0.704479, recall=0.729889, tp=3602, pred=5113, gold=4935)
04/08/2022 11:50:41 - INFO - __main__ - dru = Score(f1=0.756328, precision=0.829329, recall=0.695139, tp=1001, pred=1207, gold=1440)
04/08/2022 11:50:41 - INFO - __main__ - equ = Score(f1=0.518703, precision=0.638037, recall=0.436975, tp=104, pred=163, gold=238)
04/08/2022 11:50:41 - INFO - __main__ - ite = Score(f1=0.322533, precision=0.503448, recall=0.23727, tp=219, pred=435, gold=923)
04/08/2022 11:50:41 - INFO - __main__ - mic = Score(f1=0.746967, precision=0.75614, recall=0.738014, tp=431, pred=570, gold=584)
04/08/2022 11:50:41 - INFO - __main__ - pro = Score(f1=0.611138, precision=0.614138, recall=0.608167, tp=1251, pred=2037, gold=2057)
04/08/2022 11:50:41 - INFO - __main__ - sym = Score(f1=0.47969, precision=0.495738, recall=0.464649, tp=1919, pred=3871, gold=4130)
04/08/2022 11:50:41 - INFO - __main__ - micro_f1 = Score(f1=0.622061, precision=0.638329, recall=0.606601, tp=12314, pred=19291, gold=20300)
04/08/2022 11:50:41 - INFO - __main__ - macro_f1 = Score(f1=0.585092, precision=0.648269, recall=0.54662, tp=0, pred=0, gold=0)
04/08/2022 11:50:41 - INFO - __main__ - mean_f1 = 0.603576
```
# install
- https://github.com/JunnYu/FLASHQuad_pytorch
# usage
```python
import torch
from flash import FLASHForMaskedLM
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("junnyu/flash_base_wwm_cluecorpussmall")
model = FLASHForMaskedLM.from_pretrained("junnyu/flash_base_wwm_cluecorpussmall")
model.eval()
text = "天气预报说今天的天[MASK]很好,那么我[MASK]一起去公园玩吧!"
inputs = tokenizer(text, return_tensors="pt", padding="max_length", max_length=512, return_token_type_ids=False) # this must be 512, otherwise the results may be wrong.
with torch.no_grad():
pt_outputs = model(**inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
val,idx = pt_outputs[i].softmax(-1).topk(k=5)
tokens = tokenizer.convert_ids_to_tokens(idx)
new_tokens = []
for v,t in zip(val.cpu(),tokens):
new_tokens.append(f"{t}+{round(v.item(),4)}")
pt_outputs_sentence += "[" + "||".join(new_tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 天气预报说今天的天[气+0.994||天+0.0015||空+0.0014||晴+0.0005||阳+0.0003]很好,那么我[们+0.9563||就+0.0381||也+0.0032||俩+0.0004||来+0.0002]一起去公园玩吧!
``` |
Pavithra/codeparrot-ds-sample-gpt-small-neo-10epoch1 | 4160106bb9076f24af963764da3eee844aeea353 | 2022-04-08T17:27:27.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Pavithra | null | Pavithra/codeparrot-ds-sample-gpt-small-neo-10epoch1 | 2 | null | transformers | 25,437 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-gpt-small-neo-10epoch1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-gpt-small-neo-10epoch1
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.5639 | 0.94 | 1000 | 2.9253 |
| 2.3253 | 1.88 | 2000 | 2.4563 |
| 1.8494 | 2.82 | 3000 | 2.2655 |
| 1.5133 | 3.77 | 4000 | 2.1635 |
| 1.249 | 4.71 | 5000 | 2.1414 |
| 1.0194 | 5.65 | 6000 | 2.1818 |
| 0.7999 | 6.59 | 7000 | 2.2738 |
| 0.5971 | 7.53 | 8000 | 2.3910 |
| 0.4238 | 8.47 | 9000 | 2.5062 |
| 0.3107 | 9.42 | 10000 | 2.5696 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ybelkada/focusondepth | ebf1a412d2be5e365acb92050329c47ab597d444 | 2022-04-08T13:11:47.000Z | [
"pytorch",
"focusondepth",
"transformers"
] | null | false | ybelkada | null | ybelkada/focusondepth | 2 | null | transformers | 25,438 | Entry not found |
ydshieh/tiny-random-gptj-base | ec02b56fe405ef16588efeaadafa5b8c9456a725 | 2022-04-08T10:20:31.000Z | [
"pytorch",
"tf",
"gptj",
"feature-extraction",
"transformers"
] | feature-extraction | false | ydshieh | null | ydshieh/tiny-random-gptj-base | 2 | null | transformers | 25,439 | Entry not found |
jesperjmb/parlaBERT | 2bf5aa87a29921c72f140abc62f76d93566132a0 | 2022-04-08T12:43:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jesperjmb | null | jesperjmb/parlaBERT | 2 | null | transformers | 25,440 | Fine-tuned KB-BERT for Swedish Riksdag introductions |
philschmid/MiniLMv2-L12-H384-sst2 | ab98890f9b87c74760d4c3fbd97866bc00656a5c | 2022-04-08T13:41:34.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | philschmid | null | philschmid/MiniLMv2-L12-H384-sst2 | 2 | null | transformers | 25,441 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9208715596330275
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-sst2
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2195
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5576 | 1.0 | 264 | 0.2690 | 0.8979 |
| 0.2854 | 2.0 | 528 | 0.2077 | 0.9117 |
| 0.2158 | 3.0 | 792 | 0.2195 | 0.9209 |
| 0.1789 | 4.0 | 1056 | 0.2260 | 0.9163 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
philschmid/MiniLMv2-L6-H384-sst2 | edb0d6ee4f33670f0b9b934806c8132bd397ef90 | 2022-04-08T13:56:53.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | philschmid | null | philschmid/MiniLMv2-L6-H384-sst2 | 2 | null | transformers | 25,442 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: MiniLMv2-L6-H384-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9197247706422018
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-sst2
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2532
- Accuracy: 0.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5787 | 1.0 | 264 | 0.3496 | 0.8624 |
| 0.3413 | 2.0 | 528 | 0.2599 | 0.8991 |
| 0.2716 | 3.0 | 792 | 0.2651 | 0.9048 |
| 0.2343 | 4.0 | 1056 | 0.2532 | 0.9197 |
| 0.2165 | 5.0 | 1320 | 0.2636 | 0.9151 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
malcolm/TSC_finetuning-sentiment-movie-model | 3f9e1fa33d1f0cb046acef59f4639a0443a1ae7a | 2022-04-08T16:44:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | malcolm | null | malcolm/TSC_finetuning-sentiment-movie-model | 2 | null | transformers | 25,443 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: TSC_finetuning-sentiment-movie-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_finetuning-sentiment-movie-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1480
- Accuracy: 0.9578
- F1: 0.9757
## Model description
More information needed
## Intended uses & limitations
More information needed
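Usage is not documented; a minimal sketch with the lower-level API (assuming the repository bundles both the tokenizer and the classification head, and that `id2label` in the config carries meaningful label names) might look like:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "malcolm/TSC_finetuning-sentiment-movie-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This movie was a complete waste of time.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label name from the model config.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```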
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Pavithra/codeparrot-ds-sample-gpt-small-10epoch | 98c0b143a557687a688f39e1923aae41caba3252 | 2022-04-10T07:49:47.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Pavithra | null | Pavithra/codeparrot-ds-sample-gpt-small-10epoch | 2 | null | transformers | 25,444 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-gpt-small-10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-gpt-small-10epoch
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0943
## Model description
More information needed
## Intended uses & limitations
More information needed
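Usage is not documented; assuming the checkpoint was trained on Python source (as the CodeParrot name suggests) and that its tokenizer is included, a minimal text-generation sketch could be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Pavithra/codeparrot-ds-sample-gpt-small-10epoch")

# Continue a Python snippet; the prompt and sampling settings are illustrative only.
prompt = "def mean(values):\n    "
print(generator(prompt, max_length=64, do_sample=True, top_k=50)[0]["generated_text"])
```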
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.29 | 0.94 | 1000 | 2.8452 |
| 2.3155 | 1.88 | 2000 | 2.3659 |
| 1.8817 | 2.82 | 3000 | 2.2085 |
| 1.6245 | 3.77 | 4000 | 2.1260 |
| 1.4314 | 4.71 | 5000 | 2.0705 |
| 1.2698 | 5.65 | 6000 | 2.0603 |
| 1.1281 | 6.59 | 7000 | 2.0599 |
| 1.0108 | 7.53 | 8000 | 2.0769 |
| 0.9167 | 8.47 | 9000 | 2.0870 |
| 0.8551 | 9.42 | 10000 | 2.0943 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hbruce11216/distilbert-base-uncased-finetuned-emotion | f355fabc69ee7fba187fd80cc4ac6c3c289f25ff | 2022-04-09T14:31:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | hbruce11216 | null | hbruce11216/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 25,445 | Entry not found |
Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-3 | 80b4b833a5f9531a934cadbdf8872c880ed97156 | 2022-04-09T08:34:39.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wikihow",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-3 | 2 | null | transformers | 25,446 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_b4_lr3e-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 26.7383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_b4_lr3e-3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3400
- Rouge1: 26.7383
- Rouge2: 10.1981
- Rougel: 22.8642
- Rougelsum: 26.0922
- Gen Len: 18.524
## Model description
More information needed
## Intended uses & limitations
More information needed
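Usage is not documented; a minimal sketch with the summarization pipeline (which applies T5's default `summarize:` prefix, assuming those task parameters were preserved during fine-tuning) could be:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-3")

article = (
    "To make a simple tomato sauce, heat olive oil in a pan and cook chopped onion until soft. "
    "Add garlic, canned tomatoes, salt and a pinch of sugar, then simmer for twenty minutes, "
    "stirring occasionally, before blending the sauce until smooth."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```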
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.2548 | 0.13 | 5000 | 2.9708 | 22.0519 | 6.7142 | 18.7677 | 21.4627 | 17.9546 |
| 3.1153 | 0.25 | 10000 | 2.9099 | 20.2838 | 5.8365 | 17.5009 | 19.7112 | 18.4981 |
| 3.0478 | 0.38 | 15000 | 2.8763 | 22.8282 | 7.3649 | 19.6843 | 22.2312 | 18.1331 |
| 3.0146 | 0.51 | 20000 | 2.8484 | 23.2465 | 7.4295 | 19.621 | 22.6246 | 18.5115 |
| 2.9572 | 0.64 | 25000 | 2.7902 | 23.8681 | 7.9617 | 20.4984 | 23.2066 | 18.5544 |
| 2.9425 | 0.76 | 30000 | 2.7577 | 23.4402 | 7.5289 | 19.7382 | 22.7941 | 18.4613 |
| 2.9075 | 0.89 | 35000 | 2.7343 | 23.0082 | 7.5408 | 19.8426 | 22.3832 | 18.1218 |
| 2.8705 | 1.02 | 40000 | 2.7136 | 23.9492 | 7.8861 | 20.3675 | 23.3035 | 18.4869 |
| 2.7967 | 1.14 | 45000 | 2.6923 | 24.2394 | 8.2895 | 20.7275 | 23.6127 | 18.3486 |
| 2.7794 | 1.27 | 50000 | 2.6639 | 24.4062 | 8.2481 | 20.8957 | 23.8077 | 18.4258 |
| 2.7776 | 1.4 | 55000 | 2.6321 | 24.6213 | 8.4161 | 21.0528 | 23.968 | 18.351 |
| 2.7397 | 1.53 | 60000 | 2.6116 | 24.16 | 8.3605 | 20.618 | 23.5037 | 18.6049 |
| 2.7199 | 1.65 | 65000 | 2.5846 | 24.2606 | 8.3829 | 20.6274 | 23.6252 | 18.4742 |
| 2.7044 | 1.78 | 70000 | 2.5663 | 25.0452 | 8.896 | 21.4554 | 24.4748 | 18.3143 |
| 2.6928 | 1.91 | 75000 | 2.5365 | 25.1312 | 9.008 | 21.6376 | 24.4963 | 18.5605 |
| 2.6281 | 2.03 | 80000 | 2.5209 | 25.5311 | 9.1521 | 21.729 | 24.8864 | 18.2597 |
| 2.5333 | 2.16 | 85000 | 2.4860 | 25.4834 | 9.2969 | 21.7257 | 24.8802 | 18.3831 |
| 2.5308 | 2.29 | 90000 | 2.4619 | 26.0526 | 9.605 | 22.2178 | 25.4353 | 18.4235 |
| 2.5136 | 2.42 | 95000 | 2.4356 | 25.9434 | 9.6537 | 22.2957 | 25.312 | 18.4647 |
| 2.4801 | 2.54 | 100000 | 2.4098 | 26.1109 | 9.7637 | 22.3844 | 25.4771 | 18.5765 |
| 2.4494 | 2.67 | 105000 | 2.3835 | 26.332 | 9.9472 | 22.4243 | 25.6933 | 18.5985 |
| 2.4393 | 2.8 | 110000 | 2.3590 | 26.6896 | 10.2248 | 22.8743 | 26.0665 | 18.4883 |
| 2.4071 | 2.93 | 115000 | 2.3400 | 26.7383 | 10.1981 | 22.8642 | 26.0922 | 18.524 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gary109/wav2vec2-base-finetuned-ks | eaae301c1860de3412a02c479376207dc5edf58c | 2022-04-09T08:34:11.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | gary109 | null | gary109/wav2vec2-base-finetuned-ks | 2 | null | transformers | 25,447 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0981
- Accuracy: 0.9801
## Model description
More information needed
## Intended uses & limitations
More information needed
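Usage is not documented; a minimal sketch, assuming the standard audio-classification pipeline and the 16 kHz mono input expected by Wav2Vec2, could be:
```python
from transformers import pipeline

# Keyword-spotting classifier fine-tuned on the SUPERB KS subset.
classifier = pipeline("audio-classification", model="gary109/wav2vec2-base-finetuned-ks")

# "speech_command.wav" is a placeholder path; any 16 kHz mono recording should work.
print(classifier("speech_command.wav", top_k=3))
```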
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6641 | 1.0 | 399 | 0.5522 | 0.9337 |
| 0.2698 | 2.0 | 798 | 0.2015 | 0.9715 |
| 0.1839 | 3.0 | 1197 | 0.1195 | 0.9793 |
| 0.1582 | 4.0 | 1596 | 0.1039 | 0.9791 |
| 0.1425 | 5.0 | 1995 | 0.0981 | 0.9801 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-4 | cbd7052f478a3022a4a17e092a6675257e31ee24 | 2022-04-09T19:14:54.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wikihow",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-4 | 2 | null | transformers | 25,448 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_b4_lr3e-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 27.4024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_b4_lr3e-4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2757
- Rouge1: 27.4024
- Rouge2: 10.7065
- Rougel: 23.3153
- Rougelsum: 26.7336
- Gen Len: 18.5506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.8424 | 0.13 | 5000 | 2.5695 | 25.2232 | 8.7617 | 21.2019 | 24.4949 | 18.4151 |
| 2.7334 | 0.25 | 10000 | 2.5229 | 25.3739 | 9.0477 | 21.5054 | 24.7553 | 18.3802 |
| 2.6823 | 0.38 | 15000 | 2.4857 | 26.341 | 9.6711 | 22.3446 | 25.7256 | 18.449 |
| 2.6607 | 0.51 | 20000 | 2.4540 | 26.0269 | 9.4722 | 22.0822 | 25.3602 | 18.4704 |
| 2.6137 | 0.64 | 25000 | 2.4326 | 26.2966 | 9.6815 | 22.4422 | 25.6326 | 18.3517 |
| 2.6077 | 0.76 | 30000 | 2.4108 | 26.0981 | 9.6221 | 22.1189 | 25.454 | 18.5079 |
| 2.5847 | 0.89 | 35000 | 2.3879 | 26.2675 | 9.6435 | 22.3738 | 25.6122 | 18.4838 |
| 2.5558 | 1.02 | 40000 | 2.3827 | 26.3458 | 9.7844 | 22.4718 | 25.7388 | 18.5097 |
| 2.4902 | 1.14 | 45000 | 2.3725 | 26.4987 | 9.9634 | 22.5398 | 25.8399 | 18.5912 |
| 2.4785 | 1.27 | 50000 | 2.3549 | 26.884 | 10.1136 | 22.8212 | 26.2262 | 18.4763 |
| 2.4822 | 1.4 | 55000 | 2.3467 | 26.8635 | 10.2266 | 22.9161 | 26.2252 | 18.5847 |
| 2.46 | 1.53 | 60000 | 2.3393 | 26.8602 | 10.1785 | 22.8453 | 26.1917 | 18.548 |
| 2.4523 | 1.65 | 65000 | 2.3330 | 26.91 | 10.237 | 22.9309 | 26.2372 | 18.5154 |
| 2.4525 | 1.78 | 70000 | 2.3203 | 27.073 | 10.4317 | 23.1355 | 26.4528 | 18.5063 |
| 2.4566 | 1.91 | 75000 | 2.3109 | 27.3853 | 10.5413 | 23.3455 | 26.7408 | 18.5258 |
| 2.4234 | 2.03 | 80000 | 2.3103 | 27.0836 | 10.4857 | 23.0538 | 26.409 | 18.5326 |
| 2.3686 | 2.16 | 85000 | 2.2986 | 27.311 | 10.6038 | 23.3068 | 26.6636 | 18.4874 |
| 2.3758 | 2.29 | 90000 | 2.2969 | 27.3509 | 10.6502 | 23.2764 | 26.6832 | 18.5438 |
| 2.3777 | 2.42 | 95000 | 2.2907 | 27.39 | 10.5842 | 23.3601 | 26.7433 | 18.5444 |
| 2.3624 | 2.54 | 100000 | 2.2875 | 27.3717 | 10.6098 | 23.3326 | 26.7232 | 18.5521 |
| 2.3543 | 2.67 | 105000 | 2.2811 | 27.4188 | 10.6919 | 23.3022 | 26.7426 | 18.564 |
| 2.366 | 2.8 | 110000 | 2.2763 | 27.4872 | 10.7079 | 23.4135 | 26.829 | 18.5399 |
| 2.3565 | 2.93 | 115000 | 2.2757 | 27.4024 | 10.7065 | 23.3153 | 26.7336 | 18.5506 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AlekseyKorshuk/test | 727917a0f7f552af9562a799d8217b2615e633f5 | 2022-06-18T11:54:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"huggan",
"gan",
"license:mit"
] | text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/test | 2 | null | transformers | 25,449 | ---
tags:
- huggan
- gan
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# MyModelName
## Model description
Describe the model here (what it does, what it's used for, etc.)
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
## Generated Images
You can embed local or remote images using `![](...)`
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
``` |
alistvt/docalog | eed14b91bfebf37535752ac5d346b3e2934eae16 | 2022-04-10T19:25:14.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | alistvt | null | alistvt/docalog | 2 | null | transformers | 25,450 | Entry not found |
Wizounovziki/t5-base-devices-sum-ver1 | a59a0812fcc5d044c8c8cc3a27a8fd99a1fc21e6 | 2022-04-09T16:32:06.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Wizounovziki | null | Wizounovziki/t5-base-devices-sum-ver1 | 2 | null | transformers | 25,451 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-devices-sum-ver1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-devices-sum-ver1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0935
- Rouge1: 97.2294
- Rouge2: 80.1323
- Rougel: 97.245
- Rougelsum: 97.2763
- Gen Len: 4.9507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 186 | 0.2461 | 91.9436 | 71.232 | 91.9417 | 91.9585 | 4.6644 |
| No log | 2.0 | 372 | 0.1580 | 94.5247 | 76.1321 | 94.5044 | 94.5382 | 4.8953 |
| 0.488 | 3.0 | 558 | 0.1239 | 95.8673 | 78.1183 | 95.8862 | 95.8919 | 4.9102 |
| 0.488 | 4.0 | 744 | 0.1100 | 96.5746 | 78.9878 | 96.5848 | 96.5831 | 4.9102 |
| 0.488 | 5.0 | 930 | 0.1008 | 96.9074 | 79.5536 | 96.9143 | 96.9317 | 4.9291 |
| 0.1303 | 6.0 | 1116 | 0.0974 | 96.9274 | 79.6953 | 96.933 | 96.9473 | 4.9291 |
| 0.1303 | 7.0 | 1302 | 0.0969 | 96.8041 | 79.5073 | 96.817 | 96.8266 | 4.9271 |
| 0.1303 | 8.0 | 1488 | 0.0945 | 97.1496 | 79.9757 | 97.1529 | 97.1779 | 4.9534 |
| 0.089 | 9.0 | 1674 | 0.0944 | 97.253 | 80.1236 | 97.2619 | 97.2899 | 4.9595 |
| 0.089 | 10.0 | 1860 | 0.0935 | 97.2294 | 80.1323 | 97.245 | 97.2763 | 4.9507 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
UPF/DialoGPT-small-joshua | 5919a28ebec73d90d9500331a40954de03d3b76d | 2022-04-09T15:17:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | UPF | null | UPF/DialoGPT-small-joshua | 2 | null | transformers | 25,452 | Entry not found |
Splend1dchan/byt5-base-squad | 716aeca54d00be23772973705aeecfe06f785597 | 2022-04-10T17:16:02.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/byt5-base-squad | 2 | null | transformers | 25,453 | Entry not found |
Wizounovziki/t5-small-devices-sum-ver1 | 46c09eb2fdad73fd27ca44565e61d85efdce0820 | 2022-04-09T17:53:34.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Wizounovziki | null | Wizounovziki/t5-small-devices-sum-ver1 | 2 | null | transformers | 25,454 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-devices-sum-ver1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-devices-sum-ver1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2335
- Rouge1: 93.7171
- Rouge2: 73.3058
- Rougel: 93.7211
- Rougelsum: 93.689
- Gen Len: 4.7246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 185 | 0.6517 | 83.2503 | 55.7516 | 83.254 | 83.2722 | 4.4729 |
| No log | 2.0 | 370 | 0.4239 | 89.2246 | 65.7477 | 89.2223 | 89.2288 | 4.5575 |
| 1.0224 | 3.0 | 555 | 0.3459 | 91.0524 | 68.4783 | 91.0222 | 91.0312 | 4.6685 |
| 1.0224 | 4.0 | 740 | 0.3023 | 91.9741 | 70.1066 | 91.9886 | 91.9525 | 4.6549 |
| 1.0224 | 5.0 | 925 | 0.2797 | 92.667 | 71.3468 | 92.6706 | 92.6611 | 4.6969 |
| 0.3678 | 6.0 | 1110 | 0.2616 | 93.229 | 72.2805 | 93.222 | 93.1935 | 4.7179 |
| 0.3678 | 7.0 | 1295 | 0.2469 | 93.362 | 72.6985 | 93.3651 | 93.3294 | 4.7111 |
| 0.3678 | 8.0 | 1480 | 0.2401 | 93.5689 | 73.009 | 93.582 | 93.5377 | 4.7192 |
| 0.2902 | 9.0 | 1665 | 0.2350 | 93.7013 | 73.2685 | 93.7256 | 93.684 | 4.724 |
| 0.2902 | 10.0 | 1850 | 0.2335 | 93.7171 | 73.3058 | 93.7211 | 93.689 | 4.7246 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
masakhane/m2m100_418M_fr_bam_rel_news | b8e70da29c74d561b37f866d8134c278cbb1d579 | 2022-04-11T14:44:04.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_bam_rel_news | 2 | null | transformers | 25,455 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_bam_rel_news_ft | 20258c5a0305f7dec89f8c08a1adbb895c60f203 | 2022-04-11T15:12:37.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_bam_rel_news_ft | 2 | null | transformers | 25,456 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_fr_bam_rel_ft | 1d79daf14d694b314c66031ebcbee002e0688f34 | 2022-04-11T16:34:11.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_fr_bam_rel_ft | 2 | null | transformers | 25,457 | ---
license: afl-3.0
---
|
michaellutz/bert-finetuned-assertive-hillary | 739f97f720c7ea8f537e1ccd1dc882e5c2b18853 | 2022-04-16T07:06:14.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | michaellutz | null | michaellutz/bert-finetuned-assertive-hillary | 2 | null | transformers | 25,458 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-assertive-hillary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-assertive-hillary
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Wizounovziki/t5-small-devices-sum-ver2 | 8edc975ae70aab569de509cd2e9d39cd13325654 | 2022-04-10T01:20:36.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Wizounovziki | null | Wizounovziki/t5-small-devices-sum-ver2 | 2 | null | transformers | 25,459 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-devices-sum-ver2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-devices-sum-ver2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3679
- Rouge1: 90.6465
- Rouge2: 65.2833
- Rougel: 90.6707
- Rougelsum: 90.7313
- Gen Len: 4.4702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 91 | 1.0957 | 58.9566 | 33.4113 | 58.8004 | 58.8863 | 4.8308 |
| No log | 2.0 | 182 | 0.7017 | 78.9566 | 49.9716 | 78.9338 | 78.9643 | 4.3329 |
| No log | 3.0 | 273 | 0.5386 | 84.8786 | 56.9622 | 84.8204 | 84.9117 | 4.4577 |
| No log | 4.0 | 364 | 0.4693 | 87.9792 | 61.0779 | 87.8795 | 88.0098 | 4.4383 |
| No log | 5.0 | 455 | 0.4273 | 89.4667 | 63.1994 | 89.4169 | 89.5197 | 4.4743 |
| 1.0586 | 6.0 | 546 | 0.4002 | 89.6456 | 63.5041 | 89.6062 | 89.7042 | 4.4452 |
| 1.0586 | 7.0 | 637 | 0.3848 | 89.9993 | 64.2505 | 89.9775 | 90.0651 | 4.423 |
| 1.0586 | 8.0 | 728 | 0.3752 | 90.4249 | 64.819 | 90.4434 | 90.5111 | 4.4799 |
| 1.0586 | 9.0 | 819 | 0.3703 | 90.4689 | 65.0086 | 90.4954 | 90.5632 | 4.4632 |
| 1.0586 | 10.0 | 910 | 0.3679 | 90.6465 | 65.2833 | 90.6707 | 90.7313 | 4.4702 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
MEDT/ChatBot | 97bbb40eb18644f6d722a2c0b2262d39064851dd | 2022-04-10T14:53:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
] | conversational | false | MEDT | null | MEDT/ChatBot | 2 | null | transformers | 25,460 | ---
thumbnail: https://raw.githubusercontent.com/RuolinZheng08/twewy-discord-chatbot/main/gif-demo/icon.png
tags:
- conversational
license: mit
---
# DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 100 lines
for step in range(100):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total output length to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` |
DioLiu/distilroberta-base-Ctr2 | 3baa7c4fcf7ee2c24437dc6a03a57424d82a68fe | 2022-04-13T16:19:20.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DioLiu | null | DioLiu/distilroberta-base-Ctr2 | 2 | null | transformers | 25,461 | Entry not found |
brad1141/baseline_longformerv1 | b64c4177eb4713334825d469a99f8a2ac7625aef | 2022-04-10T13:01:30.000Z | [
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | brad1141 | null | brad1141/baseline_longformerv1 | 2 | null | transformers | 25,462 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: baseline_longformerv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_longformerv1
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7596
- Precision: 0.1333
- Recall: 0.15
- F1: 0.1400
- Accuracy: 0.1400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.8469 | 0.89 | 1 | 1.7596 | 0.1333 | 0.15 | 0.1400 | 0.1400 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
V3RX2000/xlm-roberta-base-finetuned-panx-fr | 9b367e963f58924b1c4f3b00f69102de6c2b46b6 | 2022-04-10T15:39:32.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | V3RX2000 | null | V3RX2000/xlm-roberta-base-finetuned-panx-fr | 2 | null | transformers | 25,463 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8354854938789199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2651
- F1: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
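Usage is not documented; a minimal named-entity-recognition sketch, assuming the usual token-classification pipeline, could be:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="V3RX2000/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Emmanuel Macron a visité Marseille avec des représentants de l'ONU."))
```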
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5954 | 1.0 | 191 | 0.3346 | 0.7975 |
| 0.2689 | 2.0 | 382 | 0.2900 | 0.8347 |
| 0.1821 | 3.0 | 573 | 0.2651 | 0.8355 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
V3RX2000/xlm-roberta-base-finetuned-panx-all | 8103b33b17e4abc806b759bad423d90d8d589ca7 | 2022-04-10T16:07:58.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | V3RX2000 | null | V3RX2000/xlm-roberta-base-finetuned-panx-all | 2 | null | transformers | 25,464 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1759
- F1: 0.8527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3038 | 1.0 | 835 | 0.1922 | 0.8065 |
| 0.1559 | 2.0 | 1670 | 0.1714 | 0.8422 |
| 0.1002 | 3.0 | 2505 | 0.1759 | 0.8527 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dennishe97/longformer-code-cl | 26d498cb051e0e8e320afd8ac7a2383fd6ea9566 | 2022-04-23T04:48:09.000Z | [
"pytorch",
"longformer",
"feature-extraction",
"transformers"
] | feature-extraction | false | dennishe97 | null | dennishe97/longformer-code-cl | 2 | null | transformers | 25,465 | Entry not found |
karthajee/fatima_coding_fake_news_22 | 7f6c8dd1e0137d8a32d4b5fd801808e4d8d3d87f | 2022-04-10T19:17:04.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | karthajee | null | karthajee/fatima_coding_fake_news_22 | 2 | null | transformers | 25,466 | ---
license: mit
---
Hello World!
This repo contains the binary weights and configuration of a pretrained BERT model fine-tuned on a training set built from 80% of the samples in the true.csv and fake.csv files of the fake news detection challenge, as part of the 2022 Fatima Fellowship application. |
yshAggarwal/finetuning-sentiment-model-3000-samples | f24f41a2f77bf7b5b5c8fc567ae5e509d5e17fe4 | 2022-04-13T13:43:51.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | yshAggarwal | null | yshAggarwal/finetuning-sentiment-model-3000-samples | 2 | null | transformers | 25,467 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9295
- Accuracy: 0.4568
- Precision: 0.3403
- Recall: 0.3408
- F1: 0.3364
- Classification Report Dict: {'0': {'precision': 0.02911392405063291, 'recall': 0.008808885484488702, 'f1-score': 0.013525433695971773, 'support': 2611}, '1': {'precision': 0.01141552511415525, 'recall': 0.037783375314861464, 'f1-score': 0.017533606078316773, 'support': 794}, '2': {'precision': 0.9802220680083276, 'recall': 0.9758203799654577, 'f1-score': 0.9780162714211529, 'support': 2895}, 'accuracy': 0.4568253968253968, 'macro avg': {'precision': 0.3402505057243719, 'recall': 0.34080421358826923, 'f1-score': 0.3363584370651471, 'support': 6300}, 'weighted avg': {'precision': 0.463940201511262, 'recall': 0.4568253968253968, 'f1-score': 0.4572370946620006, 'support': 6300}}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Classification Report Dict |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.2415 | 1.0 | 1838 | 3.9039 | 0.4595 | 0.3445 | 0.3481 | 0.3395 | {'0': {'precision': 0.048302872062663184, 'recall': 0.014170815779394868, 'f1-score': 0.021912940479715724, 'support': 2611}, '1': {'precision': 0.017884322678843226, 'recall': 0.05919395465994962, 'f1-score': 0.027469316189362943, 'support': 794}, '2': {'precision': 0.9673090158293186, 'recall': 0.9709844559585492, 'f1-score': 0.9691432511635926, 'support': 2895}, 'accuracy': 0.4595238095238095, 'macro avg': {'precision': 0.34449873685694166, 'recall': 0.3481164087992979, 'f1-score': 0.3395085026108904, 'support': 6300}, 'weighted avg': {'precision': 0.46677437333150673, 'recall': 0.4595238095238095, 'f1-score': 0.45788810107388767, 'support': 6300}} |
| 0.106 | 2.0 | 3676 | 4.4418 | 0.4548 | 0.3412 | 0.3441 | 0.3377 | {'0': {'precision': 0.01937984496124031, 'recall': 0.005744925315970892, 'f1-score': 0.008862629246676513, 'support': 2611}, '1': {'precision': 0.01713221601489758, 'recall': 0.05793450881612091, 'f1-score': 0.026444380569129056, 'support': 794}, '2': {'precision': 0.9869764167546639, 'recall': 0.968566493955095, 'f1-score': 0.9776847977684798, 'support': 2895}, 'accuracy': 0.45476190476190476, 'macro avg': {'precision': 0.34116282591026725, 'recall': 0.34408197602906226, 'f1-score': 0.33766393586142845, 'support': 6300}, 'weighted avg': {'precision': 0.46373023511339345, 'recall': 0.45476190476190476, 'f1-score': 0.4562753416943984, 'support': 6300}} |
| 0.0418 | 3.0 | 5514 | 4.5002 | 0.4568 | 0.3404 | 0.3420 | 0.3368 | {'0': {'precision': 0.02802547770700637, 'recall': 0.008425890463423975, 'f1-score': 0.012956419316843343, 'support': 2611}, '1': {'precision': 0.012898330804248861, 'recall': 0.042821158690176324, 'f1-score': 0.019825072886297375, 'support': 794}, '2': {'precision': 0.980201458839875, 'recall': 0.9747841105354059, 'f1-score': 0.9774852788361621, 'support': 2895}, 'accuracy': 0.4568253968253968, 'macro avg': {'precision': 0.3403750891170434, 'recall': 0.34201038656300203, 'f1-score': 0.33675559034643426, 'support': 6300}, 'weighted avg': {'precision': 0.46366651115761987, 'recall': 0.4568253968253968, 'f1-score': 0.4570460636410615, 'support': 6300}} |
| 0.0253 | 4.0 | 7352 | 4.9295 | 0.4568 | 0.3403 | 0.3408 | 0.3364 | {'0': {'precision': 0.02911392405063291, 'recall': 0.008808885484488702, 'f1-score': 0.013525433695971773, 'support': 2611}, '1': {'precision': 0.01141552511415525, 'recall': 0.037783375314861464, 'f1-score': 0.017533606078316773, 'support': 794}, '2': {'precision': 0.9802220680083276, 'recall': 0.9758203799654577, 'f1-score': 0.9780162714211529, 'support': 2895}, 'accuracy': 0.4568253968253968, 'macro avg': {'precision': 0.3402505057243719, 'recall': 0.34080421358826923, 'f1-score': 0.3363584370651471, 'support': 6300}, 'weighted avg': {'precision': 0.463940201511262, 'recall': 0.4568253968253968, 'f1-score': 0.4572370946620006, 'support': 6300}} |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1
|
cj-mills/pegasus-samsum | ef55b6a646f0f2954e20f97fec440926645cc9aa | 2022-04-10T20:36:27.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cj-mills | null | cj-mills/pegasus-samsum | 2 | null | transformers | 25,468 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4875
## Model description
More information needed
## Intended uses & limitations
More information needed
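Usage is not documented; since SAMSum is a dialogue-summarization corpus, a minimal sketch could feed a short chat transcript to the summarization pipeline:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="cj-mills/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Great, I'll book a table for two."
)
print(summarizer(dialogue, max_length=32, min_length=5, do_sample=False))
```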
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7431 | 0.54 | 500 | 1.4875 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.10.3
|
osanseviero/distilbert-base-uncased-finetuned-clinc | 02d311c9ce6c27a6ad704642d2a5cd69bd06b7e8 | 2022-04-15T19:59:25.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | osanseviero | null | osanseviero/distilbert-base-uncased-finetuned-clinc | 2 | null | transformers | 25,469 | Entry not found |
Pavithra/codeparrot-ds-500sample-gpt-neo-10epoch | 24e22ffc5f591078d461f8080bdec6d67c32a01c | 2022-04-12T02:48:06.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Pavithra | null | Pavithra/codeparrot-ds-500sample-gpt-neo-10epoch | 2 | null | transformers | 25,470 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-500sample-gpt-neo-10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-500sample-gpt-neo-10epoch
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5456
- eval_runtime: 87.6603
- eval_samples_per_second: 149.817
- eval_steps_per_second: 4.689
- epoch: 2.97
- step: 16000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gary109/wav2vec2-base-mirst500 | 8ebe5404cfaa0fdea96ba4c968d4df45dd13fbae | 2022-04-12T05:52:24.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:mir_st500",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | gary109 | null | gary109/wav2vec2-base-mirst500 | 2 | null | transformers | 25,471 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- mir_st500
metrics:
- accuracy
model-index:
- name: wav2vec2-base-mirst500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-mirst500
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the /workspace/datasets/datasets/MIR_ST500/MIR_ST500_AUDIO_CLASSIFICATION.py dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8678
- Accuracy: 0.7017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1999 | 1.0 | 1304 | 1.1029 | 0.5877 |
| 1.0779 | 2.0 | 2608 | 0.9455 | 0.6555 |
| 0.9775 | 3.0 | 3912 | 0.9670 | 0.6523 |
| 0.9542 | 4.0 | 5216 | 0.8810 | 0.6946 |
| 0.9403 | 5.0 | 6520 | 0.8678 | 0.7017 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1+cu102
- Datasets 2.0.0
- Tokenizers 0.10.3
|
Saitomar/TrOCR-Vit-Roberta-bn | 61b443c4b20e9185a85a7f64ce7b8baf580ebb8f | 2022-04-11T08:41:50.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | Saitomar | null | Saitomar/TrOCR-Vit-Roberta-bn | 2 | null | transformers | 25,472 | Entry not found |
Saitomar/TrOCR-Vit-Roberta-bn-2 | ab8e6a55b9b99b3723974859b795e05185642236 | 2022-04-11T10:13:50.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | Saitomar | null | Saitomar/TrOCR-Vit-Roberta-bn-2 | 2 | null | transformers | 25,473 | Entry not found |
optimum/MiniLMv2-L12-H384-finetuned-clinc | efe14b87e334a9e4f8f2c2481fadaf106750a1c6 | 2022-04-11T10:47:40.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | optimum | null | optimum/MiniLMv2-L12-H384-finetuned-clinc | 2 | null | transformers | 25,474 | ---
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9319354838709677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L12-H384-distilled-from-RoBERTa-Large-finetuned-clinc
This model is a fine-tuned version of [nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L12-H384-distilled-from-RoBERTa-Large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5252
- Accuracy: 0.9319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 60 | 4.6555 | 0.1887 |
| No log | 2.0 | 120 | 3.8771 | 0.4784 |
| No log | 3.0 | 180 | 3.2507 | 0.7352 |
| 3.9668 | 4.0 | 240 | 2.7445 | 0.8365 |
| 3.9668 | 5.0 | 300 | 2.3475 | 0.8865 |
| 3.9668 | 6.0 | 360 | 2.0370 | 0.8926 |
| 3.9668 | 7.0 | 420 | 1.8099 | 0.9145 |
| 2.0924 | 8.0 | 480 | 1.6433 | 0.9190 |
| 2.0924 | 9.0 | 540 | 1.5563 | 0.9281 |
| 2.0924 | 10.0 | 600 | 1.5252 | 0.9319 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
maximedb/latexical | dbf8b35d515b796b7d0ae4c286f90a860702d0e4 | 2022-04-11T12:07:10.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | maximedb | null | maximedb/latexical | 2 | null | transformers | 25,475 | Entry not found |
ZZ99/tapt_nbme_deberta_v3_base | 84cf3925da301455c7035d8372f5284850145a50 | 2022-04-19T21:18:00.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ZZ99 | null | ZZ99/tapt_nbme_deberta_v3_base | 2 | null | transformers | 25,476 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-mlm
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0870
- Accuracy: 0.7576
## Model description
More information needed
## Intended uses & limitations
More information needed
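Usage is not documented; since this is a masked-language model (task-adaptive pretraining on clinical patient notes, judging by the NBME name), a minimal fill-mask sketch could be:
```python
from transformers import pipeline

# DeBERTa-v3 tokenizers require the sentencepiece package to be installed.
fill = pipeline("fill-mask", model="ZZ99/tapt_nbme_deberta_v3_base")

# Query the mask token from the tokenizer rather than hard-coding it.
text = f"The patient reports chest {fill.tokenizer.mask_token} radiating to the left arm."
print(fill(text, top_k=5))
```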
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
rinapch/distilbert-media-bias | 24ff8adc2b536f92164f9c813677f70be2c2fe3a | 2022-04-11T15:36:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | false | rinapch | null | rinapch/distilbert-media-bias | 2 | null | transformers | 25,477 | ---
license: cc-by-sa-4.0
---
|
Chikashi/t5-small-finetuned-wikihow_3epoch_b8_lr3e-5 | 7c52ad337d6471648dba745daebc2d670d452d26 | 2022-04-12T01:50:53.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wikihow",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-wikihow_3epoch_b8_lr3e-5 | 2 | null | transformers | 25,478 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch_b8_lr3e-5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 25.9411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch_b8_lr3e-5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4836
- Rouge1: 25.9411
- Rouge2: 9.226
- Rougel: 21.9087
- Rougelsum: 25.2863
- Gen Len: 18.4076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.912 | 0.25 | 5000 | 2.6285 | 23.6659 | 7.8535 | 19.9837 | 22.9884 | 18.3867 |
| 2.8115 | 0.51 | 10000 | 2.5820 | 24.7979 | 8.4888 | 20.8719 | 24.1321 | 18.3292 |
| 2.767 | 0.76 | 15000 | 2.5555 | 25.0857 | 8.6437 | 21.149 | 24.4256 | 18.2981 |
| 2.742 | 1.02 | 20000 | 2.5330 | 25.3431 | 8.8393 | 21.425 | 24.7032 | 18.3749 |
| 2.7092 | 1.27 | 25000 | 2.5203 | 25.5338 | 8.9281 | 21.5378 | 24.9045 | 18.3399 |
| 2.6989 | 1.53 | 30000 | 2.5065 | 25.4792 | 8.9745 | 21.4941 | 24.8458 | 18.4565 |
| 2.6894 | 1.78 | 35000 | 2.5018 | 25.6815 | 9.1218 | 21.6958 | 25.0557 | 18.406 |
| 2.6897 | 2.03 | 40000 | 2.4944 | 25.8241 | 9.2127 | 21.8205 | 25.1801 | 18.4228 |
| 2.6664 | 2.29 | 45000 | 2.4891 | 25.8241 | 9.1662 | 21.7807 | 25.1615 | 18.4258 |
| 2.6677 | 2.54 | 50000 | 2.4855 | 25.7435 | 9.145 | 21.765 | 25.0858 | 18.4329 |
| 2.6631 | 2.8 | 55000 | 2.4836 | 25.9411 | 9.226 | 21.9087 | 25.2863 | 18.4076 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
irmgnrtop/xlm-roberta-base-finetuned-mlm-accelerate | c2ba8bb419bf4f87614881a25811613ceaf95c4f | 2022-04-11T19:51:40.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | irmgnrtop | null | irmgnrtop/xlm-roberta-base-finetuned-mlm-accelerate | 2 | null | transformers | 25,479 | Entry not found |
adasnew/t5-small-xsum | 58fcedfd601516b10232113584c1a949b120226d | 2022-04-11T22:35:12.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | adasnew | null | adasnew/t5-small-xsum | 2 | null | transformers | 25,480 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3953
## Model description
More information needed
## Intended uses & limitations
More information needed
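Usage is not documented on this card. A minimal sketch that calls the tokenizer and model directly is shown below; the `"summarize: "` prefix and the generation settings are illustrative assumptions rather than values from this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("adasnew/t5-small-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("adasnew/t5-small-xsum")

document = (
    "The council confirmed on Tuesday that the bridge will close for six weeks "
    "of repairs starting next month, with buses diverted via the ring road."
)

# Prefix and beam-search settings are assumptions, not documented choices.
inputs = tokenizer("summarize: " + document, return_tensors="pt", truncation=True)
with torch.no_grad():
    summary_ids = model.generate(**inputs, max_length=40, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```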
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8641 | 0.04 | 500 | 2.6202 |
| 2.7466 | 0.08 | 1000 | 2.5660 |
| 2.8767 | 0.12 | 1500 | 2.5319 |
| 2.7099 | 0.16 | 2000 | 2.5107 |
| 2.7752 | 0.2 | 2500 | 2.4922 |
| 2.6037 | 0.24 | 3000 | 2.4800 |
| 2.8236 | 0.27 | 3500 | 2.4677 |
| 2.7089 | 0.31 | 4000 | 2.4581 |
| 2.7299 | 0.35 | 4500 | 2.4498 |
| 2.7498 | 0.39 | 5000 | 2.4420 |
| 2.6186 | 0.43 | 5500 | 2.4346 |
| 2.7817 | 0.47 | 6000 | 2.4288 |
| 2.5559 | 0.51 | 6500 | 2.4239 |
| 2.6725 | 0.55 | 7000 | 2.4186 |
| 2.6316 | 0.59 | 7500 | 2.4149 |
| 2.5561 | 0.63 | 8000 | 2.4115 |
| 2.5708 | 0.67 | 8500 | 2.4097 |
| 2.5861 | 0.71 | 9000 | 2.4052 |
| 2.6363 | 0.74 | 9500 | 2.4024 |
| 2.7435 | 0.78 | 10000 | 2.4003 |
| 2.7258 | 0.82 | 10500 | 2.3992 |
| 2.6113 | 0.86 | 11000 | 2.3983 |
| 2.6006 | 0.9 | 11500 | 2.3972 |
| 2.5684 | 0.94 | 12000 | 2.3960 |
| 2.6181 | 0.98 | 12500 | 2.3953 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Danastos/triviaqa_bert_el | d1ec76b092fcf055f9c032c23414dc0f8b43a6e8 | 2022-04-12T10:50:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:Danastos/triviaqa_el_custom",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Danastos | null | Danastos/triviaqa_bert_el | 2 | null | transformers | 25,481 | ---
tags:
- generated_from_trainer
datasets:
- Danastos/triviaqa_el_custom
model-index:
- name: triviaqa_bert_el
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# triviaqa_bert_el
This model is a fine-tuned version of [Danastos/triviaqa_bert_el](https://huggingface.co/Danastos/triviaqa_bert_el) on the Danastos/triviaqa_el_custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
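Usage is not documented on this card. Since the model was trained on a Greek TriviaQA variant, a minimal extractive question-answering sketch is shown below; the question/context pair is made up for illustration and does not come from the training data.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Danastos/triviaqa_bert_el")

# Illustrative Greek example: "What is the capital of France?" /
# "The capital of France is Paris."
result = qa(
    question="Ποια είναι η πρωτεύουσα της Γαλλίας;",
    context="Η πρωτεύουσα της Γαλλίας είναι το Παρίσι.",
)
print(result["answer"], result["score"])
```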
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mT0/mt0_xl_default_mixture_ckpt_1025000 | ea57ff7dd3e41c698cfb0ebe65c3333f2eeff285 | 2022-04-11T21:14:11.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mT0 | null | mT0/mt0_xl_default_mixture_ckpt_1025000 | 2 | null | transformers | 25,482 | Entry not found |
Kuray107/ls-timit-wsj0-100percent-supervised-meta | 18a726a218ac0c06b2181e0d780f7f88588d2bfd | 2022-04-12T11:19:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Kuray107 | null | Kuray107/ls-timit-wsj0-100percent-supervised-meta | 2 | null | transformers | 25,483 | ---
tags:
- generated_from_trainer
model-index:
- name: ls-timit-wsj0-100percent-supervised-meta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ls-timit-wsj0-100percent-supervised-meta
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0531
- Wer: 0.0214
## Model description
More information needed
## Intended uses & limitations
More information needed
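Usage is not documented on this card. Assuming the repository ships a processor alongside the fine-tuned weights, a minimal transcription sketch looks like this; `audio.wav` is a placeholder for any 16 kHz English recording.
```python
from transformers import pipeline

# "audio.wav" is a placeholder path; the 16 kHz sampling-rate requirement is the
# usual wav2vec2 assumption and is not stated explicitly on this card.
asr = pipeline("automatic-speech-recognition", model="Kuray107/ls-timit-wsj0-100percent-supervised-meta")
print(asr("audio.wav")["text"])
```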
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1618 | 4.57 | 1000 | 0.0500 | 0.0432 |
| 0.0489 | 9.13 | 2000 | 0.0535 | 0.0291 |
| 0.0306 | 13.7 | 3000 | 0.0478 | 0.0275 |
| 0.0231 | 18.26 | 4000 | 0.0531 | 0.0214 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
chiba/electra-small-japanese-discriminator_test | 565a76b32dc0349c4a96a8882c613522f76897a4 | 2022-04-12T02:46:41.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | chiba | null | chiba/electra-small-japanese-discriminator_test | 2 | null | transformers | 25,484 | Entry not found |
Splend1dchan/wav2vec2-large-100h-lv60-self | 42ff3ad24ac9144911b947e82dca60f651cad37d | 2022-05-30T04:39:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.11430",
"arxiv:2006.11477",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-100h-lv60-self | 2 | null | transformers | 25,485 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-large-100h-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
args: en
metrics:
- name: Test WER
type: wer
value: None
---
# Wav2Vec2-Large-100h-Lv60 + Self-Training
# This is a direct state_dict transfer from fairseq to Hugging Face; the weights are identical
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model was pretrained and fine-tuned on 100 hours of Libri-Light and Librispeech 16kHz sampled speech audio. The model was trained with the [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
They show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("Splend1dchan/wav2vec2-large-100h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("Splend1dchan/wav2vec2-large-100h-lv60-self")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **Splend1dchan/wav2vec2-large-100h-lv60-self** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("Splend1dchan/wav2vec2-large-100h-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("Splend1dchan/wav2vec2-large-100h-lv60-self")
def map_to_pred(batch):
    # process one 16 kHz example at a time; the processor also returns the attention mask
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    # batch_decode returns a list; keep the single decoded string for this example
    batch["transcription"] = transcription[0]
    return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
<!-- *Result (WER)*:
| "clean" | "other" |
|---|---|
| untested | untested | --> |
jurader/autotrain-livedoor_news-732022289 | a0b883339396491d26445490e28e1b7317bec603 | 2022-04-12T08:07:57.000Z | [
"pytorch",
"bert",
"text-classification",
"ja",
"dataset:jurader/autotrain-data-livedoor_news",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | jurader | null | jurader/autotrain-livedoor_news-732022289 | 2 | 1 | transformers | 25,486 | ---
tags: autotrain
language: ja
widget:
- text: "I love AutoTrain 🤗"
datasets:
- jurader/autotrain-data-livedoor_news
co2_eq_emissions: 0.02886635131127639
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 732022289
- CO2 Emissions (in grams): 0.02886635131127639
## Validation Metrics
- Loss: 0.19849611818790436
- Accuracy: 0.9471186440677966
- Macro F1: 0.9441816841379956
- Micro F1: 0.9471186440677966
- Weighted F1: 0.9470801715002611
- Macro Precision: 0.945983665608131
- Micro Precision: 0.9471186440677966
- Weighted Precision: 0.9475574732458715
- Macro Recall: 0.9429694962141204
- Micro Recall: 0.9471186440677966
- Weighted Recall: 0.9471186440677966
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/jurader/autotrain-livedoor_news-732022289
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jurader/autotrain-livedoor_news-732022289", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jurader/autotrain-livedoor_news-732022289", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
lewtun/roberta-large-finetuned-clinc-1 | 66b8e0c066b2a6bbecf6ccb351d12071b6d125ee | 2022-04-12T09:43:04.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-1 | 2 | null | transformers | 25,487 | Entry not found |
satish860/distilbert-base-uncased-finetuned-emotion | 9d6e9565f617690dc7a3748a70268316c9d9d3a8 | 2022-04-23T05:21:30.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | satish860 | null | satish860/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 25,488 | Entry not found |
lewtun/roberta-large-finetuned-clinc-12 | 5d96325ba51cd64813627f80a921d3ff41c65206 | 2022-04-12T10:17:16.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-12 | 2 | null | transformers | 25,489 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc-12
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9764516129032258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc-12
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1429
- Accuracy: 0.9765
## Model description
More information needed
## Intended uses & limitations
More information needed
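Usage is not documented on this card. A minimal intent-classification sketch is shown below; the example utterance is made up, and the returned label string depends on the `id2label` mapping stored in the checkpoint's config.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lewtun/roberta-large-finetuned-clinc-12")

# Made-up banking-style utterance; CLINC150 covers many such intents.
print(classifier("Can you transfer $100 from my checking to my savings account?"))
```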
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8662 | 1.0 | 954 | 0.3441 | 0.9339 |
| 0.158 | 2.0 | 1908 | 0.1498 | 0.9742 |
| 0.0469 | 3.0 | 2862 | 0.1429 | 0.9765 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
chiba/electra-small-japanese-generator_same_prepare | 2a819039ea83a96b85b50d155673bba9f63dbb29 | 2022-04-13T06:55:13.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | chiba | null | chiba/electra-small-japanese-generator_same_prepare | 2 | null | transformers | 25,490 | Entry not found |
lewtun/roberta-large-finetuned-clinc-123 | 4fff4a85a5faf88941ce20d752234c2139e1619e | 2022-04-12T12:05:51.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-123 | 2 | null | transformers | 25,491 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc-123
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.925483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc-123
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7226
- Accuracy: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0576 | 1.0 | 120 | 5.0269 | 0.0068 |
| 4.5101 | 2.0 | 240 | 2.9324 | 0.7158 |
| 1.9757 | 3.0 | 360 | 0.7226 | 0.9255 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
lewtun/roberta-large-finetuned-clinc-1234 | 698c5ed5a99e57e47265b43dbe672e8f6b0934af | 2022-04-12T12:28:09.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-1234 | 2 | null | transformers | 25,492 | Entry not found |
lewtun/roberta-large-finetuned-clinc-12345 | bcae84b4d6c81b3d70b71694d90bf8a6549e1a1a | 2022-04-12T12:47:44.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-12345 | 2 | null | transformers | 25,493 | Entry not found |
lewtun/roberta-large-finetuned-clinc-123456 | 8d8edb8a585b9252e2f8ebc48a9d848415856818 | 2022-04-12T13:10:11.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-123456 | 2 | null | transformers | 25,494 | Entry not found |
lewtun/roberta-large-finetuned-clinc-1234567 | 7bf30ef08d20466d6fcd7acce7a45243e30558fc | 2022-04-12T13:23:48.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-1234567 | 2 | null | transformers | 25,495 | Entry not found |
lewtun/roberta-large-finetuned-clinc-314 | 38662ea5266b6759fcaf8f51d3e87ee1c1ebd4cd | 2022-04-12T15:02:31.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-314 | 2 | null | transformers | 25,496 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc-314
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.932258064516129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc-314
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7983
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0674 | 1.0 | 120 | 5.0406 | 0.0061 |
| 4.566 | 2.0 | 240 | 3.0712 | 0.7316 |
| 2.1553 | 3.0 | 360 | 0.7983 | 0.9323 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
lewtun/roberta-large-finetuned-clinc-3141 | bf9a8a004b60e1a2a4f578d9d47a44d7741caacc | 2022-04-12T15:33:05.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | lewtun | null | lewtun/roberta-large-finetuned-clinc-3141 | 2 | null | transformers | 25,497 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc-3141
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9738709677419355
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-clinc-3141
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1533
- Accuracy: 0.9739
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6064 | 1.0 | 954 | 0.3041 | 0.9368 |
| 0.1392 | 2.0 | 1908 | 0.1590 | 0.9723 |
| 0.044 | 3.0 | 2862 | 0.1533 | 0.9739 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
xieb0001/distilbert-base-uncased-finetuned-cola | 6b38e02f18b62f2a7c8d4b5b660e6fe4233f6600 | 2022-04-17T17:29:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | xieb0001 | null | xieb0001/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 25,498 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5504031254980248
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8208
- Matthews Correlation: 0.5504
## Model description
More information needed
## Intended uses & limitations
More information needed
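Usage is not documented on this card. A minimal acceptability-scoring sketch is shown below; the label names come from the checkpoint's config (CoLA fine-tunes often expose `LABEL_0`/`LABEL_1` for unacceptable/acceptable, but that mapping is an assumption here).
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="xieb0001/distilbert-base-uncased-finetuned-cola")

# One grammatical and one scrambled sentence, both made up for illustration.
for sentence in ["The book was read by the whole class.", "Book the class whole read was the."]:
    print(sentence, classifier(sentence))
```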
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5298 | 1.0 | 535 | 0.5310 | 0.4254 |
| 0.3522 | 2.0 | 1070 | 0.4959 | 0.5339 |
| 0.2358 | 3.0 | 1605 | 0.6418 | 0.5171 |
| 0.1741 | 4.0 | 2140 | 0.7327 | 0.5472 |
| 0.1273 | 5.0 | 2675 | 0.8208 | 0.5504 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kabelomalapane/test_model1.2_update | 68ddccf29b7bf66274d4a25399124dca96bf9d70 | 2022-04-13T18:04:46.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/test_model1.2_update | 2 | null | transformers | 25,499 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: test_model1.2_update
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_model1.2_update
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-mul-en](https://huggingface.co/Helsinki-NLP/opus-mt-mul-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6296
- Bleu: 4.0505
## Model description
More information needed
## Intended uses & limitations
More information needed
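Usage is not documented on this card, and the source language(s) this fine-tune of `opus-mt-mul-en` targets are not stated. A minimal translation sketch is shown below; the Zulu greeting is only an illustrative guess at a plausible input.
```python
from transformers import pipeline

# The input language is an assumption -- the card does not say which languages the
# fine-tuned model was trained to translate into English.
translator = pipeline("translation", model="kabelomalapane/test_model1.2_update")
print(translator("Sawubona, unjani?")[0]["translation_text"])
```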
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|