modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
obokkkk/wav2vec2-base-960h-finetuned_common_voice3 | 2b1027d2b574296631e619ce5d0393a4fa6fc10d | 2022-04-29T00:37:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | obokkkk | null | obokkkk/wav2vec2-base-960h-finetuned_common_voice3 | 1 | null | transformers | 31,500 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-960h-finetuned_common_voice3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h-finetuned_common_voice3
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows this list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
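As a rough illustration only (not part of the original card), the values above map onto the standard `transformers.TrainingArguments` API roughly as follows; the effective batch size is train_batch_size × gradient_accumulation_steps = 16 × 64 = 1024:
```python
from transformers import TrainingArguments

# Hedged sketch of the configuration implied by the hyperparameter list above.
# The optimizer (AdamW with betas=(0.9, 0.999) and eps=1e-8) matches the Trainer default.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-960h-finetuned_common_voice3",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=64,  # effective train batch size: 16 * 64 = 1024
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```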
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bdickson/electra-small-discriminator-finetuned-squad-finetuned-squad | 0704c5b8a7608ac57442d0c4f66c109728047eac | 2022-04-28T06:40:32.000Z | [
"pytorch",
"electra",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | bdickson | null | bdickson/electra-small-discriminator-finetuned-squad-finetuned-squad | 1 | null | transformers | 31,501 | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-small-discriminator-finetuned-squad-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-squad-finetuned-squad
This model is a fine-tuned version of [bdickson/electra-small-discriminator-finetuned-squad](https://huggingface.co/bdickson/electra-small-discriminator-finetuned-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
inhee/mbart-large-cc25-finetuned-ko-to-en2 | 2d63541c51153a7225c92b92c53508a7e08faa9f | 2022-04-28T22:55:27.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | inhee | null | inhee/mbart-large-cc25-finetuned-ko-to-en2 | 1 | null | transformers | 31,502 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-large-cc25-finetuned-ko-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-ko-to-en
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9388
- Bleu: 20.301
- Gen Len: 114.7908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|
| 2.253 | 1.0 | 664 | 1.1693 | 20.073 | 5.8056 |
| 1.1747 | 2.0 | 1328 | 0.9898 | 25.8761 | 7.1737 |
| 0.8827 | 3.0 | 1992 | 0.9286 | 25.4729 | 12.5726 |
| 0.5698 | 4.0 | 2656 | 0.9299 | 18.5817 | 33.1697 |
| 0.4985 | 5.0 | 3320 | 0.9388 | 20.301 | 114.7908 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Hyperspace/DialoGPT-small-Hyperdrive | ef24779e86e0fd9bd09bf6a458526055df09e633 | 2022-04-28T16:09:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Hyperspace | null | Hyperspace/DialoGPT-small-Hyperdrive | 1 | null | transformers | 31,503 | ---
tags:
- conversational
---
# Hyperdrive DialoGPT Model |
PSW/mixed_sim_seed1 | f7de62fe0c2a747681cc009f06124e901f88c7c0 | 2022-04-28T07:22:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim_seed1 | 1 | null | transformers | 31,504 | Entry not found |
hyerin/m2m100_418M-finetuned-en-to-ko | 832881556825c888ef7c1fa0596e3ce822202540 | 2022-04-29T09:40:12.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | hyerin | null | hyerin/m2m100_418M-finetuned-en-to-ko | 1 | null | transformers | 31,505 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: m2m100_418M-finetuned-en-to-ko
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-en-to-ko
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the None dataset.
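As a hedged usage sketch (not part of the original card), the checkpoint can presumably be driven like any M2M100 translation model; the language codes and example sentence below are assumptions:
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Assumption: the fine-tuned checkpoint keeps the stock M2M100 tokenizer and
# translation interface for English-to-Korean.
model_name = "hyerin/m2m100_418M-finetuned-en-to-ko"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "en"
encoded = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("ko"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```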
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.98 | 36 | 1.9465 | 6.0644 | 21.3279 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PSW/mixed_sim_seed27 | 8403865050b4afa277ee551645e8dca1c2848ff0 | 2022-04-28T08:10:55.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim_seed27 | 1 | null | transformers | 31,506 | Entry not found |
PSW/mixed_sim_seed42 | 56bbbabd58bb09d35c7e56be4619449a3abdcc5b | 2022-04-28T08:58:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim_seed42 | 1 | null | transformers | 31,507 | Entry not found |
MuhammadAhmad/question-model | abfe07839335b6efba4415b8f658f4b265775168 | 2022-04-28T09:06:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | MuhammadAhmad | null | MuhammadAhmad/question-model | 1 | null | transformers | 31,508 | Entry not found |
lilitket/20220428-094209 | 3a2a36798a56df8d3c6ae87bf759c831d2f1a9a7 | 2022-05-01T16:44:36.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220428-094209 | 1 | null | transformers | 31,509 | Entry not found |
PSW/mixed_sim2_seed1 | d1aa4a34ae6e6e5d81a9a0b9a30de7e0047f59f2 | 2022-04-28T10:01:18.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim2_seed1 | 1 | null | transformers | 31,510 | Entry not found |
PSW/mixed_sim2_seed27 | dfabc97c288ca10b30bb58adebc046eb04c92c13 | 2022-04-28T10:50:12.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim2_seed27 | 1 | null | transformers | 31,511 | Entry not found |
Barkavi/totto-t5-base-bleurt-121K | cd572d6f34a448befb2fb197d67abb73ea3d109a | 2022-04-28T17:41:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Barkavi | null | Barkavi/totto-t5-base-bleurt-121K | 1 | null | transformers | 31,512 | Entry not found |
PSW/mixed_sim2_seed42 | 5d10e4f4d6df87debb449f51a49c02b924e7b39d | 2022-04-28T11:38:09.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/mixed_sim2_seed42 | 1 | null | transformers | 31,513 | Entry not found |
asahi417/tner-roberta-large-tweet-st | a8760e9ebd057f26f1cc66404167ed87f22928fa | 2022-04-28T12:37:50.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | asahi417 | null | asahi417/tner-roberta-large-tweet-st | 1 | null | transformers | 31,514 | Entry not found |
Azuris/DialoGPT-medium-ekidona | 70017232770e8d8971c495937a692f3e20540f47 | 2022-04-28T14:36:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Azuris | null | Azuris/DialoGPT-medium-ekidona | 1 | null | transformers | 31,515 | ---
tags:
- conversational
---
# Echidona DialoGPT-Medium Model |
princeton-nlp/efficient_mlm_m0.60 | 8bcc7f964620932f5d8f84f96290f46a5a9c7c8f | 2022-04-28T18:58:03.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"transformers",
"autotrain_compatible"
] | fill-mask | false | princeton-nlp | null | princeton-nlp/efficient_mlm_m0.60 | 1 | null | transformers | 31,516 | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre-layer normalization, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
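# Illustrative follow-up (an assumption, not from the original card): once the
# repo's module is importable, the checkpoint can presumably be loaded as usual.
model = RobertaForMaskedLM.from_pretrained("princeton-nlp/efficient_mlm_m0.60")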
``` |
princeton-nlp/efficient_mlm_m0.80 | 890fc0c4ebd4b04f2e12a6338f0f7be09d2d3e17 | 2022-04-28T18:57:52.000Z | [
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.08005",
"transformers",
"autotrain_compatible"
] | fill-mask | false | princeton-nlp | null | princeton-nlp/efficient_mlm_m0.80 | 1 | null | transformers | 31,517 | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre-layer normalization, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
123tarunanand/albert-xlarge-finetuned | a6f1db9f3f3c9b555e01701d87562f9e457919ae | 2022-04-28T15:34:39.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | 123tarunanand | null | 123tarunanand/albert-xlarge-finetuned | 1 | null | transformers | 31,518 | ### Model
**[`albert-xlarge-v2`](https://huggingface.co/albert-xlarge-v2)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4× NVIDIA GeForce RTX 2080 Ti (11 GB each)
```bash
BASE_MODEL=albert-xlarge-v2
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 3 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 814 \
--gradient_accumulation_steps 4 \
--fp16 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 84.41842836688285 |
| f1 | 87.4628460501696 |
| total | 11873.0 |
| HasAns_exact | 80.68488529014844 |
| HasAns_f1 | 86.78245127423482 |
| HasAns_total | 5928.0 |
| NoAns_exact | 88.1412952060555 |
| NoAns_f1 | 88.1412952060555 |
| NoAns_total | 5945.0 |
| best_exact | 84.41842836688285 |
| best_exact_thresh | 0.0 |
| best_f1 | 87.46284605016956 |
| best_f1_thresh | 0.0 |
### Usage
See the [huggingface documentation](https://huggingface.co/transformers/model_doc/albert.html#albertforquestionanswering). Training on `SQuAD V2` allows the model to score whether a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
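A more self-contained sketch of the same idea (not from the original card), assuming the checkpoint loads with the standard `AutoTokenizer` and `AutoModelForQuestionAnswering` classes; the question/context strings are illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "123tarunanand/albert-xlarge-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Who wrote the play?"
context = "The play was written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Combine start/end log-probabilities into span scores, as in the snippet above;
# the score of the ([CLS], [CLS]) span acts as the "no answer" score.
start_log_probs = outputs.start_logits.softmax(dim=1).log()
end_log_probs = outputs.end_logits.softmax(dim=1).log()
span_scores = start_log_probs[:, :, None] + end_log_probs[:, None, :]
ignore_score = span_scores[:, 0, 0]
```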
|
davidenam/xlm-roberta-base-finetuned-panx-de | 81d2f4971a463bc6321f8b54f1d66c06e2734081 | 2022-04-28T21:56:19.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | davidenam | null | davidenam/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 31,519 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.862635800011376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1391
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 525 | 0.1675 | 0.8188 |
| No log | 2.0 | 1050 | 0.1388 | 0.8399 |
| No log | 3.0 | 1575 | 0.1391 | 0.8626 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Bistolero/it_es_80k | f35378a44e211f4011c425a7850e9fca454ae4d3 | 2022-04-28T21:31:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/it_es_80k | 1 | null | transformers | 31,520 | Entry not found |
awvik360/UncleRuckus | 3fdb535110c6ec5a427887c596ab842ba8f43ca0 | 2022-04-29T01:11:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | awvik360 | null | awvik360/UncleRuckus | 1 | null | transformers | 31,521 | ---
tags:
- conversational
---
# My Awesome Model |
Nausheen/bert-finetuned-squad-accelerate | 61e516f18cc09196681c9f0f41d9f2888f50b7ea | 2022-04-30T21:28:37.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Nausheen | null | Nausheen/bert-finetuned-squad-accelerate | 1 | null | transformers | 31,522 | Entry not found |
bkh6722/xlsr-vorarlbergerisch | b0d4518b8cb93a52b62184a9301fd70554aa9611 | 2022-04-29T04:45:04.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | bkh6722 | null | bkh6722/xlsr-vorarlbergerisch | 1 | null | transformers | 31,523 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-vorarlbergerisch
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-vorarlbergerisch
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3193
- Wer: 0.3235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 62
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 15.6717 | 3.83 | 100 | 3.0247 | 1.0 |
| 2.485 | 7.68 | 200 | 1.5937 | 0.9046 |
| 0.784 | 11.53 | 300 | 1.2664 | 0.5 |
| 0.3689 | 15.38 | 400 | 1.2046 | 0.4696 |
| 0.2618 | 19.23 | 500 | 1.1289 | 0.4155 |
| 0.2088 | 23.08 | 600 | 0.9339 | 0.3623 |
| 0.1388 | 26.91 | 700 | 1.1448 | 0.3573 |
| 0.1042 | 30.75 | 800 | 1.1411 | 0.3606 |
| 0.0784 | 34.6 | 900 | 1.2046 | 0.3547 |
| 0.0607 | 38.45 | 1000 | 1.2243 | 0.3488 |
| 0.0459 | 42.3 | 1100 | 1.2387 | 0.3226 |
| 0.0273 | 46.15 | 1200 | 1.2123 | 0.3387 |
| 0.0195 | 49.98 | 1300 | 1.2232 | 0.3345 |
| 0.0188 | 53.83 | 1400 | 1.2656 | 0.3235 |
| 0.0132 | 57.68 | 1500 | 1.3377 | 0.3285 |
| 0.0089 | 61.53 | 1600 | 1.3193 | 0.3235 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
obokkkk/mt5-base_2 | cff4bf57f7583c20ab8d533be864ef93d12133a1 | 2022-04-30T05:52:12.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | obokkkk | null | obokkkk/mt5-base_2 | 1 | null | transformers | 31,524 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-base_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base_2
This model is a fine-tuned version of [obokkkk/mt5-base](https://huggingface.co/obokkkk/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1742
- Bleu: 9.479
- Gen Len: 16.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 183 | 1.1834 | 9.3761 | 16.9129 |
| No log | 2.0 | 366 | 1.1791 | 9.422 | 16.9334 |
| 1.3969 | 3.0 | 549 | 1.1764 | 9.4432 | 16.9082 |
| 1.3969 | 4.0 | 732 | 1.1749 | 9.461 | 16.9157 |
| 1.3969 | 5.0 | 915 | 1.1742 | 9.479 | 16.9226 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
phosseini/atomic-roberta-large | 363638aabe4487b91fdb07a677c56e97f91d93b7 | 2022-04-29T07:19:39.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | phosseini | null | phosseini/atomic-roberta-large | 1 | null | transformers | 31,525 | Entry not found |
dbmdz/flair-hipe-2022-ajmc-all | 1bdfa084db7ae31c034ba24717d32a1373c2f12f | 2022-05-04T13:43:34.000Z | [
"pytorch",
"multilingual",
"flair",
"token-classification",
"sequence-tagger-model",
"license:mit"
] | token-classification | false | dbmdz | null | dbmdz/flair-hipe-2022-ajmc-all | 1 | null | flair | 31,526 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: multilingual
widget:
- text: "In editing the Fragments , I have availed myself of Mr . R . Ellis ’ acute remarks on them in the Cambridge Journal of Philology , Vol . IV , and that I am largely indebted , as every editor must now be , to the edition of the Tragic Fragments by A . Nauck , Leipzig , 1856 ."
- text: "459 . Skyros klang dem Athener etwa wie Pholegandros und Sikinos bei Solon Eleg . 1 , 4 , dem Römer Ulubrae , Butunti ."
- text: "Celles d ’ Ajax et des siens occupaient l ' extrême aile gauche , vers le promontoire Rhétée , et confinaient tout à la fois au retranchement et à la mer ( // . XIT1 , 681 ; Heynce , excursns cité ) ,"
license: mit
---
|
astrojihye/opus-mt-ko-en-finetuned-ko-to-en4 | 893cdef68a17574590c8a1010a59c75af7827001 | 2022-04-29T22:02:40.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | astrojihye | null | astrojihye/opus-mt-ko-en-finetuned-ko-to-en4 | 1 | null | transformers | 31,527 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en4
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9824
- Bleu: 0.5767
- Gen Len: 13.1529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 512
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.99 | 52 | 2.9824 | 0.5767 | 13.1529 |
| No log | 1.99 | 104 | 2.9824 | 0.5767 | 13.1529 |
| No log | 2.99 | 156 | 2.9824 | 0.5767 | 13.1529 |
| No log | 3.99 | 208 | 2.9824 | 0.5767 | 13.1529 |
| No log | 4.99 | 260 | 2.9824 | 0.5767 | 13.1529 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Kutay/fine_tuned_tweetqa_aip | e82c90fb397d594df51bd01507089b879b3cfc63 | 2022-04-29T15:10:16.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Kutay | null | Kutay/fine_tuned_tweetqa_aip | 1 | null | transformers | 31,528 | Entry not found |
AvengingPrime/Reddit_and_Procon | 647ad45bfe106b33be2d335e38079f1823776f7c | 2022-04-29T18:14:46.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | AvengingPrime | null | AvengingPrime/Reddit_and_Procon | 1 | null | transformers | 31,529 | Entry not found |
umarkhalid96/t5-small-trainings | a930dc6754cceea26483d503929a2149e5c06862 | 2022-04-29T18:36:13.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | umarkhalid96 | null | umarkhalid96/t5-small-trainings | 1 | null | transformers | 31,530 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-trainings
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-trainings
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2580
- Rouge1: 41.5251
- Rouge2: 19.8842
- Rougel: 36.4895
- Rougelsum: 37.2565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.1338 | 1.0 | 51 | 2.5825 | 35.4169 | 15.379 | 30.8859 | 31.524 |
| 2.5905 | 2.0 | 102 | 2.3975 | 38.4266 | 17.2571 | 33.5912 | 34.312 |
| 2.3881 | 3.0 | 153 | 2.3329 | 39.8082 | 19.1925 | 34.8269 | 35.5295 |
| 2.3167 | 4.0 | 204 | 2.2938 | 41.3488 | 20.1513 | 35.6879 | 36.5864 |
| 2.2357 | 5.0 | 255 | 2.2727 | 41.2457 | 19.5358 | 36.0033 | 36.8405 |
| 2.232 | 6.0 | 306 | 2.2645 | 41.2746 | 20.0345 | 35.9226 | 36.7001 |
| 2.1986 | 7.0 | 357 | 2.2595 | 41.7542 | 19.9428 | 36.6819 | 37.4718 |
| 2.1457 | 8.0 | 408 | 2.2580 | 41.5251 | 19.8842 | 36.4895 | 37.2565 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Siddhart/t5-small-finetuned-xsum | de928d97fb61710b1bde6f9dca1172a81645b4ea | 2022-04-30T00:04:50.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Siddhart | null | Siddhart/t5-small-finetuned-xsum | 1 | null | transformers | 31,531 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 23 | 2.7230 | 33.2094 | 14.0331 | 28.4433 | 29.4644 | 18.8947 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
moaiz237/wav2vec2-base-timit-demo-colab | e1de82e360c4ae201d56939d66b5ae25b54a04ee | 2022-04-30T07:51:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | moaiz237 | null | moaiz237/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 31,532 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4769
- Wer: 0.4305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2022 | 13.89 | 500 | 2.9267 | 0.9995 |
| 0.834 | 27.78 | 1000 | 0.4769 | 0.4305 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
charityking2358/taglish-electra-50K | 337402be67c7c5a9c637c70adc2fda5765915f2d | 2022-04-30T01:57:01.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-50K | 1 | null | transformers | 31,533 | Entry not found |
ChrisZeng/t5-v1_1-base-detox | c226e6232ce93df1d9e01cafc4925738716bc3b9 | 2022-04-30T05:23:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ChrisZeng | null | ChrisZeng/t5-v1_1-base-detox | 1 | null | transformers | 31,534 | Entry not found |
huggingtweets/itstomrobinson | 8119235db46bba7be64a0f5ab0ea16f6f5b3cdf8 | 2022-04-30T07:06:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/itstomrobinson | 1 | null | transformers | 31,535 | ---
language: en
thumbnail: http://www.huggingtweets.com/itstomrobinson/1651302371165/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1388470365723168770/irz46Ykl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Robinson</div>
<div style="text-align: center; font-size: 14px;">@itstomrobinson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tom Robinson.
| Data | Tom Robinson |
| --- | --- |
| Tweets downloaded | 733 |
| Retweets | 40 |
| Short tweets | 52 |
| Tweets kept | 641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bluc7sk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itstomrobinson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ryc26oz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ryc26oz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itstomrobinson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
learningdude/wav2vec2-base-finetuned-ks | 941fd143f7791abc086f821f8ce1d0a65c6e35c5 | 2022-04-30T13:35:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | learningdude | null | learningdude/wav2vec2-base-finetuned-ks | 1 | null | transformers | 31,536 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0834
- Accuracy: 0.9840
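A hedged inference sketch (not part of the original card), assuming the checkpoint works with the standard audio-classification pipeline; the audio file path is illustrative:
```python
from transformers import pipeline

# Assumption: the fine-tuned keyword-spotting checkpoint loads directly into
# the generic audio-classification pipeline.
classifier = pipeline("audio-classification", model="learningdude/wav2vec2-base-finetuned-ks")
print(classifier("sample_keyword.wav"))  # path to a local audio clip
```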
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6111 | 1.0 | 399 | 0.5123 | 0.9388 |
| 0.2901 | 2.0 | 798 | 0.1725 | 0.9782 |
| 0.1916 | 3.0 | 1197 | 0.1060 | 0.9834 |
| 0.1754 | 4.0 | 1596 | 0.0891 | 0.9829 |
| 0.1384 | 5.0 | 1995 | 0.0834 | 0.9840 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.14.0
- Tokenizers 0.12.1
|
lsanochkin/distilelectra-base | df04b7a65fb7ce5977c78cde717a024380a2e9dd | 2022-04-30T09:47:08.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | lsanochkin | null | lsanochkin/distilelectra-base | 1 | 1 | transformers | 31,537 | Entry not found |
doddle124578/wav2vec2-base-timit-demo-colab | be91568829e7600315aacf766528a7b917303d29 | 2022-04-30T14:40:55.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | doddle124578 | null | doddle124578/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 31,538 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6574
- Wer: 0.5652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6258 | 8.77 | 500 | 3.1693 | 1.0 |
| 1.4137 | 17.54 | 1000 | 0.6574 | 0.5652 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
charityking2358/taglish-electra-55K | cadcd4ba35234dd1a82d487991e55c07debdd92f | 2022-04-30T14:04:04.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-55K | 1 | null | transformers | 31,539 | Entry not found |
Davincilee/closure_system_door_inne-bert-base-uncased | 251f7d4ffdf8da9de7f19a3bcfc7ab65f821b26e | 2022-05-10T13:49:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Davincilee | null | Davincilee/closure_system_door_inne-bert-base-uncased | 1 | null | transformers | 31,540 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: closure_system_door_inne-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# closure_system_door_inne-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7321 | 1.0 | 2 | 2.5801 |
| 2.6039 | 2.0 | 4 | 2.0081 |
| 2.4556 | 3.0 | 6 | 2.3329 |
| 2.3587 | 4.0 | 8 | 2.4156 |
| 2.2565 | 5.0 | 10 | 2.0009 |
| 2.3489 | 6.0 | 12 | 1.7774 |
| 2.2622 | 7.0 | 14 | 2.2064 |
| 2.415 | 8.0 | 16 | 1.9671 |
| 2.1873 | 9.0 | 18 | 2.0729 |
| 2.2377 | 10.0 | 20 | 2.0052 |
| 2.352 | 11.0 | 22 | 1.9614 |
| 2.2347 | 12.0 | 24 | 2.2437 |
| 2.1113 | 13.0 | 26 | 1.7145 |
| 2.1939 | 14.0 | 28 | 1.5418 |
| 2.0645 | 15.0 | 30 | 2.1882 |
| 2.1499 | 16.0 | 32 | 2.0266 |
| 2.1432 | 17.0 | 34 | 2.3583 |
| 2.0656 | 18.0 | 36 | 2.3147 |
| 2.0348 | 19.0 | 38 | 2.2807 |
| 2.0502 | 20.0 | 40 | 1.7122 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ahmad573/wav2vec2-base-timit-demo-colab2 | c4f10b1301f09a138cac058b44f3ee65536fdec2 | 2022-04-30T19:12:53.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ahmad573 | null | ahmad573/wav2vec2-base-timit-demo-colab2 | 1 | null | transformers | 31,541 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1914
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.8196 | 7.04 | 500 | 3.2201 | 1.0 |
| 3.1517 | 14.08 | 1000 | 3.1876 | 1.0 |
| 3.1493 | 21.13 | 1500 | 3.1837 | 1.0 |
| 3.1438 | 28.17 | 2000 | 3.1914 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
tahazakir/wav2vec2-base-timit-demo-colab0 | 71bcfedece2a8b7f1366935f50039776f07eac93 | 2022-04-30T18:01:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tahazakir | null | tahazakir/wav2vec2-base-timit-demo-colab0 | 1 | null | transformers | 31,542 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8768
- Wer: 0.6089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1121 | 13.89 | 500 | 2.9931 | 1.0 |
| 1.1475 | 27.78 | 1000 | 0.8768 | 0.6089 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
inhee/m2m100_418M-finetuned-ko-to-en4-finetuned-ko-to-en5 | 06a76bf5c91991729e7d3a8968472a175a55623d | 2022-05-02T05:54:00.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | inhee | null | inhee/m2m100_418M-finetuned-ko-to-en4-finetuned-ko-to-en5 | 1 | null | transformers | 31,543 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-ko-to-en4-finetuned-ko-to-en5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-ko-to-en4-finetuned-ko-to-en5
This model is a fine-tuned version of [inhee/m2m100_418M-finetuned-ko-to-en4](https://huggingface.co/inhee/m2m100_418M-finetuned-ko-to-en4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2863
- Bleu: 87.4185
- Gen Len: 9.7107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 105 | 0.3571 | 78.7464 | 9.5775 |
| No log | 2.0 | 210 | 0.3410 | 81.9462 | 9.6505 |
| No log | 3.0 | 315 | 0.3102 | 84.746 | 9.6732 |
| No log | 4.0 | 420 | 0.2929 | 86.5137 | 9.6997 |
| 0.2431 | 5.0 | 525 | 0.2863 | 87.4185 | 9.7107 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
tahazakir/wav2vec2-base-timit-demo-colab1 | 57e2e39152d9e9a293f0ec4e7eb41142c17a8734 | 2022-04-30T22:47:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tahazakir | null | tahazakir/wav2vec2-base-timit-demo-colab1 | 1 | null | transformers | 31,544 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1918
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.7104 | 13.89 | 500 | 3.2161 | 1.0 |
| 3.1868 | 27.78 | 1000 | 3.1918 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
tahazakir/wav2vec2-base-timit-demo-colab2 | 0ed3f52956c86cf5ed67c96b14573b2effef20fb | 2022-04-30T22:54:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tahazakir | null | tahazakir/wav2vec2-base-timit-demo-colab2 | 1 | null | transformers | 31,545 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1899
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 8.0486 | 13.89 | 500 | 3.6570 | 1.0 |
| 3.2905 | 27.78 | 1000 | 3.1899 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
kiljos/xlm-roberta-base-finetuned-panx-de | 06a68ddb195ec3f59ef4ebeb9cfa5a44576488a1 | 2022-04-30T20:52:00.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | kiljos | null | kiljos/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 31,546 | Entry not found |
moaiz237/wav2vec2-base-timit-moaiz_explast | 0d2fa142e70e21fb0a242f348fbd6697b7f1b410 | 2022-04-30T22:11:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | moaiz237 | null | moaiz237/wav2vec2-base-timit-moaiz_explast | 1 | null | transformers | 31,547 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-moaiz_explast
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-moaiz_explast
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6714
- Wer: 0.5404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.034 | 13.89 | 500 | 1.0507 | 0.6871 |
| 0.6024 | 27.78 | 1000 | 0.6714 | 0.5404 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ChrisZeng/bart-base-detox | c2af80760dc1f51770ba060f7e7ccc751574dfd5 | 2022-05-01T00:01:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ChrisZeng | null | ChrisZeng/bart-base-detox | 1 | null | transformers | 31,548 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-detox
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-detox
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5633 | 1.0 | 135 | 0.2524 |
| 0.2589 | 2.0 | 270 | 0.2193 |
| 0.2307 | 3.0 | 405 | 0.1993 |
| 0.2171 | 4.0 | 540 | 0.2002 |
| 0.2027 | 5.0 | 675 | 0.1937 |
| 0.1946 | 6.0 | 810 | 0.1972 |
| 0.1874 | 7.0 | 945 | 0.1917 |
| 0.1853 | 8.0 | 1080 | 0.1868 |
| 0.1811 | 9.0 | 1215 | 0.1890 |
| 0.1776 | 10.0 | 1350 | 0.1871 |
| 0.1798 | 11.0 | 1485 | 0.1858 |
| 0.1745 | 12.0 | 1620 | 0.1820 |
| 0.1689 | 13.0 | 1755 | 0.1827 |
| 0.1707 | 14.0 | 1890 | 0.1843 |
| 0.1658 | 15.0 | 2025 | 0.1834 |
| 0.1647 | 16.0 | 2160 | 0.1820 |
| 0.1645 | 17.0 | 2295 | 0.1837 |
| 0.1633 | 18.0 | 2430 | 0.1814 |
| 0.1612 | 19.0 | 2565 | 0.1815 |
| 0.1603 | 20.0 | 2700 | 0.1819 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.0.dev20220429
- Datasets 2.1.0
- Tokenizers 0.10.3
|
zasheza/Part1 | a4cf83022c4b069e977fcfeb35160c9095e842a3 | 2022-05-01T03:09:37.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zasheza | null | zasheza/Part1 | 1 | null | transformers | 31,549 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Part1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Part1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab2 | 353ef7c858879eacce82996f847c1127b0594320 | 2022-05-01T06:45:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab2 | 1 | null | transformers | 31,550 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2355
- Wer: 0.7320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.851 | 13.89 | 500 | 3.1260 | 1.0 |
| 1.9721 | 27.78 | 1000 | 1.2435 | 0.7992 |
| 0.5749 | 41.67 | 1500 | 1.1662 | 0.7374 |
| 0.291 | 55.56 | 2000 | 1.2355 | 0.7320 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab3 | 7958347e49f5f48b61d2ababc70a8bfbe0643770 | 2022-05-01T07:06:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab3 | 1 | null | transformers | 31,551 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1016
- Wer: 0.6704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0006 | 13.89 | 500 | 3.0706 | 1.0 |
| 1.8796 | 27.78 | 1000 | 1.1154 | 0.7414 |
| 0.548 | 41.67 | 1500 | 1.0826 | 0.7034 |
| 0.2747 | 55.56 | 2000 | 1.1016 | 0.6704 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab6 | 502c6b89c25ccdbc7d06a8db875551dbce44323d | 2022-05-01T07:17:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab6 | 1 | null | transformers | 31,552 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab6
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9394
- Wer: 0.5282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3117 | 7.35 | 500 | 3.1548 | 1.0 |
| 1.6732 | 14.71 | 1000 | 0.8857 | 0.6561 |
| 0.5267 | 22.06 | 1500 | 0.7931 | 0.6018 |
| 0.2951 | 29.41 | 2000 | 0.8152 | 0.5816 |
| 0.2013 | 36.76 | 2500 | 0.9060 | 0.5655 |
| 0.1487 | 44.12 | 3000 | 0.9201 | 0.5624 |
| 0.1189 | 51.47 | 3500 | 0.9394 | 0.5412 |
| 0.1004 | 58.82 | 4000 | 0.9394 | 0.5282 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab2 | 897efe9da3a9448b326f06e46def3ac71c5a8161 | 2022-05-01T07:02:11.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab2 | 1 | null | transformers | 31,553 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7414
- Wer: 0.5664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1999 | 13.89 | 500 | 2.8190 | 1.0 |
| 0.986 | 27.78 | 1000 | 0.7414 | 0.5664 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sherry7144/wav2vec2-base-timit-demo-colab1 | 3ddb7b0496fbecb603051f8ad9833c1b63be5f94 | 2022-05-01T08:08:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sherry7144 | null | sherry7144/wav2vec2-base-timit-demo-colab1 | 1 | null | transformers | 31,554 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0358
- Wer: 0.5729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3217 | 13.89 | 500 | 0.8951 | 0.5834 |
| 0.2263 | 27.78 | 1000 | 1.0358 | 0.5729 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab3 | 9e8ebcfd2394c25a236d9b45a1e9e973fd7f80eb | 2022-05-01T07:50:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab3 | 1 | null | transformers | 31,555 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8480
- Wer: 0.5608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7977 | 13.89 | 500 | 1.6491 | 0.8257 |
| 0.7393 | 27.78 | 1000 | 0.8480 | 0.5608 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab4 | 23e463b8da75a62f7878d97fef365e675b1b34a9 | 2022-05-01T08:37:50.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sameearif88 | null | sameearif88/wav2vec2-base-timit-demo-colab4 | 1 | null | transformers | 31,556 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9149
- Wer: 0.5907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9363 | 13.89 | 500 | 2.7532 | 1.0 |
| 0.9875 | 27.78 | 1000 | 0.9149 | 0.5907 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
zasheza/wav2vec2-base-timit-demo-colab-1 | 180c977229ccac67d7de3d107e8dbecdc8135559 | 2022-05-01T16:08:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zasheza | null | zasheza/wav2vec2-base-timit-demo-colab-1 | 1 | null | transformers | 31,557 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9634
- Wer: 0.4398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8991 | 5.26 | 500 | 1.4319 | 0.7522 |
| 0.8555 | 10.53 | 1000 | 0.7895 | 0.5818 |
| 0.4584 | 15.79 | 1500 | 0.7198 | 0.5211 |
| 0.3096 | 21.05 | 2000 | 0.7983 | 0.5118 |
| 0.2165 | 26.32 | 2500 | 0.7893 | 0.4745 |
| 0.163 | 31.58 | 3000 | 0.8779 | 0.4589 |
| 0.1144 | 36.84 | 3500 | 0.9256 | 0.4540 |
| 0.0886 | 42.11 | 4000 | 0.9184 | 0.4530 |
| 0.0668 | 47.37 | 4500 | 0.9634 | 0.4398 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab60 | e3a88c8c75e3b4d5b1ff9a8086236ac1db80030f | 2022-05-01T12:26:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab60 | 1 | null | transformers | 31,558 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab60
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1975
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.5799 | 7.04 | 500 | 3.2484 | 1.0 |
| 3.1859 | 14.08 | 1000 | 3.1951 | 1.0 |
| 3.1694 | 21.13 | 1500 | 3.1754 | 1.0 |
| 3.1637 | 28.17 | 2000 | 3.1818 | 1.0 |
| 3.1633 | 35.21 | 2500 | 3.1739 | 1.0 |
| 3.16 | 42.25 | 3000 | 3.2030 | 1.0 |
| 3.1602 | 49.3 | 3500 | 3.1974 | 1.0 |
| 3.1544 | 56.34 | 4000 | 3.1975 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab92 | 28b007555e508268aad459901c6568c6167c3226 | 2022-05-02T11:09:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hassnain | null | hassnain/wav2vec2-base-timit-demo-colab92 | 1 | null | transformers | 31,559 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab92
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab92
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6596
- eval_wer: 0.4164
- eval_runtime: 55.6472
- eval_samples_per_second: 12.615
- eval_steps_per_second: 1.581
- epoch: 2.85
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
JBW/da_en_translation | cfebc27ed56668ea9e3f9ddf85521a4c3fa768a3 | 2022-05-03T20:23:55.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | JBW | null | JBW/da_en_translation | 1 | null | transformers | 31,560 | Entry not found |
charityking2358/taglish-electra-60k | 8b3f31e0f0083ca4cefaf1601810e81979c113f3 | 2022-05-01T14:30:10.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-60k | 1 | null | transformers | 31,561 | Entry not found |
buidung2004/maialong_model | 178e8566e049856a458c87fdb3542c7ea1b6590e | 2022-05-01T15:43:57.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | buidung2004 | null | buidung2004/maialong_model | 1 | null | transformers | 31,562 | Entry not found |
dmoz47/DialoGPT-small-peterparker | 031548d755746aba0218e0a59d1c304a823bb159 | 2022-05-01T18:43:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | dmoz47 | null | dmoz47/DialoGPT-small-peterparker | 1 | null | transformers | 31,563 | ---
tags:
- conversational
---
# Peter Parker DialoGPT Model |
zoha/wav2vec2-base-common-voice-fa-second-colab | d2615a987c7e88442a9965c45e7b9a140bd6c8b1 | 2022-05-01T23:51:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | zoha | null | zoha/wav2vec2-base-common-voice-fa-second-colab | 1 | null | transformers | 31,564 | Entry not found |
sherry7144/wav2vec2-base-timit-demo-colab3 | 26fed870665d026a795b1c8ab89b12a2c22393b1 | 2022-05-02T04:04:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sherry7144 | null | sherry7144/wav2vec2-base-timit-demo-colab3 | 1 | null | transformers | 31,565 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8344
- Wer: 0.6055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0927 | 13.89 | 500 | 2.7346 | 1.0 |
| 0.9983 | 27.78 | 1000 | 0.8344 | 0.6055 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
creynier/wav2vec2-base-swbd-turn-eos-long_short_utt_removed_5percent | d56bbca3c6ad82b94f7b67f2bfefcfb212cea5bf | 2022-05-03T06:35:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-long_short_utt_removed_5percent | 1 | null | transformers | 31,566 | Entry not found |
JoanTirant/roberta-base-bne-finetuned-sqac | b3e9769eb3b54c12c26bba823a4a3a4e3091cb1d | 2022-05-02T12:52:50.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:sqac",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | JoanTirant | null | JoanTirant/roberta-base-bne-finetuned-sqac | 1 | null | transformers | 31,567 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sqac
model-index:
- name: roberta-base-bne-finetuned-sqac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1857
## Model description
More information needed
## Intended uses & limitations
More information needed
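As a usage illustration (not from the original card), a Spanish extractive-QA checkpoint like this one can be queried through the standard `question-answering` pipeline; the question and context strings below are invented examples:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="JoanTirant/roberta-base-bne-finetuned-sqac")

result = qa(
    question="¿Dónde se celebró la reunión?",
    context="La reunión anual se celebró en Madrid el pasado mes de marzo.",
)
print(result["answer"], result["score"])
```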
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0033 | 1.0 | 1196 | 0.8764 |
| 0.4659 | 2.0 | 2392 | 0.8998 |
| 0.152 | 3.0 | 3588 | 1.1857 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jhoonk/bert-base-uncased-finetuned-swag | c3950da4849178efb11f08fe41c0d60706f7e729 | 2022-05-09T10:41:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:swag",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | jhoonk | null | jhoonk/bert-base-uncased-finetuned-swag | 1 | null | transformers | 31,568 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0337
- Accuracy: 0.7888
## Model description
More information needed
## Intended uses & limitations
More information needed
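A minimal sketch of scoring SWAG-style candidate endings with this checkpoint (illustrative only; the context and endings are invented, and the standard `AutoModelForMultipleChoice` interface is assumed):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "jhoonk/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She opened the umbrella because"
endings = ["it started to rain.", "the sun fell into the sea.", "her coffee was cold."]

# Each candidate is encoded as a (context, ending) pair; the model scores all pairs jointly.
encoding = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (batch=1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print(endings[int(logits.argmax(dim=-1))])
```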
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7451 | 1.0 | 4597 | 0.5944 | 0.7696 |
| 0.3709 | 2.0 | 9194 | 0.6454 | 0.7803 |
| 0.1444 | 3.0 | 13791 | 1.0337 | 0.7888 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Paleontolog/bart_rus_summarizer | 3c53122b99b9d827f609665ee381835e095c1061 | 2022-05-11T14:51:54.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Paleontolog | null | Paleontolog/bart_rus_summarizer | 1 | null | transformers | 31,569 | Entry not found |
shiemn/bigdatageo-gelectra-base-new-embeddings | 9dc15fe4c6cd67166393fe57bf3291b15c7ab462 | 2022-05-02T11:38:40.000Z | [
"pytorch",
"electra",
"feature-extraction",
"transformers"
] | feature-extraction | false | shiemn | null | shiemn/bigdatageo-gelectra-base-new-embeddings | 1 | null | transformers | 31,570 | Entry not found |
charityking2358/taglish-electra-65k | 86545bc5585b5e217dcec98ec27c1f19c94a24cc | 2022-05-02T13:59:49.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-65k | 1 | null | transformers | 31,571 | Entry not found |
spasis/mt5-small-finetuned-amazon-en-es | 8fd15d033a8175cc186c7bec3e856ec704aead6a | 2022-05-03T13:30:22.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | spasis | null | spasis/mt5-small-finetuned-amazon-en-es | 1 | null | transformers | 31,572 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1185
- Rouge1: 17.2081
- Rouge2: 8.8374
- Rougel: 16.8033
- Rougelsum: 16.663
## Model description
More information needed
## Intended uses & limitations
More information needed
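No usage snippet is included in the card; a minimal summarization sketch follows (the review text is a fabricated placeholder, and the checkpoint is assumed to work with the standard seq2seq generation API):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "spasis/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "I loved this book. The plot was engaging and the characters felt real."  # placeholder
inputs = tokenizer(review, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```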
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| No log | 1.0 | 303 | 3.9821 | 8.3993 | 2.0894 | 8.1427 | 8.135 |
| No log | 2.0 | 606 | 3.3511 | 13.1381 | 5.7193 | 12.8494 | 12.8375 |
| No log | 3.0 | 909 | 3.2235 | 15.2502 | 6.5903 | 14.728 | 14.612 |
| 5.8943 | 4.0 | 1212 | 3.1695 | 16.1725 | 8.1638 | 15.7655 | 15.6068 |
| 5.8943 | 5.0 | 1515 | 3.1579 | 16.3126 | 7.9727 | 15.8308 | 15.7236 |
| 5.8943 | 6.0 | 1818 | 3.1346 | 16.8323 | 8.088 | 16.3863 | 16.3343 |
| 5.8943 | 7.0 | 2121 | 3.1181 | 16.965 | 8.5799 | 16.6418 | 16.5064 |
| 3.7097 | 8.0 | 2424 | 3.1185 | 17.2081 | 8.8374 | 16.8033 | 16.663 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Dizzykong/gpt2-quests-100 | 589f4928a686b8c75a1a6f8a63ad500f903a427a | 2022-05-02T20:54:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Dizzykong | null | Dizzykong/gpt2-quests-100 | 1 | null | transformers | 31,573 | Entry not found |
masakhane/afri-mt5-base | fc72d2fd1bf66f620ed8668b1b65a12cf2b94d5a | 2022-05-12T13:51:08.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afri-mt5-base | 1 | null | transformers | 31,574 | ---
license: afl-3.0
---
|
masakhane/afri-byt5-base | 66e56aa6c92e314021839cf33e3af8299317d363 | 2022-05-12T13:51:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afri-byt5-base | 1 | null | transformers | 31,575 | ---
license: afl-3.0
---
|
masakhane/afri-mbart50 | 0d0948d56f9c30d43d322f3a115a2f53519e73ae | 2022-05-12T13:50:56.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afri-mbart50 | 1 | null | transformers | 31,576 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M-EN-NEWS | 3e39e9662b2c2c5c9aa7d4108416f0117cc74124 | 2022-05-12T13:43:31.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M-EN-NEWS | 1 | null | transformers | 31,577 | ---
license: afl-3.0
---
|
lilitket/20220503-001553 | 761eb884181921e55ccf6c59c005e69dd4eb3860 | 2022-05-03T01:50:58.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220503-001553 | 1 | null | transformers | 31,578 | Entry not found |
charityking2358/taglish-electra-70k | c83330a34e8edb1fc63278bcbf5ce27e056bef70 | 2022-05-03T04:28:15.000Z | [
"pytorch",
"transformers"
] | null | false | charityking2358 | null | charityking2358/taglish-electra-70k | 1 | null | transformers | 31,579 | Entry not found |
Nonegom/klue-roberta-large | ac6130989460aa5f1ab8fe73fa15df66756358bc | 2022-05-03T09:23:18.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | Nonegom | null | Nonegom/klue-roberta-large | 1 | 1 | transformers | 31,580 | ---
license: apache-2.0
---
|
DioLiu/distilroberta-base-Shake-Taylor | afd7a8a9ec6b4464c4ebdbcd9bfe0fcf276f3162 | 2022-05-03T12:17:01.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | DioLiu | null | DioLiu/distilroberta-base-Shake-Taylor | 1 | null | transformers | 31,581 | Entry not found |
efederici/it5-efficient-small-fanpage | bdb608bc31def75d3ed9ec2eed4728a15ee74f27 | 2022-05-03T13:14:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/fanpage",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | efederici | null | efederici/it5-efficient-small-fanpage | 1 | null | transformers | 31,582 | ---
license: apache-2.0
tags:
- summarization
language:
- it
datasets:
- ARTeLab/fanpage
---
# it5-efficient-small-fanpage
It is a T5 efficient-small model ([IT5](https://huggingface.co/stefan-it/it5-efficient-small-el32)) fine-tuned on [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) for Italian news summarization.
<p align="center">
<img src="https://compass-media.vogue.it/photos/61e574067f70d15c08312807/master/w_1600%2Cc_limit/DavideBalliano_UNTITLED_0215_%25206060_2021_1_Crop.jpeg" width="400"> </br>
Davide Balliano, Untitled
</p>
## Usage and Performance
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("efederici/it5-efficient-small-fanpage")
model = AutoModelForSeq2SeqLM.from_pretrained("efederici/it5-efficient-small-fanpage")
```
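A short generation follow-up to the snippet above (illustrative only; the article text is a placeholder and the generation settings are not the authors' recommended values):

```python
# Continuing from the snippet above (tokenizer and model already loaded).
article = "Il governo ha annunciato oggi nuove misure per sostenere le famiglie."  # placeholder
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=96)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```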
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lilitket/20220503-123021 | da4085ac190712e10946011f63ad6e1c255e1666 | 2022-05-03T14:04:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220503-123021 | 1 | null | transformers | 31,583 | Entry not found |
masakhane/m2m100_418M_en_hau_news | 14cb0af3b2b84d57e3fde75da28d4bb4b7f96df7 | 2022-05-03T13:37:01.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_hau_news | 1 | null | transformers | 31,584 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_hau_en_news | 255f81695625976bffa18cfbcb1771703004d3b8 | 2022-05-03T13:37:07.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_hau_en_news | 1 | null | transformers | 31,585 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_hau_rel_news | d6b7a78cad8cbf630bdccd6a89241a50ea1e23ff | 2022-05-03T13:37:11.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_hau_rel_news | 1 | null | transformers | 31,586 | ---
license: afl-3.0
---
|
chebmarcel/sun2 | d7499b299db64861539bcf6957e01d9b1cdecddc | 2022-05-03T14:42:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | chebmarcel | null | chebmarcel/sun2 | 1 | null | transformers | 31,587 | Entry not found |
InSaiyan/DialoGPT-small-harrypotter | 809dab630a9ca0546c2986d16868ca76e258f48e | 2022-05-03T13:48:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | InSaiyan | null | InSaiyan/DialoGPT-small-harrypotter | 1 | null | transformers | 31,588 | ---
tags:
- conversational
---
# Harry Potter DialoGPT-small Model |
spasis/test-bert-finetuned-squad-accelerate | 10f3fd0be7ed463d0822d818393d942b3c935210 | 2022-05-03T14:47:47.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | spasis | null | spasis/test-bert-finetuned-squad-accelerate | 1 | null | transformers | 31,589 | Entry not found |
stevemobs/quales-iberlef | 121f86d98ec3be999b8b827cc8582c093aacc8a4 | 2022-05-06T09:53:05.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/quales-iberlef | 1 | null | transformers | 31,590 | Entry not found |
netoass/xlm-roberta-base-finetuned-panx-de | aa6dfb6d54c953f9f72fc345b60486a626c62a38 | 2022-05-03T15:26:11.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | netoass | null | netoass/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 31,591 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8654425558524246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1334
- F1: 0.8654
## Model description
More information needed
## Intended uses & limitations
More information needed
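For illustration (not from the original card), a PAN-X German NER checkpoint like this one is usually queried through the `token-classification` pipeline; the example sentence is invented:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="netoass/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Angela Merkel besuchte die Volkswagen AG in Wolfsburg."))
```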
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2541 | 1.0 | 525 | 0.1596 | 0.8242 |
| 0.1284 | 2.0 | 1050 | 0.1360 | 0.8499 |
| 0.0827 | 3.0 | 1575 | 0.1334 | 0.8654 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
PSW/min_senttrm_del_seed42 | 71de4be5fc74158268028b2b95a16c24923a81cb | 2022-05-03T15:16:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/min_senttrm_del_seed42 | 1 | null | transformers | 31,592 | Entry not found |
theojolliffe/bart-large-cnn-finetuned-roundup-2 | 3d92c73f23a09c91dd19e95a80b409aef323c916 | 2022-05-03T16:07:55.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-2 | 1 | null | transformers | 31,593 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2605
- Rouge1: 49.3582
- Rouge2: 29.7017
- Rougel: 30.6996
- Rougelsum: 46.3736
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3168 | 49.5253 | 30.0497 | 31.3982 | 46.9568 | 142.0 |
| No log | 2.0 | 264 | 1.2605 | 49.3582 | 29.7017 | 30.6996 | 46.3736 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PSW/max_senttrm_del_seed1 | aed3cfb71330c739f7d150a270534120d8410038 | 2022-05-03T16:00:04.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max_senttrm_del_seed1 | 1 | null | transformers | 31,594 | Entry not found |
enimai/opus-mt-en-it-finetuned-en-to-it | db5e69415baf5e9dc5a2c20e83446c56829c3bac | 2022-05-03T16:45:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | enimai | null | enimai/opus-mt-en-it-finetuned-en-to-it | 1 | null | transformers | 31,595 | ---
license: apache-2.0
---
|
PSW/max_senttrm_del_seed27 | 918cb6284e352bdd3d710a13c26fb44045298909 | 2022-05-03T16:43:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/max_senttrm_del_seed27 | 1 | null | transformers | 31,596 | Entry not found |
ebonazza2910/model | 612bb8d65a652c2dc9fd2ae00845c4f668891f2e | 2022-05-08T23:12:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ebonazza2910 | null | ebonazza2910/model | 1 | null | transformers | 31,597 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Wer: 0.1301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.9743 | 0.18 | 400 | 2.1457 | 1.0000 |
| 0.5747 | 0.36 | 800 | 0.3415 | 0.3456 |
| 0.3383 | 0.54 | 1200 | 0.2797 | 0.3095 |
| 0.2967 | 0.72 | 1600 | 0.2464 | 0.2568 |
| 0.2747 | 0.9 | 2000 | 0.2341 | 0.2466 |
| 0.2501 | 1.08 | 2400 | 0.2299 | 0.2317 |
| 0.2309 | 1.26 | 2800 | 0.2306 | 0.2328 |
| 0.2273 | 1.44 | 3200 | 0.2212 | 0.2375 |
| 0.225 | 1.62 | 3600 | 0.2193 | 0.2267 |
| 0.2204 | 1.8 | 4000 | 0.2157 | 0.2295 |
| 0.2256 | 1.98 | 4400 | 0.2165 | 0.2260 |
| 0.1941 | 2.17 | 4800 | 0.2105 | 0.2163 |
| 0.1925 | 2.35 | 5200 | 0.2098 | 0.2153 |
| 0.1925 | 2.53 | 5600 | 0.2120 | 0.2148 |
| 0.1952 | 2.71 | 6000 | 0.2063 | 0.2178 |
| 0.1971 | 2.89 | 6400 | 0.2100 | 0.2158 |
| 0.1888 | 3.07 | 6800 | 0.2131 | 0.2172 |
| 0.1702 | 3.25 | 7200 | 0.2155 | 0.2203 |
| 0.173 | 3.43 | 7600 | 0.2141 | 0.2254 |
| 0.174 | 3.61 | 8000 | 0.2017 | 0.2100 |
| 0.1802 | 3.79 | 8400 | 0.1998 | 0.2043 |
| 0.1717 | 3.97 | 8800 | 0.2070 | 0.2110 |
| 0.162 | 4.15 | 9200 | 0.2082 | 0.2157 |
| 0.154 | 4.33 | 9600 | 0.2163 | 0.2161 |
| 0.1598 | 4.51 | 10000 | 0.2070 | 0.2171 |
| 0.1576 | 4.69 | 10400 | 0.2034 | 0.2116 |
| 0.1601 | 4.87 | 10800 | 0.1990 | 0.2009 |
| 0.152 | 5.05 | 11200 | 0.1994 | 0.2039 |
| 0.1395 | 5.23 | 11600 | 0.2013 | 0.2046 |
| 0.1407 | 5.41 | 12000 | 0.2009 | 0.2022 |
| 0.1449 | 5.59 | 12400 | 0.1982 | 0.1961 |
| 0.1483 | 5.77 | 12800 | 0.2082 | 0.2054 |
| 0.1514 | 5.95 | 13200 | 0.1953 | 0.1985 |
| 0.138 | 6.13 | 13600 | 0.2046 | 0.1965 |
| 0.1322 | 6.31 | 14000 | 0.2076 | 0.1948 |
| 0.1372 | 6.5 | 14400 | 0.1968 | 0.1944 |
| 0.136 | 6.68 | 14800 | 0.1971 | 0.1963 |
| 0.1382 | 6.86 | 15200 | 0.2001 | 0.1990 |
| 0.1335 | 7.04 | 15600 | 0.2026 | 0.1935 |
| 0.1206 | 7.22 | 16000 | 0.1986 | 0.1938 |
| 0.1239 | 7.4 | 16400 | 0.2054 | 0.1919 |
| 0.1254 | 7.58 | 16800 | 0.1918 | 0.1939 |
| 0.1262 | 7.76 | 17200 | 0.1960 | 0.1947 |
| 0.126 | 7.94 | 17600 | 0.1932 | 0.1906 |
| 0.1169 | 8.12 | 18000 | 0.2037 | 0.1916 |
| 0.1142 | 8.3 | 18400 | 0.1999 | 0.1900 |
| 0.1151 | 8.48 | 18800 | 0.1920 | 0.1855 |
| 0.1121 | 8.66 | 19200 | 0.2007 | 0.1859 |
| 0.1135 | 8.84 | 19600 | 0.1932 | 0.1879 |
| 0.1158 | 9.02 | 20000 | 0.1916 | 0.1859 |
| 0.105 | 9.2 | 20400 | 0.1961 | 0.1831 |
| 0.1023 | 9.38 | 20800 | 0.1914 | 0.1791 |
| 0.1004 | 9.56 | 21200 | 0.1881 | 0.1787 |
| 0.1023 | 9.74 | 21600 | 0.1963 | 0.1817 |
| 0.1075 | 9.92 | 22000 | 0.1889 | 0.1861 |
| 0.103 | 10.1 | 22400 | 0.1975 | 0.1791 |
| 0.0952 | 10.28 | 22800 | 0.1979 | 0.1787 |
| 0.0957 | 10.46 | 23200 | 0.1922 | 0.1817 |
| 0.0966 | 10.65 | 23600 | 0.1953 | 0.1857 |
| 0.0997 | 10.83 | 24000 | 0.1902 | 0.1783 |
| 0.0981 | 11.01 | 24400 | 0.1959 | 0.1780 |
| 0.0868 | 11.19 | 24800 | 0.2056 | 0.1783 |
| 0.0905 | 11.37 | 25200 | 0.1958 | 0.1777 |
| 0.0892 | 11.55 | 25600 | 0.1935 | 0.1796 |
| 0.0891 | 11.73 | 26000 | 0.1968 | 0.1763 |
| 0.0888 | 11.91 | 26400 | 0.2043 | 0.1804 |
| 0.0842 | 12.09 | 26800 | 0.2043 | 0.1733 |
| 0.0828 | 12.27 | 27200 | 0.1964 | 0.1715 |
| 0.0827 | 12.45 | 27600 | 0.1991 | 0.1749 |
| 0.0844 | 12.63 | 28000 | 0.2014 | 0.1695 |
| 0.0837 | 12.81 | 28400 | 0.1973 | 0.1759 |
| 0.0872 | 12.99 | 28800 | 0.1975 | 0.1689 |
| 0.0778 | 13.17 | 29200 | 0.1979 | 0.1740 |
| 0.0759 | 13.35 | 29600 | 0.2093 | 0.1753 |
| 0.076 | 13.53 | 30000 | 0.1990 | 0.1731 |
| 0.0762 | 13.71 | 30400 | 0.2024 | 0.1690 |
| 0.0764 | 13.89 | 30800 | 0.2037 | 0.1709 |
| 0.0756 | 14.07 | 31200 | 0.2007 | 0.1716 |
| 0.0702 | 14.25 | 31600 | 0.2011 | 0.1680 |
| 0.0694 | 14.43 | 32000 | 0.2061 | 0.1683 |
| 0.0713 | 14.61 | 32400 | 0.2014 | 0.1687 |
| 0.0693 | 14.79 | 32800 | 0.1961 | 0.1658 |
| 0.071 | 14.98 | 33200 | 0.1921 | 0.1645 |
| 0.0659 | 15.16 | 33600 | 0.2079 | 0.1682 |
| 0.0659 | 15.34 | 34000 | 0.2046 | 0.1649 |
| 0.0685 | 15.52 | 34400 | 0.1994 | 0.1660 |
| 0.0663 | 15.7 | 34800 | 0.1970 | 0.1652 |
| 0.0678 | 15.88 | 35200 | 0.1961 | 0.1634 |
| 0.0644 | 16.06 | 35600 | 0.2141 | 0.1644 |
| 0.0596 | 16.24 | 36000 | 0.2098 | 0.1628 |
| 0.0629 | 16.42 | 36400 | 0.1969 | 0.1616 |
| 0.0598 | 16.6 | 36800 | 0.2026 | 0.1604 |
| 0.0628 | 16.78 | 37200 | 0.2050 | 0.1620 |
| 0.0616 | 16.96 | 37600 | 0.1958 | 0.1618 |
| 0.0538 | 17.14 | 38000 | 0.2093 | 0.1588 |
| 0.0573 | 17.32 | 38400 | 0.1995 | 0.1588 |
| 0.0555 | 17.5 | 38800 | 0.2077 | 0.1608 |
| 0.0555 | 17.68 | 39200 | 0.2036 | 0.1571 |
| 0.0578 | 17.86 | 39600 | 0.2045 | 0.1572 |
| 0.056 | 18.04 | 40000 | 0.2065 | 0.1593 |
| 0.0525 | 18.22 | 40400 | 0.2093 | 0.1580 |
| 0.0527 | 18.4 | 40800 | 0.2141 | 0.1585 |
| 0.0529 | 18.58 | 41200 | 0.2137 | 0.1585 |
| 0.0533 | 18.76 | 41600 | 0.2021 | 0.1558 |
| 0.0529 | 18.94 | 42000 | 0.2108 | 0.1535 |
| 0.05 | 19.12 | 42400 | 0.2114 | 0.1555 |
| 0.0479 | 19.31 | 42800 | 0.2091 | 0.1549 |
| 0.0509 | 19.49 | 43200 | 0.2145 | 0.1554 |
| 0.0486 | 19.67 | 43600 | 0.2061 | 0.1536 |
| 0.049 | 19.85 | 44000 | 0.2132 | 0.1548 |
| 0.0484 | 20.03 | 44400 | 0.2077 | 0.1523 |
| 0.0449 | 20.21 | 44800 | 0.2177 | 0.1529 |
| 0.0452 | 20.39 | 45200 | 0.2204 | 0.1517 |
| 0.0477 | 20.57 | 45600 | 0.2132 | 0.1517 |
| 0.048 | 20.75 | 46000 | 0.2119 | 0.1532 |
| 0.0469 | 20.93 | 46400 | 0.2109 | 0.1524 |
| 0.0439 | 21.11 | 46800 | 0.2118 | 0.1503 |
| 0.044 | 21.29 | 47200 | 0.2033 | 0.1474 |
| 0.0435 | 21.47 | 47600 | 0.2066 | 0.1485 |
| 0.0418 | 21.65 | 48000 | 0.2125 | 0.1491 |
| 0.0417 | 21.83 | 48400 | 0.2139 | 0.1487 |
| 0.0446 | 22.01 | 48800 | 0.2054 | 0.1493 |
| 0.039 | 22.19 | 49200 | 0.2179 | 0.1459 |
| 0.0414 | 22.37 | 49600 | 0.2118 | 0.1466 |
| 0.0394 | 22.55 | 50000 | 0.2104 | 0.1444 |
| 0.0381 | 22.73 | 50400 | 0.2095 | 0.1458 |
| 0.0382 | 22.91 | 50800 | 0.2193 | 0.1471 |
| 0.0391 | 23.09 | 51200 | 0.2143 | 0.1455 |
| 0.0365 | 23.27 | 51600 | 0.2198 | 0.1445 |
| 0.0368 | 23.46 | 52000 | 0.2151 | 0.1444 |
| 0.038 | 23.64 | 52400 | 0.2094 | 0.1439 |
| 0.038 | 23.82 | 52800 | 0.2137 | 0.1422 |
| 0.0374 | 24.0 | 53200 | 0.2180 | 0.1425 |
| 0.0352 | 24.18 | 53600 | 0.2207 | 0.1422 |
| 0.0343 | 24.36 | 54000 | 0.2269 | 0.1445 |
| 0.0353 | 24.54 | 54400 | 0.2222 | 0.1438 |
| 0.0348 | 24.72 | 54800 | 0.2224 | 0.1413 |
| 0.0342 | 24.9 | 55200 | 0.2146 | 0.1401 |
| 0.0337 | 25.08 | 55600 | 0.2246 | 0.1408 |
| 0.0327 | 25.26 | 56000 | 0.2161 | 0.1401 |
| 0.0339 | 25.44 | 56400 | 0.2212 | 0.1402 |
| 0.0324 | 25.62 | 56800 | 0.2203 | 0.1394 |
| 0.0319 | 25.8 | 57200 | 0.2145 | 0.1376 |
| 0.0317 | 25.98 | 57600 | 0.2147 | 0.1375 |
| 0.0302 | 26.16 | 58000 | 0.2213 | 0.1362 |
| 0.0309 | 26.34 | 58400 | 0.2218 | 0.1365 |
| 0.0308 | 26.52 | 58800 | 0.2167 | 0.1362 |
| 0.0294 | 26.7 | 59200 | 0.2169 | 0.1368 |
| 0.0297 | 26.88 | 59600 | 0.2163 | 0.1350 |
| 0.0289 | 27.06 | 60000 | 0.2188 | 0.1348 |
| 0.0284 | 27.24 | 60400 | 0.2172 | 0.1338 |
| 0.0278 | 27.42 | 60800 | 0.2230 | 0.1342 |
| 0.0283 | 27.6 | 61200 | 0.2233 | 0.1342 |
| 0.0292 | 27.79 | 61600 | 0.2238 | 0.1335 |
| 0.0286 | 27.97 | 62000 | 0.2218 | 0.1327 |
| 0.0262 | 28.15 | 62400 | 0.2220 | 0.1324 |
| 0.0274 | 28.33 | 62800 | 0.2182 | 0.1323 |
| 0.0279 | 28.51 | 63200 | 0.2170 | 0.1314 |
| 0.0269 | 28.69 | 63600 | 0.2228 | 0.1313 |
| 0.0264 | 28.87 | 64000 | 0.2209 | 0.1313 |
| 0.0254 | 29.05 | 64400 | 0.2224 | 0.1304 |
| 0.026 | 29.23 | 64800 | 0.2220 | 0.1302 |
| 0.0253 | 29.41 | 65200 | 0.2229 | 0.1304 |
| 0.0244 | 29.59 | 65600 | 0.2217 | 0.1298 |
| 0.025 | 29.77 | 66000 | 0.2223 | 0.1303 |
| 0.0255 | 29.95 | 66400 | 0.2220 | 0.1301 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
gbennett/xlm-roberta-base-finetuned-panx-de | 886feb94f8198653bd1b78bf1a652da6e24d810a | 2022-05-03T17:15:29.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | gbennett | null | gbennett/xlm-roberta-base-finetuned-panx-de | 1 | null | transformers | 31,598 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8654425558524246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1334
- F1: 0.8654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2541 | 1.0 | 525 | 0.1596 | 0.8242 |
| 0.1284 | 2.0 | 1050 | 0.1360 | 0.8499 |
| 0.0827 | 3.0 | 1575 | 0.1334 | 0.8654 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
laituan245/molt5-large | 1ad0b044adde4a7d11b9429427c97626945abbe1 | 2022-05-03T18:06:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2204.11817",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | laituan245 | null | laituan245/molt5-large | 1 | null | transformers | 31,599 | ---
license: apache-2.0
---
## Example Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-large", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large')
```
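A short generation follow-up to the snippet above (illustrative only; the SMILES string is an arbitrary placeholder, and the task-specific fine-tuned variants described in the paper are likely better suited for direct molecule-to-text translation):

```python
# Continuing from the snippet above (tokenizer and model already loaded).
smiles = "CC(=O)Oc1ccccc1C(=O)O"  # placeholder molecule (aspirin)
inputs = tokenizer(smiles, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```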
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|