modelId (stringlengths 4-112) | sha (stringlengths 40) | lastModified (stringlengths 24) | tags (sequence) | pipeline_tag (stringclasses, 29 values) | private (bool, 1 class) | author (stringlengths 2-38, ⌀) | config (null) | id (stringlengths 4-112) | downloads (float64, 0-36.8M, ⌀) | likes (float64, 0-712, ⌀) | library_name (stringclasses, 17 values) | __index_level_0__ (int64, 0-38.5k) | readme (stringlengths 0-186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tau/False_large_t5_8_1024_0.15_1 | a924ab30d2b65b483b8379c4026244be73c659ea | 2022-05-05T13:58:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_t5_8_1024_0.15_1 | 0 | null | transformers | 37,300 | Entry not found |
tau/False_large_random_paraNone_sentNone_span0_itFalse_sargmax_rrFalse_8_1024_0.15_1 | 7354316ac639d1ec7cc09bea14f466f4b9e2d253 | 2022-05-05T13:59:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_random_paraNone_sentNone_span0_itFalse_sargmax_rrFalse_8_1024_0.15_1 | 0 | null | transformers | 37,301 | Entry not found |
tau/False_large_t5_lm_8_1024_0.15_1 | f1b682bcc3c9a1d7cc03d4b33f3a8e7028f872cd | 2022-05-05T13:58:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_t5_lm_8_1024_0.15_1 | 0 | null | transformers | 37,302 | Entry not found |
tau/False_large_rouge_paraNone_sent0_spanNone_itFalse_sargmax_rrFalse_8_1024_0.15_1 | 1ebffa8a864a8130c00907ecbfff66e1a478a456 | 2022-05-05T14:00:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | tau | null | tau/False_large_rouge_paraNone_sent0_spanNone_itFalse_sargmax_rrFalse_8_1024_0.15_1 | 0 | null | transformers | 37,303 | Entry not found |
vuiseng9/roberta-l-squadv1.1 | da7d4886fa25022693c27cba8a186bd59f40e30d | 2022-05-05T15:09:27.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | vuiseng9 | null | vuiseng9/roberta-l-squadv1.1 | 0 | null | transformers | 37,304 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: run05-roberta-large-squadv1.1-sl384-ds128-e2-tbs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run05-roberta-large-squadv1.1-sl384-ds128-e2-tbs16
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
# Train
```bash
python run_qa.py \
--model_name_or_path roberta-large \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 500 \
--learning_rate 3e-5 \
--fp16 \
--num_train_epochs 2 \
--per_device_eval_batch_size 64 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 1000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
MODEL=vuiseng9/roberta-l-squadv1.1
OUTDIR=eval-$(basename $MODEL)
mkdir -p $OUTDIR  # create the output directory up front so tee can write run.log
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path $MODEL \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
```bash
eval_exact_match = 88.4674
eval_f1 = 94.3001
eval_samples = 10790
``` |
theojolliffe/bart-large-cnn-finetuned-roundup-3-1 | a7562ec132b5c3cb6bc4217bd115a7a421f18c6e | 2022-05-05T16:14:12.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-3-1 | 0 | null | transformers | 37,305 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-cnn-finetuned-roundup-3-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-3-1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
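For reference, the list above maps roughly onto the following `TrainingArguments`; this is a sketch rather than the exact training script, and `output_dir` is a placeholder (Adam betas/epsilon are the library defaults and are therefore omitted).
```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above (output_dir is a placeholder).
training_args = TrainingArguments(
    output_dir="bart-large-cnn-finetuned-roundup-3-1",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```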
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 258 | 1.3238 | 50.228 | 29.5898 | 30.1054 | 47.1265 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theojolliffe/bart-large-cnn-finetuned-roundup-3-2 | 5ac8bb9f7ab97859e40a41f8f036d59d1781c003 | 2022-05-05T16:52:43.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-3-2 | 0 | null | transformers | 37,306 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-3-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-3-2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2234
- Rouge1: 50.9324
- Rouge2: 30.5257
- Rougel: 32.2166
- Rougelsum: 47.9849
- Gen Len: 141.6562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 258 | 1.2775 | 50.0638 | 30.3036 | 32.9555 | 47.3277 | 142.0 |
| 1.1818 | 2.0 | 516 | 1.2234 | 50.9324 | 30.5257 | 32.2166 | 47.9849 | 141.6562 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
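## How to use (sketch)
The snippet below is a minimal usage sketch (not part of the original card): it loads this checkpoint with the `transformers` summarization pipeline. The input text and length limits are placeholders, not values taken from the training run.
```python
from transformers import pipeline

# Minimal usage sketch; the article text and length limits are placeholders.
summarizer = pipeline("summarization", model="theojolliffe/bart-large-cnn-finetuned-roundup-3-2")
article = "Replace this with the round-up text to summarise."
print(summarizer(article, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```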
|
theojolliffe/bart-large-cnn-finetuned-roundup-3-4 | 45598d7f57b3bbedc80ada37b12633534f6d44ed | 2022-05-05T17:10:48.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-3-4 | 0 | null | transformers | 37,307 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-3-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-3-4
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1949
- Rouge1: 49.6216
- Rouge2: 29.1874
- Rougel: 32.042
- Rougelsum: 46.3679
- Gen Len: 140.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 258 | 1.2708 | 48.8914 | 29.2868 | 30.6203 | 46.2886 | 142.0 |
| 1.1751 | 2.0 | 516 | 1.1869 | 49.3567 | 28.4751 | 31.3075 | 46.3408 | 141.75 |
| 1.1751 | 3.0 | 774 | 1.1869 | 48.8335 | 28.4976 | 30.5434 | 46.2584 | 141.625 |
| 0.7391 | 4.0 | 1032 | 1.1949 | 49.6216 | 29.1874 | 32.042 | 46.3679 | 140.9688 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
shoubhik/electra_abbv_20k_data_multilabel | c017cc61c2873e017c3d30895de0d468e0411279 | 2022-05-05T16:55:27.000Z | [
"pytorch"
] | null | false | shoubhik | null | shoubhik/electra_abbv_20k_data_multilabel | 0 | null | null | 37,308 | Entry not found |
theojolliffe/bart-large-cnn-finetuned-roundup-3-8 | 9570f2d36b3e419a30ead39dc7f0d1b489e3074b | 2022-05-05T18:21:12.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-3-8 | 0 | null | transformers | 37,309 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-3-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-3-8
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4132
- Rouge1: 49.6606
- Rouge2: 28.4044
- Rougel: 31.5419
- Rougelsum: 46.2463
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 258 | 1.2686 | 48.8513 | 28.7007 | 31.1199 | 45.7318 | 142.0 |
| 1.1738 | 2.0 | 516 | 1.1884 | 49.8072 | 28.9817 | 31.3611 | 46.9639 | 141.6875 |
| 1.1738 | 3.0 | 774 | 1.1970 | 49.3865 | 28.3426 | 30.0945 | 46.4681 | 141.3438 |
| 0.7069 | 4.0 | 1032 | 1.1984 | 50.6743 | 29.4728 | 31.5364 | 47.989 | 141.7188 |
| 0.7069 | 5.0 | 1290 | 1.2494 | 49.4461 | 28.9295 | 31.0334 | 46.6611 | 142.0 |
| 0.4618 | 6.0 | 1548 | 1.2954 | 50.6789 | 30.2783 | 32.1932 | 47.5929 | 142.0 |
| 0.4618 | 7.0 | 1806 | 1.3638 | 49.9476 | 30.223 | 32.4346 | 46.7383 | 142.0 |
| 0.3293 | 8.0 | 2064 | 1.4132 | 49.6606 | 28.4044 | 31.5419 | 46.2463 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
tanviraumi/pegasus-samsum | 2001c596d6267dd6fb9bc067d320da67911b8536 | 2022-05-06T02:20:33.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | tanviraumi | null | tanviraumi/pegasus-samsum | 0 | null | transformers | 37,310 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0.dev20220502+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
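## How to use (sketch)
Since the model was fine-tuned on samsum, a usage sketch would feed it a short chat-style dialogue; the conversation below is a made-up placeholder, not data from the card.
```python
from transformers import pipeline

# Dialogue-summarization sketch; the dialogue is a made-up placeholder in samsum style.
summarizer = pipeline("summarization", model="tanviraumi/pegasus-samsum")
dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place.\nAnna: Great, see you there."
print(summarizer(dialogue)[0]["summary_text"])
```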
|
nateraw/quickdraw | d09e3aa8ce0d28428a78211bdcf5c8cda26f1f6a | 2022-05-05T20:37:25.000Z | [
"pytorch",
"license:mit"
] | null | false | nateraw | null | nateraw/quickdraw | 0 | null | null | 37,311 | ---
license: mit
---
|
hsiehpinghan/distilbert-base-uncased-finetuned-imdb-accelerate | 1bf9d75adf0224be96687e10eb7f0d4ea117cafd | 2022-05-05T22:21:19.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | hsiehpinghan | null | hsiehpinghan/distilbert-base-uncased-finetuned-imdb-accelerate | 0 | null | transformers | 37,312 | Entry not found |
miazhao/deberta_base_model_s3_ccnet_airbnb_dat | cd4931049f741a4a7bdbe97fb8e564d4792b911c | 2022-05-07T15:38:48.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | miazhao | null | miazhao/deberta_base_model_s3_ccnet_airbnb_dat | 0 | null | transformers | 37,313 | Entry not found |
nizamudma/t5-base-finetuned-cnn-2 | 512a36ec8e7f14e7e68f3fdec85c1f1b5d439ead | 2022-05-08T12:48:13.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | nizamudma | null | nizamudma/t5-base-finetuned-cnn-2 | 0 | null | transformers | 37,314 | Entry not found |
guhuawuli/distilgpt2-finetuned-wikitext2 | f6deb3bb6e74bcf1b24d29338243998ef7f50eea | 2022-05-06T08:26:24.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | guhuawuli | null | guhuawuli/distilgpt2-finetuned-wikitext2 | 0 | null | transformers | 37,315 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6652
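Since this is a causal language model, the evaluation loss can be read as mean per-token cross-entropy, which corresponds to a perplexity of roughly exp(3.6652) ≈ 39.1; the snippet below is just that conversion (assuming the reported loss is in nats per token).
```python
import math

# Perplexity from mean per-token cross-entropy (assumes the loss above is in nats).
print(math.exp(3.6652))  # ~39.1
```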
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9109 | 1.0 | 584 | 3.6956 |
| 3.7555 | 2.0 | 1168 | 3.6712 |
| 3.7002 | 3.0 | 1752 | 3.6652 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theojolliffe/bart-large-cnn-finetuned-roundup-4-8 | 6a281204e91790cd34388eece2cca83d0df30235 | 2022-05-06T13:12:03.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-roundup-4-8 | 0 | null | transformers | 37,316 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-roundup-4-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-4-8
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7882
- Rouge1: 54.2292
- Rouge2: 37.3874
- Rougel: 40.3261
- Rougelsum: 52.2155
- Gen Len: 141.8889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9442 | 53.1634 | 33.7744 | 35.5688 | 50.7523 | 142.0 |
| 1.1285 | 2.0 | 796 | 0.8305 | 54.0713 | 35.7079 | 37.5147 | 51.6285 | 142.0 |
| 0.6796 | 3.0 | 1194 | 0.7735 | 52.6656 | 34.0198 | 36.8075 | 50.1502 | 142.0 |
| 0.4572 | 4.0 | 1592 | 0.7759 | 53.6269 | 35.4308 | 38.3735 | 51.1369 | 141.7222 |
| 0.4572 | 5.0 | 1990 | 0.7527 | 54.4206 | 36.0907 | 38.0818 | 51.7885 | 142.0 |
| 0.3171 | 6.0 | 2388 | 0.7755 | 54.9642 | 38.0459 | 41.6383 | 52.8847 | 142.0 |
| 0.2269 | 7.0 | 2786 | 0.7801 | 54.1637 | 35.9853 | 39.5262 | 51.6562 | 142.0 |
| 0.1686 | 8.0 | 3184 | 0.7882 | 54.2292 | 37.3874 | 40.3261 | 52.2155 | 141.8889 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
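## Metric computation (sketch)
The ROUGE columns above come from the standard `rouge` metric; below is a minimal scoring sketch using the `evaluate` package. This is an assumption about tooling — the original run may have used the older `datasets` metric, and some versions report fractions rather than percentages — and both strings are placeholders.
```python
import evaluate  # requires the rouge_score package

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the cat sat on the mat"],            # placeholder model output
    references=["the cat was sitting on the mat"],     # placeholder reference summary
)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum
```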
|
theojolliffe/distilbart-cnn-12-6-finetuned-roundup-4-2 | ffb72aa3b8d1917f540fc88d892668b7da5f05be | 2022-05-06T10:53:49.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-12-6-finetuned-roundup-4-2 | 0 | null | transformers | 37,317 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-roundup-4-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-roundup-4-2
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0837
- Rouge1: 52.9859
- Rouge2: 33.2082
- Rougel: 34.2505
- Rougelsum: 50.4194
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 1.1731 | 52.5053 | 33.0302 | 34.0812 | 49.9567 | 141.6481 |
| 1.4188 | 2.0 | 796 | 1.0837 | 52.9859 | 33.2082 | 34.2505 | 50.4194 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theojolliffe/distilbart-cnn-12-6-finetuned-roundup-4-4 | 2dd16dcb16cefa0ba10facbd12907f509849fcab | 2022-05-06T11:02:18.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-12-6-finetuned-roundup-4-4 | 0 | null | transformers | 37,318 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-roundup-4-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-roundup-4-4
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9444
- Rouge1: 53.2401
- Rouge2: 33.8737
- Rougel: 36.4695
- Rougelsum: 50.8979
- Gen Len: 141.5185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 1.1590 | 52.4465 | 33.664 | 35.2295 | 50.0326 | 141.6852 |
| 1.4068 | 2.0 | 796 | 1.0174 | 53.3143 | 34.1363 | 35.8354 | 51.2277 | 141.8889 |
| 0.9247 | 3.0 | 1194 | 0.9575 | 52.7672 | 33.1797 | 35.9617 | 50.3643 | 142.0 |
| 0.731 | 4.0 | 1592 | 0.9444 | 53.2401 | 33.8737 | 36.4695 | 50.8979 | 141.5185 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/justinsaas | ec82ecf81f4c05f11fdffc6ed6932ff783cb19b4 | 2022-05-06T10:20:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/justinsaas | 0 | null | transformers | 37,319 | ---
language: en
thumbnail: http://www.huggingtweets.com/justinsaas/1651832427066/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1365425625616556045/NDhia9nF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Justin Welsh</div>
<div style="text-align: center; font-size: 14px;">@justinsaas</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Justin Welsh.
| Data | Justin Welsh |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 7 |
| Short tweets | 510 |
| Tweets kept | 2730 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2e9i64ex/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @justinsaas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/e0opxlcx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/e0opxlcx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/justinsaas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vai6hav/wav2vec2-large-xls-r-300m-turkish-colab | 218481e8ff37675a5517122e4614058c71824f05 | 2022-05-25T16:14:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vai6hav | null | vai6hav/wav2vec2-large-xls-r-300m-turkish-colab | 0 | null | transformers | 37,320 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
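## How to use (sketch)
As a usage sketch (not part of the original card), the checkpoint can be loaded with the ASR pipeline; `sample.wav` is a placeholder path, and ffmpeg is assumed to be available for audio decoding.
```python
from transformers import pipeline

# Minimal transcription sketch; "sample.wav" is a placeholder audio file (16 kHz mono works best).
asr = pipeline("automatic-speech-recognition",
               model="vai6hav/wav2vec2-large-xls-r-300m-turkish-colab")
print(asr("sample.wav")["text"])
```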
|
pidanr/pegasus-bbcnews | 7863e7b60d355691b39425210fc01aafb6025cf6 | 2022-05-06T20:48:59.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | pidanr | null | pidanr/pegasus-bbcnews | 0 | null | transformers | 37,321 | ---
tags:
- generated_from_trainer
model-index:
- name: pegasus-bbcnews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-bbcnews
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huxxx657/distilbert-base-uncased-finetuned-squad | 3b10f21c18355e3f10aa552cb404b7ec9a965219 | 2022-05-07T01:12:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | huxxx657 | null | huxxx657/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 37,322 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8707 | 0.2 | 1107 | 1.6592 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
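## How to use (sketch)
As a usage sketch (not part of the original card), the fine-tuned checkpoint can be queried with the question-answering pipeline; the question and context strings below are placeholders.
```python
from transformers import pipeline

# Minimal extractive-QA sketch; question and context are placeholders.
qa = pipeline("question-answering", model="huxxx657/distilbert-base-uncased-finetuned-squad")
result = qa(question="Where was the treaty signed?",
            context="The treaty was signed in Paris in 1783.")
print(result["answer"], result["score"])
```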
|
lilitket/20220507-001726 | 4f44f1e7d6ac1dd5d0fc33e6d03072bebf1311f6 | 2022-05-07T05:15:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220507-001726 | 0 | null | transformers | 37,323 | Entry not found |
lilitket/20220507-074000 | 8f44ec521b4b59a07ecd72301da184f2063e5ef0 | 2022-05-07T11:15:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220507-074000 | 0 | null | transformers | 37,324 | Entry not found |
lilitket/20220507-074029 | cb6e9e02ec4939b036a6d1ad36b15fef3758ac5d | 2022-05-07T08:35:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220507-074029 | 0 | null | transformers | 37,325 | Entry not found |
xraychen/mqa-unsupsim | 24114ebfab848dd51ec6ca07762e296702e42705 | 2022-05-07T09:27:02.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | xraychen | null | xraychen/mqa-unsupsim | 0 | null | transformers | 37,326 | Entry not found |
xraychen/mqa-cls | 0472766365105197956c0283ea28764ac277090b | 2022-05-07T09:51:04.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | xraychen | null | xraychen/mqa-cls | 0 | null | transformers | 37,327 | Entry not found |
xraychen/mqa-sim | bf03e05bee31d1fdd001f626687649bf263a9902 | 2022-05-07T09:57:04.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | xraychen | null | xraychen/mqa-sim | 0 | null | transformers | 37,328 | Entry not found |
theojolliffe/distill-pegasus-cnn-16-4-finetuned-arxiv | 58d2dfa76aef05806d7ef11c9478d5d7488b8f10 | 2022-05-07T13:01:58.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distill-pegasus-cnn-16-4-finetuned-arxiv | 0 | null | transformers | 37,329 | ---
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: distill-pegasus-cnn-16-4-finetuned-arxiv
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: scientific_papers
      type: scientific_papers
      args: arxiv
    metrics:
    - name: Rouge1
      type: rouge
      value: 31.2728
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distill-pegasus-cnn-16-4-finetuned-arxiv
This model is a fine-tuned version of [sshleifer/distill-pegasus-cnn-16-4](https://huggingface.co/sshleifer/distill-pegasus-cnn-16-4) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2938
- Rouge1: 31.2728
- Rouge2: 10.8703
- Rougel: 20.7479
- Rougelsum: 27.6892
- Gen Len: 98.7916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.6506 | 1.0 | 12690 | 3.2938 | 31.2728 | 10.8703 | 20.7479 | 27.6892 | 98.7916 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/murahokusai-tszzl | 9dbb2e5883a3af3496f7ab4ce65e4900d81b1ff0 | 2022-05-07T10:57:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/murahokusai-tszzl | 0 | null | transformers | 37,330 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1518044179217145857/vtps7fRk_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1520427808375332864/CcjPkyVR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">roon & Mura</div>
<div style="text-align: center; font-size: 14px;">@murahokusai-tszzl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from roon & Mura.
| Data | roon | Mura |
| --- | --- | --- |
| Tweets downloaded | 3237 | 502 |
| Retweets | 548 | 40 |
| Short tweets | 534 | 58 |
| Tweets kept | 2155 | 404 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/238j5g0z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @murahokusai-tszzl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nrlpovc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nrlpovc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/murahokusai-tszzl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/murahokusai | dd8d7534a27d446aff1496de8e65eb4b2dd10a52 | 2022-05-07T12:20:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/murahokusai | 0 | null | transformers | 37,331 | ---
language: en
thumbnail: http://www.huggingtweets.com/murahokusai/1651926004236/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1520427808375332864/CcjPkyVR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mura</div>
<div style="text-align: center; font-size: 14px;">@murahokusai</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mura.
| Data | Mura |
| --- | --- |
| Tweets downloaded | 503 |
| Retweets | 40 |
| Short tweets | 58 |
| Tweets kept | 405 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/boerayr7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @murahokusai's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hvo2sh8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hvo2sh8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/murahokusai')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
theojolliffe/bart-large-cnn-finetuned-pubmed-finetuned-roundup-e8 | 9d03db0f1f4fc28004c4de727fcb99e067e44841 | 2022-05-07T12:42:55.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-pubmed-finetuned-roundup-e8 | 0 | null | transformers | 37,332 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-pubmed-finetuned-roundup-e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-pubmed-finetuned-roundup-e8
This model is a fine-tuned version of [theojolliffe/bart-large-cnn-finetuned-pubmed](https://huggingface.co/theojolliffe/bart-large-cnn-finetuned-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1034
- Rouge1: 48.4605
- Rouge2: 28.5961
- Rougel: 32.5389
- Rougelsum: 45.7358
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 25 | 1.4278 | 47.952 | 29.4059 | 34.273 | 45.7244 | 142.0 |
| No log | 2.0 | 50 | 1.4351 | 48.7561 | 29.4049 | 30.631 | 46.4074 | 142.0 |
| No log | 3.0 | 75 | 1.5375 | 50.0069 | 31.4237 | 32.0834 | 47.679 | 142.0 |
| No log | 4.0 | 100 | 1.6647 | 49.6919 | 28.8821 | 31.9357 | 47.0396 | 142.0 |
| No log | 5.0 | 125 | 1.8070 | 47.8472 | 26.6979 | 30.7049 | 44.5848 | 142.0 |
| No log | 6.0 | 150 | 1.9981 | 47.8352 | 27.0966 | 31.4529 | 46.5251 | 142.0 |
| No log | 7.0 | 175 | 2.0904 | 48.6272 | 30.5493 | 32.7827 | 46.8462 | 142.0 |
| No log | 8.0 | 200 | 2.1034 | 48.4605 | 28.5961 | 32.5389 | 45.7358 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lilitket/20220507-122935 | 14416443c611d41acc8a0dd586317e5d4d416b52 | 2022-05-07T20:26:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220507-122935 | 0 | null | transformers | 37,333 | Entry not found |
miazhao/deberta_base_model_train_airbnb_ccnet_dat | 2fb416cb052ed95c04637693f4814ac294075227 | 2022-05-08T00:05:55.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | miazhao | null | miazhao/deberta_base_model_train_airbnb_ccnet_dat | 0 | null | transformers | 37,334 | Entry not found |
huggingtweets/drmichaellevin | c6bc84c066ba22f40b036904d30e771df7250aef | 2022-05-07T21:05:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/drmichaellevin | 0 | null | transformers | 37,335 | ---
language: en
thumbnail: http://www.huggingtweets.com/drmichaellevin/1651957516663/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/3727122709/dad151a96c197bb70f5ae7e4c42f6bd9_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Michael Levin</div>
<div style="text-align: center; font-size: 14px;">@drmichaellevin</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Michael Levin.
| Data | Michael Levin |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 329 |
| Short tweets | 617 |
| Tweets kept | 2303 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/23duqnbi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @drmichaellevin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pwpb2w2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pwpb2w2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/drmichaellevin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e4 | b9028fa54e0fedcfcf2dbd8cf1e4173fabbd0626 | 2022-05-07T22:40:15.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e4 | 0 | null | transformers | 37,336 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e4
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8874
- Rouge1: 53.8193
- Rouge2: 34.9325
- Rougel: 37.7425
- Rougelsum: 51.3935
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.5003 | 1.0 | 795 | 1.0794 | 51.738 | 31.9115 | 34.8247 | 49.603 | 142.0 |
| 0.8923 | 2.0 | 1590 | 0.9549 | 53.7436 | 35.1983 | 37.8041 | 51.8837 | 142.0 |
| 0.7274 | 3.0 | 2385 | 0.9023 | 54.2052 | 35.8112 | 38.4288 | 52.1851 | 142.0 |
| 0.5554 | 4.0 | 3180 | 0.8874 | 53.8193 | 34.9325 | 37.7425 | 51.3935 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lanesket/RASTaBERTa | c6ccf7e86029224d8c014f4ac0192017d3ed4c15 | 2022-05-09T11:20:21.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | lanesket | null | lanesket/RASTaBERTa | 0 | null | transformers | 37,337 | Entry not found |
lilitket/20220507-235206 | 517fbabe82715d933332a9febf462650532e55fe | 2022-05-08T02:33:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220507-235206 | 0 | null | transformers | 37,338 | Entry not found |
sam999/albert-base-v1-finetuned-squad | c2ac4a431013daf1012bbcec4c19b4a97905d55d | 2022-05-10T01:30:24.000Z | [
"pytorch",
"tensorboard",
"albert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | sam999 | null | sam999/albert-base-v1-finetuned-squad | 0 | null | transformers | 37,339 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: albert-base-v1-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v1-finetuned-squad
This model is a fine-tuned version of [albert-base-v1](https://huggingface.co/albert-base-v1) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9271 | 1.0 | 5540 | 0.9426 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huxxx657/roberta-base-finetuned-squad | 2fafeee4595aa8ebf196a19ab39f584e2f880773 | 2022-05-08T19:57:20.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | huxxx657 | null | huxxx657/roberta-base-finetuned-squad | 0 | null | transformers | 37,340 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8557 | 1.0 | 8239 | 0.8152 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
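## How to use (sketch)
Because this model was fine-tuned on squad_v2, which contains unanswerable questions, a usage sketch should enable impossible-answer handling in the QA pipeline; the question and context strings are placeholders.
```python
from transformers import pipeline

# SQuAD v2-style QA sketch; handle_impossible_answer lets the pipeline return an empty answer.
qa = pipeline("question-answering", model="huxxx657/roberta-base-finetuned-squad")
result = qa(question="Who signed the treaty?",
            context="The document describes the weather in Paris.",
            handle_impossible_answer=True)
print(result)  # an empty answer string indicates "no answer found"
```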
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e4 | 3c069b89a48ee0ea6b1bfe0af15c0c10ea4b32e4 | 2022-05-08T08:16:47.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e4 | 0 | null | transformers | 37,341 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-v3-e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e4
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7948
- Rouge1: 52.8917
- Rouge2: 33.9404
- Rougel: 37.0138
- Rougelsum: 50.2918
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9591 | 52.9984 | 33.2737 | 34.5312 | 50.3676 | 142.0 |
| 1.1253 | 2.0 | 796 | 0.8372 | 54.1354 | 34.9653 | 37.381 | 51.0988 | 142.0 |
| 0.6899 | 3.0 | 1194 | 0.7997 | 52.884 | 34.0614 | 37.6308 | 50.222 | 141.6296 |
| 0.4982 | 4.0 | 1592 | 0.7948 | 52.8917 | 33.9404 | 37.0138 | 50.2918 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
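## How to use
A minimal usage sketch (not from the original card): the checkpoint is a BART summarizer, so it can be driven through the `summarization` pipeline. The input text is an illustrative placeholder, and the length cap simply mirrors the ~142-token generation length reported above rather than a recommended setting.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e4")

# Placeholder document; the card does not describe the target domain.
report = (
    "The project team reviewed the incident and identified the root cause as a "
    "misconfigured deployment script. A fix was rolled out and extra monitoring "
    "was added to catch similar failures earlier in the release process."
)
summary = summarizer(report, max_length=142, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```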
|
theojolliffe/distill-pegasus-cnn-arxiv-pubmed-v3-e4 | 4f5298ebc65cdaa501cc6a6162bdfc8f8043a08a | 2022-05-08T08:13:48.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distill-pegasus-cnn-arxiv-pubmed-v3-e4 | 0 | null | transformers | 37,342 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distill-pegasus-cnn-arxiv-pubmed-v3-e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distill-pegasus-cnn-arxiv-pubmed-v3-e4
This model is a fine-tuned version of [theojolliffe/distill-pegasus-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distill-pegasus-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8962
- Rouge1: 49.5676
- Rouge2: 30.7141
- Rougel: 34.191
- Rougelsum: 45.0269
- Gen Len: 125.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.7729 | 1.0 | 795 | 2.1332 | 48.4776 | 29.8247 | 33.8775 | 44.0771 | 126.2407 |
| 2.3362 | 2.0 | 1590 | 1.9953 | 48.7574 | 30.0148 | 33.8955 | 44.3967 | 126.2407 |
| 2.2766 | 3.0 | 2385 | 1.9159 | 49.3004 | 30.5548 | 34.5702 | 44.8082 | 125.5 |
| 2.1815 | 4.0 | 3180 | 1.8962 | 49.5676 | 30.7141 | 34.191 | 45.0269 | 125.8333 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
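## How to use
A minimal sketch, not part of the original card: the checkpoint loaded directly with `AutoModelForSeq2SeqLM` and beam-search generation. The input text and generation parameters are illustrative assumptions, not settings documented for this fine-tune.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "theojolliffe/distill-pegasus-cnn-arxiv-pubmed-v3-e4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; truncation guards against texts longer than the encoder limit.
inputs = tokenizer(
    "The service outage lasted two hours and was traced to an expired certificate. "
    "The team rotated the certificate and automated future renewals.",
    return_tensors="pt",
    truncation=True,
)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```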
|
negfir/bert_uncased_L-6_H-768_A-12_wiki103 | 90ed8836cbad49441a8b835d2b9d071db328cd92 | 2022-05-08T08:06:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-6_H-768_A-12_wiki103 | 0 | null | transformers | 37,343 | Entry not found |
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e2 | 5890b53936a7a378677d74c99504e6de97859cf3 | 2022-05-08T09:15:04.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e2 | 0 | null | transformers | 37,344 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-v3-e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e2
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9021
- Rouge1: 53.515
- Rouge2: 33.4314
- Rougel: 35.1718
- Rougelsum: 50.8086
- Gen Len: 141.7963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9656 | 52.7601 | 33.0555 | 34.4738 | 50.449 | 142.0 |
| 1.1333 | 2.0 | 796 | 0.9021 | 53.515 | 33.4314 | 35.1718 | 50.8086 | 141.7963 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e1 | 14d0b0f1e642ae7f60472c0f26dc6c6a8c6fc7f1 | 2022-05-08T10:40:52.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e1 | 0 | null | transformers | 37,345 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-v3-e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e1
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|:---------:|:-------:|
| No log | 1.0 | 398 | 1.0222 | 52.722 | 33.3965 | 35.513 | 50.3104 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nestoralvaro/t5-small-finetuned-xsum | 39150a716bab543e9aaf975be982af6680086e43 | 2022-05-08T15:50:12.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nestoralvaro | null | nestoralvaro/t5-small-finetuned-xsum | 0 | null | transformers | 37,346 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 21.4274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2928
- Rouge1: 21.4274
- Rouge2: 8.18
- Rougel: 21.3234
- Rougelsum: 21.3185
- Gen Len: 4.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.5264 | 1.0 | 12753 | 2.2928 | 21.4274 | 8.18 | 21.3234 | 21.3185 | 4.9993 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
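## How to use
A minimal sketch (not from the original card): T5 checkpoints are usually prompted with a task prefix, but the card does not say whether one was used during fine-tuning, so the "summarize: " prefix below is an assumption. The article text is a placeholder.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="nestoralvaro/t5-small-finetuned-xsum")

article = (
    "Local councils have agreed to fund a new cycle path connecting the two towns, "
    "with construction expected to begin next spring."
)
# "summarize: " is the conventional T5 prefix; treat it as an assumption here.
result = summarizer("summarize: " + article, max_length=32, min_length=5)
print(result[0]["summary_text"])
```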
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e8 | 415e19b94fb401fc51057a5aee68e8009200c870 | 2022-05-08T13:21:22.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e8 | 0 | null | transformers | 37,347 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-v3-e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e8
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7778
- Rouge1: 55.6307
- Rouge2: 38.1306
- Rougel: 40.7127
- Rougelsum: 53.3739
- Gen Len: 141.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9563 | 53.0477 | 33.0365 | 35.4483 | 50.5525 | 142.0 |
| 1.1233 | 2.0 | 796 | 0.8260 | 53.8629 | 34.5031 | 37.08 | 51.129 | 142.0 |
| 0.6753 | 3.0 | 1194 | 0.7898 | 53.6508 | 34.7559 | 37.0541 | 50.7535 | 142.0 |
| 0.4532 | 4.0 | 1592 | 0.7765 | 53.2109 | 34.5657 | 37.3743 | 50.9145 | 142.0 |
| 0.4532 | 5.0 | 1990 | 0.7551 | 55.0766 | 37.5722 | 40.0653 | 52.5655 | 142.0 |
| 0.3142 | 6.0 | 2388 | 0.7744 | 54.7674 | 36.7664 | 39.9027 | 52.1542 | 142.0 |
| 0.2257 | 7.0 | 2786 | 0.7728 | 55.6258 | 37.9929 | 40.8985 | 53.4423 | 142.0 |
| 0.1674 | 8.0 | 3184 | 0.7778 | 55.6307 | 38.1306 | 40.7127 | 53.3739 | 141.9815 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lilitket/20220508-155639 | 85f72f97bfc2e4a8c0e80f010acd003a9e246c2a | 2022-05-08T13:39:38.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220508-155639 | 0 | null | transformers | 37,348 | Entry not found |
prashanth/mbart-large-cc25-finetuned-hi-to-en | c5c491e11a0ad9ce008b782c0ac3f37614f5262b | 2022-05-11T08:57:01.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"dataset:hindi_english_machine_translation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | prashanth | null | prashanth/mbart-large-cc25-finetuned-hi-to-en | 0 | null | transformers | 37,349 | ---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
model-index:
- name: mbart-large-cc25-finetuned-hi-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-hi-to-en
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
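## How to use
A minimal sketch, not from the original card: mBART-cc25 checkpoints are usually queried with explicit language codes, so the `hi_IN`/`en_XX` codes and the forced BOS token below follow the base model's convention and are assumptions about this fine-tune. The Hindi input ("How are you?") is a placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "prashanth/mbart-large-cc25-finetuned-hi-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="hi_IN", tgt_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Force English as the target language when decoding (mBART convention).
inputs = tokenizer("आप कैसे हैं?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    max_length=64,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```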
|
theojolliffe/bart-cnn-pubmed-arxiv-v3-e16 | db003bf5fd4f1e8bb3b736a548aadfffb4cc0835 | 2022-05-08T17:05:11.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv-v3-e16 | 0 | null | transformers | 37,350 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-v3-e16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-v3-e16
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9340
- Rouge1: 57.6388
- Rouge2: 44.834
- Rougel: 47.5043
- Rougelsum: 56.1122
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2407 | 1.0 | 795 | 0.9270 | 53.3842 | 33.8559 | 35.7393 | 50.6907 | 142.0 |
| 0.704 | 2.0 | 1590 | 0.8092 | 53.2159 | 35.0209 | 37.8641 | 50.9514 | 141.963 |
| 0.5277 | 3.0 | 2385 | 0.7588 | 52.7709 | 34.2453 | 36.6319 | 50.1137 | 142.0 |
| 0.3449 | 4.0 | 3180 | 0.7617 | 52.0249 | 34.5679 | 37.3669 | 49.7643 | 142.0 |
| 0.2668 | 5.0 | 3975 | 0.7575 | 54.3131 | 35.3985 | 38.9242 | 51.5667 | 142.0 |
| 0.1756 | 6.0 | 4770 | 0.8161 | 53.6214 | 36.4376 | 39.1745 | 51.3685 | 142.0 |
| 0.1326 | 7.0 | 5565 | 0.7848 | 55.7549 | 38.8517 | 42.0106 | 53.4243 | 142.0 |
| 0.1051 | 8.0 | 6360 | 0.7912 | 55.2709 | 39.952 | 42.7398 | 53.6479 | 142.0 |
| 0.0781 | 9.0 | 7155 | 0.8491 | 55.5698 | 40.0599 | 42.9521 | 53.6734 | 142.0 |
| 0.0685 | 10.0 | 7950 | 0.8684 | 55.1142 | 40.3136 | 43.699 | 53.5463 | 142.0 |
| 0.0494 | 11.0 | 8745 | 0.8886 | 57.7988 | 43.6659 | 46.0913 | 56.3383 | 142.0 |
| 0.0338 | 12.0 | 9540 | 0.8827 | 57.0166 | 42.7553 | 46.2344 | 55.2893 | 142.0 |
| 0.0296 | 13.0 | 10335 | 0.9111 | 56.7741 | 42.6116 | 45.1692 | 55.2065 | 142.0 |
| 0.0228 | 14.0 | 11130 | 0.9209 | 56.635 | 43.2461 | 46.314 | 55.049 | 142.0 |
| 0.0189 | 15.0 | 11925 | 0.9193 | 56.4404 | 43.4216 | 46.279 | 55.1403 | 142.0 |
| 0.0152 | 16.0 | 12720 | 0.9340 | 57.6388 | 44.834 | 47.5043 | 56.1122 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
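## How to use
A minimal sketch (not part of the original card): loading the checkpoint directly and summarizing with beam search. The input is a placeholder, and the length cap only mirrors the ~142-token generation length reported in the table above.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "theojolliffe/bart-cnn-pubmed-arxiv-v3-e16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

report = (
    "The audit found that response times improved after the caching layer was "
    "introduced, but memory usage grew by thirty percent under peak load."
)
batch = tokenizer(report, return_tensors="pt", truncation=True)
ids = model.generate(**batch, num_beams=4, max_length=142, early_stopping=True)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```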
|
negfir/bert_uncased_L-6_H-512_A-8_wiki103 | 5c78785a750e5e064bb87569e0b05d31d8f4507e | 2022-05-08T14:26:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-6_H-512_A-8_wiki103 | 0 | null | transformers | 37,351 | Entry not found |
lilitket/20220508-183654 | d384c835474c8b2b4c506070c5b05a53a7ed353f | 2022-05-08T18:56:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220508-183654 | 0 | null | transformers | 37,352 | Entry not found |
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e32 | 0344e8f989e77e15f320082d7a452e993ec33893 | 2022-05-08T18:40:29.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e32 | 0 | null | transformers | 37,353 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-v3-e32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e32
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9707
- Rouge1: 58.6575
- Rouge2: 47.1055
- Rougel: 50.0715
- Rougelsum: 57.58
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 0.9589 | 52.4374 | 32.0538 | 34.189 | 49.8178 | 142.0 |
| 1.1222 | 2.0 | 796 | 0.8144 | 54.363 | 35.2782 | 37.5982 | 51.9121 | 142.0 |
| 0.6686 | 3.0 | 1194 | 0.7747 | 53.3334 | 34.9112 | 38.1684 | 50.9676 | 142.0 |
| 0.4394 | 4.0 | 1592 | 0.7660 | 53.2391 | 34.1677 | 38.4917 | 50.582 | 142.0 |
| 0.4394 | 5.0 | 1990 | 0.7508 | 54.3922 | 36.631 | 39.6881 | 52.4238 | 142.0 |
| 0.2962 | 6.0 | 2388 | 0.8112 | 53.9595 | 36.1326 | 38.937 | 51.8107 | 142.0 |
| 0.201 | 7.0 | 2786 | 0.7842 | 55.3659 | 38.4021 | 41.1556 | 53.3145 | 142.0 |
| 0.1414 | 8.0 | 3184 | 0.7557 | 54.8476 | 38.7707 | 41.8756 | 53.3081 | 142.0 |
| 0.107 | 9.0 | 3582 | 0.8296 | 55.7594 | 39.3691 | 41.6456 | 53.9381 | 142.0 |
| 0.107 | 10.0 | 3980 | 0.8298 | 54.8163 | 38.9233 | 42.4104 | 52.9344 | 142.0 |
| 0.0838 | 11.0 | 4378 | 0.8492 | 56.3438 | 41.5532 | 44.6348 | 54.6106 | 141.8704 |
| 0.0637 | 12.0 | 4776 | 0.8619 | 56.8559 | 41.2682 | 43.4566 | 54.7799 | 142.0 |
| 0.051 | 13.0 | 5174 | 0.8733 | 57.4154 | 42.6009 | 44.401 | 56.0209 | 142.0 |
| 0.04 | 14.0 | 5572 | 0.8777 | 58.3095 | 44.7657 | 47.8527 | 56.7276 | 142.0 |
| 0.04 | 15.0 | 5970 | 0.8711 | 57.6542 | 43.1785 | 46.3796 | 56.0532 | 142.0 |
| 0.0341 | 16.0 | 6368 | 0.9038 | 57.7274 | 43.5198 | 45.8797 | 56.1525 | 142.0 |
| 0.0272 | 17.0 | 6766 | 0.8845 | 58.4461 | 44.9513 | 47.6616 | 57.0634 | 142.0 |
| 0.0231 | 18.0 | 7164 | 0.9108 | 58.5774 | 46.2637 | 49.9201 | 57.1939 | 141.963 |
| 0.018 | 19.0 | 7562 | 0.9059 | 58.7442 | 44.7141 | 47.6061 | 57.3604 | 142.0 |
| 0.018 | 20.0 | 7960 | 0.9133 | 57.2809 | 43.7722 | 46.2016 | 55.4421 | 142.0 |
| 0.0159 | 21.0 | 8358 | 0.9245 | 57.1685 | 44.5445 | 48.5015 | 55.9304 | 142.0 |
| 0.012 | 22.0 | 8756 | 0.9149 | 57.4727 | 44.2417 | 48.0224 | 56.1341 | 141.9444 |
| 0.0109 | 23.0 | 9154 | 0.9472 | 58.3537 | 45.2341 | 47.8222 | 56.8061 | 141.8148 |
| 0.0082 | 24.0 | 9552 | 0.9426 | 58.1553 | 45.6645 | 49.019 | 56.7908 | 142.0 |
| 0.0082 | 25.0 | 9950 | 0.9407 | 58.3571 | 46.0699 | 49.382 | 57.1456 | 142.0 |
| 0.0071 | 26.0 | 10348 | 0.9654 | 59.5689 | 47.2126 | 50.5317 | 58.2492 | 142.0 |
| 0.0057 | 27.0 | 10746 | 0.9651 | 58.2261 | 46.2797 | 49.8995 | 57.0725 | 142.0 |
| 0.0049 | 28.0 | 11144 | 0.9555 | 57.3502 | 44.2364 | 47.6214 | 55.69 | 142.0 |
| 0.0043 | 29.0 | 11542 | 0.9591 | 57.3909 | 44.5927 | 47.541 | 56.2071 | 142.0 |
| 0.0043 | 30.0 | 11940 | 0.9637 | 58.3275 | 46.1513 | 49.4288 | 57.073 | 142.0 |
| 0.0033 | 31.0 | 12338 | 0.9705 | 58.4669 | 46.613 | 49.5711 | 57.3531 | 142.0 |
| 0.0031 | 32.0 | 12736 | 0.9707 | 58.6575 | 47.1055 | 50.0715 | 57.58 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
negfir/bert_uncased_L-12_H-768_A-12_wiki103 | 4ca99c1763717a1be7db970259fafb889814cad7 | 2022-05-08T16:01:04.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-12_H-768_A-12_wiki103 | 0 | null | transformers | 37,354 | Entry not found |
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e32 | 18827069e2b1c1196de6ffbdb0235dd1f25d256f | 2022-05-08T20:42:06.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e32 | 0 | null | transformers | 37,355 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e32
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9622
- Rouge1: 58.4519
- Rouge2: 45.6847
- Rougel: 49.3188
- Rougelsum: 57.1351
- Gen Len: 141.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.4924 | 1.0 | 795 | 1.0924 | 52.3565 | 32.9081 | 34.6648 | 49.6351 | 142.0 |
| 0.8865 | 2.0 | 1590 | 0.9394 | 54.2962 | 35.9725 | 38.3888 | 51.5708 | 140.9815 |
| 0.6979 | 3.0 | 2385 | 0.8831 | 53.6795 | 35.226 | 37.4988 | 51.4424 | 141.8704 |
| 0.4868 | 4.0 | 3180 | 0.8457 | 53.9141 | 35.2212 | 37.6423 | 51.63 | 142.0 |
| 0.3903 | 5.0 | 3975 | 0.8252 | 54.8908 | 36.8468 | 39.072 | 52.6068 | 141.8704 |
| 0.2725 | 6.0 | 4770 | 0.8338 | 54.2424 | 36.4675 | 39.6312 | 51.9973 | 142.0 |
| 0.2177 | 7.0 | 5565 | 0.8224 | 54.0085 | 36.9395 | 39.7131 | 51.8476 | 142.0 |
| 0.1736 | 8.0 | 6360 | 0.8001 | 55.5106 | 38.8828 | 41.7174 | 53.3171 | 141.7222 |
| 0.1368 | 9.0 | 7155 | 0.8036 | 56.7284 | 40.8327 | 42.8486 | 54.6505 | 141.8519 |
| 0.1272 | 10.0 | 7950 | 0.8197 | 54.5703 | 38.5037 | 41.591 | 52.4417 | 141.2963 |
| 0.0977 | 11.0 | 8745 | 0.8463 | 55.3691 | 40.5406 | 43.9156 | 53.6637 | 141.7593 |
| 0.0768 | 12.0 | 9540 | 0.8467 | 56.7099 | 41.6472 | 44.8171 | 54.8111 | 142.0 |
| 0.0702 | 13.0 | 10335 | 0.8488 | 56.6646 | 41.2164 | 43.8938 | 54.7209 | 142.0 |
| 0.0597 | 14.0 | 11130 | 0.8543 | 55.7245 | 40.9593 | 42.5698 | 53.8763 | 142.0 |
| 0.0514 | 15.0 | 11925 | 0.8567 | 56.4837 | 41.8224 | 44.5484 | 54.9102 | 142.0 |
| 0.045 | 16.0 | 12720 | 0.8794 | 57.5862 | 43.4725 | 46.3658 | 55.9579 | 142.0 |
| 0.0367 | 17.0 | 13515 | 0.8974 | 57.1023 | 42.9042 | 45.8444 | 55.2216 | 142.0 |
| 0.0346 | 18.0 | 14310 | 0.9143 | 57.7781 | 43.8333 | 47.0943 | 56.0032 | 142.0 |
| 0.03 | 19.0 | 15105 | 0.9044 | 56.9211 | 41.9678 | 44.5081 | 54.8092 | 141.6667 |
| 0.0241 | 20.0 | 15900 | 0.9109 | 57.7747 | 44.1122 | 46.5743 | 55.9199 | 141.8148 |
| 0.0225 | 21.0 | 16695 | 0.9180 | 56.2307 | 42.2787 | 45.602 | 54.6285 | 142.0 |
| 0.0184 | 22.0 | 17490 | 0.9120 | 57.4024 | 43.657 | 46.5646 | 55.4614 | 142.0 |
| 0.0182 | 23.0 | 18285 | 0.9262 | 57.292 | 42.8935 | 46.1294 | 55.3741 | 141.963 |
| 0.016 | 24.0 | 19080 | 0.9268 | 58.2018 | 44.3914 | 47.7056 | 56.4628 | 142.0 |
| 0.0139 | 25.0 | 19875 | 0.9373 | 58.1187 | 44.7233 | 47.8946 | 56.26 | 142.0 |
| 0.0125 | 26.0 | 20670 | 0.9300 | 57.8399 | 44.3073 | 48.4549 | 56.1325 | 141.8889 |
| 0.012 | 27.0 | 21465 | 0.9487 | 57.8585 | 43.8361 | 47.6488 | 56.2748 | 142.0 |
| 0.0095 | 28.0 | 22260 | 0.9620 | 57.5966 | 44.0481 | 46.8771 | 56.079 | 141.6852 |
| 0.009 | 29.0 | 23055 | 0.9526 | 57.8869 | 44.2234 | 48.0884 | 56.3158 | 141.9815 |
| 0.008 | 30.0 | 23850 | 0.9626 | 58.2649 | 45.0371 | 48.5288 | 56.7707 | 141.9815 |
| 0.0076 | 31.0 | 24645 | 0.9640 | 58.1467 | 45.0457 | 48.7258 | 56.7111 | 141.3704 |
| 0.0072 | 32.0 | 25440 | 0.9622 | 58.4519 | 45.6847 | 49.3188 | 57.1351 | 141.9815 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
negfir/bert_uncased_L-6_H-256_A-4_wiki103 | 03c2a9f0ce0985509a65b94e3ad6cd05f0c9fdcd | 2022-05-08T18:29:43.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-6_H-256_A-4_wiki103 | 0 | null | transformers | 37,356 | Entry not found |
subhasisj/Ar-Mulitlingula-MiniLM | a861604fa75e713c8595e3dba945cffdbef3eadf | 2022-05-08T21:26:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | subhasisj | null | subhasisj/Ar-Mulitlingula-MiniLM | 0 | null | transformers | 37,357 | # Ar-Mulitlingual-MiniLM
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e12 | 9ee7f4589573b81b71f0540e0f325664988e404e | 2022-05-09T08:38:28.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e12 | 0 | null | transformers | 37,358 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e12
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8157
- Rouge1: 56.7429
- Rouge2: 41.0185
- Rougel: 44.1014
- Rougelsum: 54.8121
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.5037 | 1.0 | 795 | 1.0815 | 52.4727 | 33.4915 | 35.3774 | 50.1955 | 142.0 |
| 0.8894 | 2.0 | 1590 | 0.9462 | 52.8867 | 34.0406 | 36.5249 | 50.4636 | 141.5741 |
| 0.7037 | 3.0 | 2385 | 0.8841 | 53.7966 | 35.0969 | 38.4158 | 51.3369 | 142.0 |
| 0.4914 | 4.0 | 3180 | 0.8437 | 52.6766 | 34.0573 | 36.8907 | 50.3088 | 142.0 |
| 0.3945 | 5.0 | 3975 | 0.8067 | 54.3147 | 36.2081 | 39.6366 | 52.1494 | 142.0 |
| 0.2799 | 6.0 | 4770 | 0.8403 | 54.2813 | 37.0786 | 39.9196 | 51.9176 | 141.9815 |
| 0.2211 | 7.0 | 5565 | 0.8207 | 53.9403 | 36.517 | 39.0372 | 51.4491 | 141.9815 |
| 0.1795 | 8.0 | 6360 | 0.8014 | 55.6607 | 39.3082 | 41.8295 | 53.4674 | 142.0 |
| 0.1428 | 9.0 | 7155 | 0.8051 | 55.0575 | 38.823 | 41.8849 | 52.9606 | 142.0 |
| 0.1358 | 10.0 | 7950 | 0.8149 | 56.6986 | 41.0 | 43.5207 | 54.6402 | 142.0 |
| 0.1122 | 11.0 | 8745 | 0.8134 | 56.5416 | 40.9495 | 44.2989 | 54.5623 | 142.0 |
| 0.0873 | 12.0 | 9540 | 0.8157 | 56.7429 | 41.0185 | 44.1014 | 54.8121 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
negfir/bert_uncased_L-6_H-128_A-2_wiki103 | 814a1e3af347d97b805010ea43aefed5c3e77435 | 2022-05-08T21:49:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-6_H-128_A-2_wiki103 | 0 | null | transformers | 37,359 | Entry not found |
nestoralvaro/mT5_multilingual_XLSum-finetuned-xsum | 9fb21bba23fc6772ea5c50c5e1b051c2ed3cc29b | 2022-05-30T11:58:22.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nestoralvaro | null | nestoralvaro/mT5_multilingual_XLSum-finetuned-xsum | 0 | null | transformers | 37,360 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-xsum
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 36479 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
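## How to use
A minimal API sketch, not from the original card. Note that the table above reports a NaN validation loss and zero ROUGE for this fine-tune, so its outputs may not be meaningful; the snippet only shows how such a checkpoint would be called, with a placeholder Spanish input chosen because the base model is multilingual.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="nestoralvaro/mT5_multilingual_XLSum-finetuned-xsum")

# Illustrative input only; see the caveat about this checkpoint's metrics above.
text = (
    "El ayuntamiento anunció que la nueva biblioteca abrirá sus puertas el próximo mes "
    "tras dos años de obras."
)
print(summarizer(text, max_length=48, min_length=5)[0]["summary_text"])
```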
|
stevemobs/roberta-large-fine-tuned-squad-es | ed1af254c1b5c22fd4be20ed9d171cf2f419473a | 2022-05-09T09:43:18.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad_es",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/roberta-large-fine-tuned-squad-es | 0 | null | transformers | 37,361 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: roberta-large-fine-tuned-squad-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-fine-tuned-squad-es
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on the squad_es dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
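## How to use
A minimal usage sketch (not from the original card): the checkpoint was fine-tuned on squad_es, so the illustrative question and context below are in Spanish.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="stevemobs/roberta-large-fine-tuned-squad-es")

# Illustrative Spanish inputs; the answer span is extracted from the context.
result = qa(
    question="¿Dónde se celebró la reunión?",
    context="La reunión anual del equipo se celebró en Valencia durante la primera semana de mayo.",
)
print(result["answer"])
```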
|
lilitket/20220509-033828 | 87868cbb2d29dea4f255b7838619952ad7dac5bc | 2022-05-10T09:23:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | lilitket | null | lilitket/20220509-033828 | 0 | null | transformers | 37,362 | Entry not found |
huggingtweets/auto_nietzsche | 2bc411c7b77b6b1985454b307e4d367306c7e9d4 | 2022-05-09T04:34:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/auto_nietzsche | 0 | null | transformers | 37,363 | ---
language: en
thumbnail: http://www.huggingtweets.com/auto_nietzsche/1652070864000/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1294860316078223360/uznHCd3p_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Friedrich Nietszche Bot</div>
<div style="text-align: center; font-size: 14px;">@auto_nietzsche</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Friedrich Nietszche Bot.
| Data | Friedrich Nietszche Bot |
| --- | --- |
| Tweets downloaded | 48 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 48 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f29d5tl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @auto_nietzsche's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3iito7lq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3iito7lq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/auto_nietzsche')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jamesliao333 | e16259c206edb61d0446a51ab4dee453694ef877 | 2022-05-09T05:49:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jamesliao333 | 0 | null | transformers | 37,364 | ---
language: en
thumbnail: http://www.huggingtweets.com/jamesliao333/1652075372352/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1522973288288333825/NhsZowLa_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DON XMCA//素 Vitamin(RNG) 🦀 "MILLENNIUM 定制 Vision"</div>
<div style="text-align: center; font-size: 14px;">@jamesliao333</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DON XMCA//素 Vitamin(RNG) 🦀 "MILLENNIUM 定制 Vision".
| Data | DON XMCA//素 Vitamin(RNG) 🦀 "MILLENNIUM 定制 Vision" |
| --- | --- |
| Tweets downloaded | 202 |
| Retweets | 37 |
| Short tweets | 16 |
| Tweets kept | 149 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ed1hlxcu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jamesliao333's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mfrtr3lf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mfrtr3lf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jamesliao333')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e8 | 228ad568903ecd93b86d22779899c5d589f39d7d | 2022-05-09T08:48:07.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e8 | 0 | null | transformers | 37,365 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e8
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8329
- Rouge1: 53.3047
- Rouge2: 34.6219
- Rougel: 37.6148
- Rougelsum: 50.8973
- Gen Len: 141.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 398 | 1.1211 | 50.4753 | 30.5417 | 33.192 | 48.1321 | 141.8704 |
| 1.3657 | 2.0 | 796 | 0.9944 | 52.2197 | 33.6109 | 35.9448 | 50.0028 | 141.6111 |
| 0.887 | 3.0 | 1194 | 0.9149 | 52.796 | 33.7683 | 36.4941 | 50.4514 | 141.5926 |
| 0.6548 | 4.0 | 1592 | 0.8725 | 52.5353 | 33.4019 | 36.4573 | 50.2506 | 142.0 |
| 0.6548 | 5.0 | 1990 | 0.8540 | 53.2987 | 34.6476 | 38.314 | 51.163 | 141.4815 |
| 0.504 | 6.0 | 2388 | 0.8395 | 52.7218 | 34.6524 | 37.9921 | 50.5185 | 141.5556 |
| 0.4006 | 7.0 | 2786 | 0.8342 | 53.2251 | 35.2702 | 38.3763 | 51.1958 | 141.6667 |
| 0.3314 | 8.0 | 3184 | 0.8329 | 53.3047 | 34.6219 | 37.6148 | 50.8973 | 141.8704 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e16 | 1959f3c2cf36cc5adbf653af20c3ea550858b7aa | 2022-05-09T10:37:42.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/distilbart-cnn-arxiv-pubmed-v3-e16 | 0 | null | transformers | 37,366 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-arxiv-pubmed-v3-e16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-arxiv-pubmed-v3-e16
This model is a fine-tuned version of [theojolliffe/distilbart-cnn-arxiv-pubmed](https://huggingface.co/theojolliffe/distilbart-cnn-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8502
- Rouge1: 57.1726
- Rouge2: 42.87
- Rougel: 44.7485
- Rougelsum: 55.6955
- Gen Len: 141.5926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.4961 | 1.0 | 795 | 1.0907 | 53.2509 | 33.4232 | 34.4499 | 50.987 | 142.0 |
| 0.8874 | 2.0 | 1590 | 0.9408 | 52.9708 | 34.499 | 36.537 | 50.3924 | 140.4074 |
| 0.6994 | 3.0 | 2385 | 0.8731 | 53.4488 | 34.2476 | 37.4579 | 51.1979 | 142.0 |
| 0.4883 | 4.0 | 3180 | 0.8521 | 53.5463 | 34.7519 | 37.8143 | 51.106 | 142.0 |
| 0.3923 | 5.0 | 3975 | 0.8227 | 53.3556 | 35.0361 | 37.1719 | 50.9195 | 141.2222 |
| 0.2727 | 6.0 | 4770 | 0.8323 | 54.8422 | 37.333 | 39.6388 | 52.2975 | 141.8148 |
| 0.2158 | 7.0 | 5565 | 0.8252 | 54.0343 | 36.0109 | 38.34 | 51.6282 | 142.0 |
| 0.1734 | 8.0 | 6360 | 0.7985 | 54.9597 | 38.283 | 41.0033 | 52.9537 | 142.0 |
| 0.1366 | 9.0 | 7155 | 0.8112 | 56.315 | 40.3948 | 42.2944 | 54.3719 | 142.0 |
| 0.1275 | 10.0 | 7950 | 0.8238 | 55.8688 | 39.4747 | 43.0286 | 53.9269 | 142.0 |
| 0.0978 | 11.0 | 8745 | 0.8345 | 54.9934 | 40.0148 | 42.2721 | 53.324 | 142.0 |
| 0.0738 | 12.0 | 9540 | 0.8322 | 56.3862 | 41.4322 | 44.1406 | 54.4768 | 142.0 |
| 0.0688 | 13.0 | 10335 | 0.8384 | 55.9261 | 40.7102 | 43.5825 | 54.2394 | 142.0 |
| 0.0587 | 14.0 | 11130 | 0.8435 | 56.8475 | 41.7188 | 44.0671 | 54.9813 | 142.0 |
| 0.0529 | 15.0 | 11925 | 0.8476 | 57.4678 | 42.3804 | 45.4776 | 55.746 | 142.0 |
| 0.0469 | 16.0 | 12720 | 0.8502 | 57.1726 | 42.87 | 44.7485 | 55.6955 | 141.5926 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
subhasisj/ar-pretrained-squad-qa-minilmv2-8 | e8f1c19c48323106e0c52ad6cb28ebdef6bfb56a | 2022-05-09T14:00:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | subhasisj | null | subhasisj/ar-pretrained-squad-qa-minilmv2-8 | 0 | null | transformers | 37,367 | Entry not found |
masakhane/afrimt5_en_pcm_news | f06ddb3a3b9b3c9328931caa73b36bd2a86513c9 | 2022-05-10T11:17:37.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimt5_en_pcm_news | 0 | null | transformers | 37,368 | ---
license: afl-3.0
---
|
masakhane/afrimt5_pcm_en_news | 073b758d05323052b06efdf83e083bcd45ccb108 | 2022-05-10T11:17:40.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimt5_pcm_en_news | 0 | null | transformers | 37,369 | ---
license: afl-3.0
---
|
masakhane/afrimbart_en_pcm_news | 671433834f1cf55967eadea929b5eab9484155fc | 2022-05-10T11:17:43.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afrimbart_en_pcm_news | 0 | null | transformers | 37,370 | ---
license: afl-3.0
---
|
huxxx657/roberta-base-finetuned-squad-1 | 32054ce9e6fdbc94af78301e0ca2a437a11d6da8 | 2022-05-09T14:38:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | huxxx657 | null | huxxx657/roberta-base-finetuned-squad-1 | 0 | null | transformers | 37,371 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad-1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9043 | 1.0 | 5536 | 0.8852 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
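## How to use
A minimal sketch, not part of the original card: the checkpoint loaded without the pipeline wrapper, decoding the highest-scoring start/end span by hand. The question and context are illustrative placeholders.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "huxxx657/roberta-base-finetuned-squad-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What was fixed in the release?"
context = "The release fixed a memory leak in the scheduler and improved startup time."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span between them.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```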
|
guhuawuli/distilbert-base-uncased-finetuned-squad | b907adf940240548eafa626fca242f403cd204f7 | 2022-05-09T13:55:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | guhuawuli | null | guhuawuli/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 37,372 | Entry not found |
huxxx657/roberta-base-finetuned-squad-2 | cc8db5b48334b87ad2fcff68483d5fc8ad4ca826 | 2022-05-09T15:58:57.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | huxxx657 | null | huxxx657/roberta-base-finetuned-squad-2 | 0 | null | transformers | 37,373 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad-2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.9519 | 1.0 | 5536 | 5.9506 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jxm/wikibio_document_vanilla | 158fcb37fb3fafa5660b9638c721c2348e3b0118 | 2022-05-09T16:28:33.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | jxm | null | jxm/wikibio_document_vanilla | 0 | null | transformers | 37,374 | Entry not found |
huggingtweets/schizo_freq | ab7175dd15dff0e6f2c8e3c86404ca6e78c524ac | 2022-05-09T17:50:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/schizo_freq | 0 | 1 | transformers | 37,375 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433499110091501570/S3JJ9GdR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lukas (computer)</div>
<div style="text-align: center; font-size: 14px;">@schizo_freq</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lukas (computer).
| Data | Lukas (computer) |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 89 |
| Short tweets | 374 |
| Tweets kept | 2779 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/sl6w62wh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @schizo_freq's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gbonemg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gbonemg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/schizo_freq')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
subhasisj/zh-TAPT-MLM-MiniLM | ad50bbdaa8544df2d4912d63ed4e4a1242c01bfe | 2022-05-15T19:26:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | subhasisj | null | subhasisj/zh-TAPT-MLM-MiniLM | 0 | null | transformers | 37,376 | ---
tags:
- generated_from_trainer
model-index:
- name: zh-TAPT-MLM-MiniLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zh-TAPT-MLM-MiniLM
This model is a fine-tuned version of [subhasisj/MiniLMv2-qa-encoder](https://huggingface.co/subhasisj/MiniLMv2-qa-encoder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
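A hedged sketch of how an MLM checkpoint like this is typically queried; the mask token and example sentence are assumptions based on the card's BERT-style, fill-mask metadata, not stated in the original.

```python
from transformers import pipeline

# Fill-mask pipeline over the task-adapted MiniLM encoder; [MASK] is assumed as the mask token.
unmasker = pipeline("fill-mask", model="subhasisj/zh-TAPT-MLM-MiniLM")

for prediction in unmasker("今天天气很[MASK]。"):
    print(prediction["token_str"], prediction["score"])
```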
|
huxxx657/roberta-base-finetuned-squad-3 | e583a3942b44d69730044f0caaa2db4be794d978 | 2022-05-10T01:09:48.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | huxxx657 | null | huxxx657/roberta-base-finetuned-squad-3 | 0 | null | transformers | 37,377 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad-3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8626 | 1.0 | 5536 | 0.8358 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
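A lower-level usage sketch, assumed rather than taken from the original card: instead of the pipeline API, this loads the tokenizer and model directly and decodes the highest-scoring answer span.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Model id assumed from the card title.
name = "huxxx657/roberta-base-finetuned-squad-3"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "What learning rate was used?"
context = "The model was fine-tuned on SQuAD with a learning rate of 7e-05 for one epoch."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span between them.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```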
|
nateraw/segformer-finetuned-sidewalk-10k-steps | df1a5723c23d5da01d3b8cef8b57c9332fa82afe | 2022-05-10T02:51:29.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers"
] | null | false | nateraw | null | nateraw/segformer-finetuned-sidewalk-10k-steps | 0 | null | transformers | 37,378 | Entry not found |
suicaokhoailang/gpt-neo-vi-comments-finetuned | 5646c82332c0fa6f42b5b09dd3e90e11400ce6d8 | 2022-05-10T05:19:54.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | suicaokhoailang | null | suicaokhoailang/gpt-neo-vi-comments-finetuned | 0 | null | transformers | 37,379 | ---
license: mit
---
GPT-Neo-small for Vietnamese
Based on [NlpHUST/gpt-neo-vi-small](https://huggingface.co/NlpHUST/gpt-neo-vi-small), fine-tuned on a dataset of [10M Facebook comments](https://github.com/binhvq/news-corpus).
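A hedged generation sketch; the Vietnamese prompt and sampling settings below are illustrative assumptions, not part of the original card.

```python
from transformers import pipeline

# Text-generation pipeline over the fine-tuned GPT-Neo checkpoint.
generator = pipeline("text-generation", model="suicaokhoailang/gpt-neo-vi-comments-finetuned")

outputs = generator(
    "Hôm nay trời đẹp quá",  # illustrative Vietnamese prompt: "The weather is so nice today"
    max_length=50,
    do_sample=True,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
```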
|
kornosk/polibertweet-political-twitter-roberta-mlm-small | ec5d914a37330da351b932a82070ac06eb41d068 | 2022-05-10T03:49:55.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"transformers",
"twitter",
"masked-token-prediction",
"bertweet",
"election2020",
"politics",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | kornosk | null | kornosk/polibertweet-political-twitter-roberta-mlm-small | 0 | null | transformers | 37,380 | ---
language: "en"
tags:
- twitter
- masked-token-prediction
- bertweet
- election2020
- politics
license: "gpl-3.0"
---
# This version is trained on a smaller dataset.
See the full-size version at [PoliBERTweet](https://huggingface.co/kornosk/polibertweet-mlm).
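A hedged fill-mask sketch for this checkpoint: the `<mask>` token follows the RoBERTa convention suggested by the tags, and the example tweet is an assumption rather than an example from the original card.

```python
from transformers import pipeline

# RoBERTa-style masked-token prediction over political tweet text.
unmasker = pipeline("fill-mask", model="kornosk/polibertweet-political-twitter-roberta-mlm-small")

for prediction in unmasker("The president will <mask> the bill tomorrow."):
    print(prediction["token_str"], prediction["score"])
```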
# Citation
```bibtex
@inproceedings{kawintiranon2022polibertweet,
title = {PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter},
author = {Kawintiranon, Kornraphop and Singh, Lisa},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
year = {2022},
publisher = {European Language Resources Association}
}
``` |
fujiki/t5-11b-en2ja | 53b5a66410213debf3db41588aded87b309ca59e | 2022-05-10T05:43:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | fujiki | null | fujiki/t5-11b-en2ja | 0 | null | transformers | 37,381 | Entry not found |
nateraw/vit-lucidrains-dummy | 6e22deb3e54d371ae8dd75cab0434a6b22b24ea0 | 2022-05-10T06:10:17.000Z | [
"pytorch"
] | null | false | nateraw | null | nateraw/vit-lucidrains-dummy | 0 | null | null | 37,382 | Entry not found |
guhuawuli/opus-mt-en-ro-finetuned-en-to-ro | 14664f681461140f5db1ff59e551823c2f85307e | 2022-05-10T07:19:23.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | guhuawuli | null | guhuawuli/opus-mt-en-ro-finetuned-en-to-ro | 0 | null | transformers | 37,383 | Entry not found |
masakhane/afribyt5_pcm_en_news | 41e98287bda5380f2de3f32db999a7b220514b85 | 2022-05-10T11:26:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afribyt5_pcm_en_news | 0 | null | transformers | 37,384 | ---
license: afl-3.0
---
|
masakhane/afribyt5_en_pcm_news | c879410f8987d3747c2df89f6db03dc65d89ff42 | 2022-05-10T11:26:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/afribyt5_en_pcm_news | 0 | null | transformers | 37,385 | ---
license: afl-3.0
---
|
masakhane/byt5_en_pcm_news | 4bea60137e03076ca5946c4b2c40572c6d2d5521 | 2022-05-10T11:26:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/byt5_en_pcm_news | 0 | null | transformers | 37,386 | ---
license: afl-3.0
---
|
masakhane/byt5_pcm_en_news | ae8068f9eb92b50ac9e04e35e99c12bc895e27cf | 2022-05-10T11:26:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/byt5_pcm_en_news | 0 | null | transformers | 37,387 | ---
license: afl-3.0
---
|
masakhane/mt5_pcm_en_news | d37daa3c60c177e9be590c53e5351973b016cb22 | 2022-05-10T11:36:02.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mt5_pcm_en_news | 0 | null | transformers | 37,388 | ---
license: afl-3.0
---
|
masakhane/mt5_en_pcm_news | 2e20cd736031e826247d10de813942f0aaaef965 | 2022-05-10T11:38:12.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mt5_en_pcm_news | 0 | null | transformers | 37,389 | ---
license: afl-3.0
---
|
masakhane/mbart50_en_pcm_news | f53587c7374ea546ca741b7c26fe74bddf3a7240 | 2022-05-10T11:36:23.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_en_pcm_news | 0 | null | transformers | 37,390 | ---
license: afl-3.0
---
|
masakhane/mbart50_pcm_en_news | 615a5ad9f5d856877c72ba90a13f3eb7524186b6 | 2022-05-10T11:38:16.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/mbart50_pcm_en_news | 0 | null | transformers | 37,391 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_pcm_news | 0207274af6f4692945010aefcc97ccb25e758b60 | 2022-05-10T11:47:52.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_pcm_news | 0 | null | transformers | 37,392 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel_news | 9d81b16ee5205caf6c8b10c2680997a63e9439f0 | 2022-05-10T11:47:58.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_pcm_en_rel_news | 0 | null | transformers | 37,393 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_pcm_rel_news | 94d7089e0b4f116a941db5f55cddb00ee3577f64 | 2022-05-10T11:48:02.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_pcm_rel_news | 0 | null | transformers | 37,394 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_pcm_rel_news_ft | ba74a0247ef53241c0d238767a754bb9b93c8ffc | 2022-05-10T11:57:01.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_pcm_rel_news_ft | 0 | null | transformers | 37,395 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel_news_ft | 1fc138257f8ac2fac0967b282bf328c1756b78f3 | 2022-05-10T11:57:17.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_pcm_en_rel_news_ft | 0 | null | transformers | 37,396 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_en_pcm_rel_ft | b9aa1d8457639b6b4d656bb208ca4345e2f3b003 | 2022-05-10T11:57:06.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_en_pcm_rel_ft | 0 | null | transformers | 37,397 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel_ft | 0cad5ba2ce81013f09306baa91ebb99887081c91 | 2022-05-10T11:57:10.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_pcm_en_rel_ft | 0 | null | transformers | 37,398 | ---
license: afl-3.0
---
|
masakhane/m2m100_418M_pcm_en_rel | bc2afe4634360c55ea334fc9d5e13430a67f6f81 | 2022-05-10T12:01:23.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | masakhane | null | masakhane/m2m100_418M_pcm_en_rel | 0 | null | transformers | 37,399 | ---
license: afl-3.0
---
|