modelId (string) | sha (string) | lastModified (string) | tags (sequence) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
buvnswrn/daml-t5-training | d2d62fa9c95904557dcc14969c7e821a4e12c4e4 | 2022-04-11T05:18:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | buvnswrn | null | buvnswrn/daml-t5-training | 2 | null | transformers | 25,300 | Entry not found |
scasutt/wav2vec2-large-xlsr-53-swiss-german_toy_train_data_augment_0.1 | c643dbfa2be2a26a25571b61952f0fa7a4c7bb2e | 2022-03-26T04:39:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53-swiss-german_toy_train_data_augment_0.1 | 2 | null | transformers | 25,301 | Entry not found |
rsmonteiro/gpt2-small-portuguese-lyrics | 54f46463ade8c11d5bd2bb572736dc6a3b54a373 | 2022-05-09T22:27:17.000Z | [
"pytorch",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"pt",
"transformers",
"license:mit"
] | text-generation | false | rsmonteiro | null | rsmonteiro/gpt2-small-portuguese-lyrics | 2 | 1 | transformers | 25,302 | ---
language: pt
license: mit
---
# GPT-2 Small Portuguese Lyrics
A language model fine-tuned on a dataset of song lyrics in Portuguese.
## Model description
The model was trained on a Kaggle dataset, [“Song lyrics from 6 musical genres”](https://www.kaggle.com/neisse/scrapped-lyrics-from-6-genres/version/2), containing around 66,000 songs in Portuguese.
The model was fine-tuned from [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese) in a Colab Pro+ environment.
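A minimal usage sketch (not part of the original card), assuming the standard `transformers` text-generation pipeline; the prompt is an arbitrary example:
```
from transformers import pipeline

# Sketch: generate Portuguese lyric-style text with this checkpoint.
# The prompt below is an arbitrary example, not taken from the training data.
generator = pipeline("text-generation", model="rsmonteiro/gpt2-small-portuguese-lyrics")
outputs = generator("Hoje eu acordei pensando em você", max_length=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```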
<!---
## Intended uses & limitations
### How to use
### Limitations and bias
## Training data
## Training procedure
### Preprocessing
### BibTeX entry and citation info
-->
## Evaluation results
| Loss | Perplexity | Training Duration |
|:--------:|:----------:|:-----------------:|
| 3.301 | 27.15 | 06:45:09 |
|
eliasws/openApiT5-labeled-v1 | fd6b5aeafba9fbbf6c1f8d0ad38e0f58b200863f | 2022-03-26T15:33:23.000Z | [
"pytorch",
"t5",
"transformers"
] | null | false | eliasws | null | eliasws/openApiT5-labeled-v1 | 2 | null | transformers | 25,303 | Entry not found |
sanchit-gandhi/wav2vec2-2-bart-large-cnn-no-adapter | a62d17b59cb138959eefc4cd8fdd70ed1ec4ef45 | 2022-03-28T11:26:30.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-bart-large-cnn-no-adapter | 2 | null | transformers | 25,304 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9938
- Wer: 0.9745
## Model description
More information needed
## Intended uses & limitations
More information needed
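As a hedged usage sketch (assuming the standard `transformers` automatic-speech-recognition pipeline handles this speech-encoder-decoder checkpoint; `sample.wav` is a placeholder path, not part of the original card):
```
from transformers import pipeline

# Sketch: transcribe a local 16 kHz mono audio file with this checkpoint.
# "sample.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/wav2vec2-2-bart-large-cnn-no-adapter",
)
print(asr("sample.wav")["text"])
```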
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9301 | 2.24 | 500 | 4.6291 | 0.9601 |
| 4.4562 | 4.48 | 1000 | 4.3604 | 0.9608 |
| 3.8356 | 6.73 | 1500 | 4.0728 | 0.9530 |
| 3.2716 | 8.97 | 2000 | 3.9938 | 0.9745 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
yy642/bert-base-uncased-finetuned-mnli-512-10 | f8efb05b1bae8d29a5eb13d391e899b60d33b59e | 2022-03-27T11:06:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-512-10 | 2 | null | transformers | 25,305 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-mnli-512-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9355947399880454
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mnli-512-10
This model is a fine-tuned version of [yy642/bert-base-uncased-finetuned-mnli-512-5](https://huggingface.co/yy642/bert-base-uncased-finetuned-mnli-512-5) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4991
- Accuracy: 0.9356
## Model description
More information needed
## Intended uses & limitations
More information needed
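A hedged usage sketch (not from the original card): scoring a premise/hypothesis pair with this MNLI checkpoint, reading the label names from the checkpoint's config rather than hard-coding them:
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch: NLI-style scoring of a premise/hypothesis pair with this checkpoint.
name = "yy642/bert-base-uncased-finetuned-mnli-512-10"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "A soccer game with multiple males playing.",   # premise (example text)
    "Some men are playing a sport.",                # hypothesis (example text)
    return_tensors="pt", truncation=True,
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 4))
```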
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0514 | 1.0 | 16363 | 0.4557 | 0.9265 |
| 0.0369 | 2.0 | 32726 | 0.4548 | 0.9323 |
| 0.0249 | 3.0 | 49089 | 0.4376 | 0.9320 |
| 0.0197 | 4.0 | 65452 | 0.4991 | 0.9356 |
| 0.0135 | 5.0 | 81815 | 0.5424 | 0.9341 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.0.0
- Tokenizers 0.11.6
|
SAGAR4REAL/wav2vec2-large-hindicone | 56b03324a4b8f3464f3cb5d49cff0cdcc4c6a988 | 2022-03-27T16:20:28.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | SAGAR4REAL | null | SAGAR4REAL/wav2vec2-large-hindicone | 2 | null | transformers | 25,306 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-hindicone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-hindicone
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
202015004/MY_st1_training_shreya_fixed_27_march_labled-decoded_level2 | 9443f687ddc0a861cd93998e8ab5efcaa6aa5c03 | 2022-03-27T17:05:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/MY_st1_training_shreya_fixed_27_march_labled-decoded_level2 | 2 | null | transformers | 25,307 | Entry not found |
leonadase/bert-base-chinese-finetuned-fdRE | 8cf45c6828461089426e53ecd7ee78dd4f3591f0 | 2022-03-27T20:52:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:sem_eval2010_task8",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | leonadase | null | leonadase/bert-base-chinese-finetuned-fdRE | 2 | null | transformers | 25,308 | ---
tags:
- generated_from_trainer
datasets:
- sem_eval2010_task8
metrics:
- accuracy
model-index:
- name: bert-base-chinese-finetuned-fdRE
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval2010_task8
type: sem_eval2010_task8
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9080962800875274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-fdRE
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the sem_eval2010_task8 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 46 | 0.5571 | 0.7812 |
| No log | 2.0 | 92 | 0.4030 | 0.8621 |
| No log | 3.0 | 138 | 0.3139 | 0.8928 |
| No log | 4.0 | 184 | 0.2716 | 0.9081 |
| No log | 5.0 | 230 | 0.2564 | 0.9081 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
andyjennings/xlm-roberta-base-finetuned-panx-de | cb934554afaa40e3558aceedf9862b0fcb3b9f95 | 2022-03-27T22:54:09.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | andyjennings | null | andyjennings/xlm-roberta-base-finetuned-panx-de | 2 | null | transformers | 25,309 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8591260810195721
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
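A minimal usage sketch (assuming the standard token-classification pipeline; the German sentence is an arbitrary example, not from the original card):
```
from transformers import pipeline

# Sketch: tag named entities in a German sentence with this checkpoint,
# merging word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="andyjennings/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```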
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.257 | 1.0 | 525 | 0.1512 | 0.8302 |
| 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 |
| 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
21iridescent/distilbert-base-uncased-finetuned-squad | 6e2dc37a7fccbdeb6ee091b817ad73a604aadb25 | 2022-03-28T08:10:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | 21iridescent | null | 21iridescent/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 25,310 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3466
## Model description
More information needed
## Intended uses & limitations
More information needed
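A minimal usage sketch (assuming the standard question-answering pipeline; the question/context pair is an arbitrary example, not from the original card):
```
from transformers import pipeline

# Sketch: extractive QA with this checkpoint (trained on SQuAD v2, so some
# questions may be unanswerable).
qa = pipeline(
    "question-answering",
    model="21iridescent/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the squad_v2 dataset for three epochs.",
)
print(result["answer"], result["score"])
```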
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2739 | 1.0 | 4118 | 1.2801 |
| 1.0001 | 2.0 | 8236 | 1.2823 |
| 0.8484 | 3.0 | 12354 | 1.3466 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
timhbach/Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract | e9c4e0fa7c13e15e923d7c83643c0d7ad54e60f0 | 2022-03-28T06:27:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | timhbach | null | timhbach/Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract | 2 | null | transformers | 25,311 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0231
- eval_precision: 0.7448
- eval_recall: 0.75
- eval_f1: 0.7474
- eval_accuracy: 0.9942
- eval_runtime: 61.7618
- eval_samples_per_second: 27.201
- eval_steps_per_second: 3.4
- epoch: 3.0
- step: 5670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Katster/dummy-model | 608f24991860968bce878698ef08ac3b4c70b617 | 2022-03-28T04:13:07.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Katster | null | Katster/dummy-model | 2 | null | transformers | 25,312 | test |
rampasek/prot_bert_bfd_rosetta20aa | 8aa5d9f5b71c9a5c1cce24df3ad91ddbb39afefc | 2022-03-29T04:33:02.000Z | [
"pytorch",
"bert",
"text-classification",
"protein",
"dataset:BFD",
"dataset:Custom Rosetta",
"transformers",
"protein language model"
] | text-classification | false | rampasek | null | rampasek/prot_bert_bfd_rosetta20aa | 2 | null | transformers | 25,313 | ---
language: protein
tags:
- protein language model
datasets:
- BFD
- Custom Rosetta
---
# ProtBert-BFD finetuned on Rosetta 20AA dataset
This model is finetuned to predict Rosetta fold energy using a dataset of 100k 20AA sequences.
Current model in this repo: `prot_bert_bfd-finetuned-032722_1752`
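A hedged usage sketch, not from the original card. It assumes the checkpoint loads as a single-output `AutoModelForSequenceClassification` regression head and that the ProtBert tokenizer expects amino acids separated by spaces; the 20AA sequence below is a placeholder:
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch: predict Rosetta fold energy for one 20-residue sequence.
# Assumptions: single-value regression head; ProtBert-style tokenization
# with spaces between residues. The sequence is a placeholder example.
name = "rampasek/prot_bert_bfd_rosetta20aa"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

sequence = " ".join("MKTAYIAKQRQISFVKSHFS")  # placeholder 20AA sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits)  # assumed to hold the predicted fold energy
```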
## Performance
- 20AA sequences (1k eval set):\
Metrics: 'mae': 0.090115, 'r2': 0.991208, 'mse': 0.013034, 'rmse': 0.114165
- 40AA sequences (10k eval set):\
Metrics: 'mae': 0.537456, 'r2': 0.659122, 'mse': 0.448607, 'rmse': 0.669781
- 60AA sequences (10k eval set):\
Metrics: 'mae': 0.629267, 'r2': 0.506747, 'mse': 0.622476, 'rmse': 0.788972
## `prot_bert_bfd` from ProtTrans
The starting pretrained model is from ProtTrans, trained on 2.1 billion proteins from BFD.
It was trained on protein sequences using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in
[this repository](https://github.com/agemagician/ProtTrans).
> Created by [Ladislav Rampasek](https://rampasek.github.io)
|
Mads/xlsr-0327 | 3dc2c2e5ff887c94340703cc902d446598bb170a | 2022-03-28T07:22:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Mads | null | Mads/xlsr-0327 | 2 | null | transformers | 25,314 | Entry not found |
SAGAR4REAL/wav2vec2hindia | 9c8978001ff1b3005241f39d0dfdc365a2115d4b | 2022-03-28T08:32:52.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | SAGAR4REAL | null | SAGAR4REAL/wav2vec2hindia | 2 | null | transformers | 25,315 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2hindia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2hindia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
21iridescent/distilroberta-base-finetuned-squad2-lwt | 56b3907e8ac51f53d8f2c02dd730c631d9260a78 | 2022-03-28T11:18:44.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | 21iridescent | null | 21iridescent/distilroberta-base-finetuned-squad2-lwt | 2 | null | transformers | 25,316 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilroberta-base-finetuned-squad2-lwt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-squad2-lwt
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1702 | 1.0 | 4120 | 1.1220 |
| 0.9787 | 2.0 | 8240 | 1.0500 |
| 0.8153 | 3.0 | 12360 | 1.1356 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
{'HasAns_exact': 71.39001349527665,
'HasAns_f1': 77.71740687727831,
'HasAns_total': 5928,
'NoAns_exact': 68.59545836837678,
'NoAns_f1': 68.59545836837678,
'NoAns_total': 5945,
'best_exact': 69.9991577528847,
'best_exact_thresh': 0.0,
'best_f1': 73.1583245993857,
'best_f1_thresh': 0.0,
'exact': 69.99073528173166,
'f1': 73.1499021282327,
'total': 11873} |
Chikashi/t5-small-finetuned-cnndm | 13cf730d52abe6030eb376fd9156ea6474da5448 | 2022-03-28T14:04:38.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm | 2 | null | transformers | 25,317 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.417
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6854
- Rouge1: 24.417
- Rouge2: 11.6924
- Rougel: 20.1756
- Rougelsum: 23.0414
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
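A minimal usage sketch (assuming the standard summarization pipeline; the article text is a placeholder, not taken from cnn_dailymail):
```
from transformers import pipeline

# Sketch: summarize a short news-style passage with this checkpoint.
# ARTICLE is placeholder text.
summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm")
ARTICLE = (
    "The city council voted on Tuesday to expand the bike-lane network, "
    "citing a sharp rise in cycling commuters over the past two years."
)
print(summarizer(ARTICLE, max_length=60, min_length=10)[0]["summary_text"])
```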
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 1.8522 | 1.0 | 35890 | 1.6854 | 24.417 | 11.6924 | 20.1756 | 23.0414 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jkooup/abstract_model | fde34cd041d92d93f6833da753971f55660b38b9 | 2022-03-28T10:19:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | jkooup | null | jkooup/abstract_model | 2 | null | transformers | 25,318 | Entry not found |
Gunulhona/tbqgmodel_v2 | 25868c1eb17b61129ec68d8277312005adce228e | 2022-04-25T09:15:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Gunulhona | null | Gunulhona/tbqgmodel_v2 | 2 | null | transformers | 25,319 | Entry not found |
Chikashi/t5-small-finetuned-cnndm1 | 5902b2c79260998505843215395beb0dc15f3e8a | 2022-03-28T22:00:26.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm1 | 2 | null | transformers | 25,320 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.4246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6853
- Rouge1: 24.4246
- Rouge2: 11.6944
- Rougel: 20.1717
- Rougelsum: 23.0424
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.912 | 0.14 | 5000 | 1.7167 | 24.4232 | 11.7049 | 20.1758 | 23.0345 | 18.9997 |
| 1.8784 | 0.28 | 10000 | 1.7018 | 24.4009 | 11.6918 | 20.1561 | 23.0073 | 18.9997 |
| 1.8628 | 0.42 | 15000 | 1.6934 | 24.385 | 11.683 | 20.1285 | 22.9823 | 18.9997 |
| 1.8594 | 0.56 | 20000 | 1.6902 | 24.4407 | 11.6835 | 20.1734 | 23.0369 | 18.9996 |
| 1.8537 | 0.7 | 25000 | 1.6864 | 24.3635 | 11.658 | 20.1318 | 22.9782 | 18.9993 |
| 1.8505 | 0.84 | 30000 | 1.6856 | 24.4267 | 11.6991 | 20.1629 | 23.0361 | 18.9994 |
| 1.8505 | 0.98 | 35000 | 1.6853 | 24.4246 | 11.6944 | 20.1717 | 23.0424 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
joniponi/distilbert-base-uncased-finetuned-emotion | 1b1ec0e4471bd5e8d543d3c3e14f72fcdfbdfbb9 | 2022-03-28T19:06:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | joniponi | null | joniponi/distilbert-base-uncased-finetuned-emotion | 2 | null | transformers | 25,321 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8357
- Accuracy: 0.6309
- F1: 0.6469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9559 | 1.0 | 78 | 0.8585 | 0.6223 | 0.6363 |
| 0.7998 | 2.0 | 156 | 0.8472 | 0.6202 | 0.6354 |
| 0.7207 | 3.0 | 234 | 0.8357 | 0.6309 | 0.6469 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
gayanin/t5-small-med-term-conditional-masking-0 | 1a7bd37632aa9a03703315cf0f9cb1070ca18777 | 2022-03-29T03:19:04.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gayanin | null | gayanin/t5-small-med-term-conditional-masking-0 | 2 | null | transformers | 25,322 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-med-term-conditional-masking-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-med-term-conditional-masking-0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6688
- Rouge2 Precision: 0.694
- Rouge2 Recall: 0.4781
- Rouge2 Fmeasure: 0.5479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.9525 | 1.0 | 13915 | 0.8148 | 0.6657 | 0.4581 | 0.5252 |
| 0.8541 | 2.0 | 27830 | 0.7562 | 0.6779 | 0.4694 | 0.5371 |
| 0.8183 | 3.0 | 41745 | 0.7268 | 0.6827 | 0.4722 | 0.5405 |
| 0.8033 | 4.0 | 55660 | 0.7074 | 0.6861 | 0.4729 | 0.5419 |
| 0.7727 | 5.0 | 69575 | 0.6934 | 0.6872 | 0.4726 | 0.5419 |
| 0.7704 | 6.0 | 83490 | 0.6832 | 0.6901 | 0.4742 | 0.544 |
| 0.7485 | 7.0 | 97405 | 0.6771 | 0.6926 | 0.4772 | 0.5469 |
| 0.7528 | 8.0 | 111320 | 0.6722 | 0.6934 | 0.4782 | 0.5478 |
| 0.7535 | 9.0 | 125235 | 0.6696 | 0.6944 | 0.4782 | 0.5481 |
| 0.7444 | 10.0 | 139150 | 0.6688 | 0.694 | 0.4781 | 0.5479 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Chikashi/t5-small-finetuned-cnndm_3epoch | e36c43a267309358cc17c52e9337d2e8743eb4b6 | 2022-03-29T19:28:09.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-cnndm_3epoch | 2 | null | transformers | 25,323 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm_3epoch
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.5435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm_3epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6622
- Rouge1: 24.5435
- Rouge2: 11.7919
- Rougel: 20.2929
- Rougelsum: 23.1661
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9113 | 0.14 | 5000 | 1.7162 | 24.4374 | 11.6932 | 20.1741 | 23.0427 | 18.9997 |
| 1.8772 | 0.28 | 10000 | 1.7008 | 24.3715 | 11.6699 | 20.1387 | 22.9772 | 18.9997 |
| 1.8609 | 0.42 | 15000 | 1.6911 | 24.4174 | 11.6986 | 20.1756 | 23.0205 | 18.9997 |
| 1.8564 | 0.56 | 20000 | 1.6871 | 24.4374 | 11.6801 | 20.1663 | 23.0366 | 18.9995 |
| 1.8495 | 0.7 | 25000 | 1.6796 | 24.4019 | 11.6901 | 20.177 | 23.034 | 18.999 |
| 1.8448 | 0.84 | 30000 | 1.6787 | 24.4813 | 11.7227 | 20.1985 | 23.0847 | 18.999 |
| 1.8427 | 0.98 | 35000 | 1.6762 | 24.4905 | 11.7591 | 20.2548 | 23.1006 | 18.9993 |
| 1.8341 | 1.11 | 40000 | 1.6747 | 24.4743 | 11.7124 | 20.1782 | 23.0726 | 18.9996 |
| 1.822 | 1.25 | 45000 | 1.6753 | 24.4797 | 11.7292 | 20.2319 | 23.0816 | 18.9993 |
| 1.8262 | 1.39 | 50000 | 1.6713 | 24.4865 | 11.7079 | 20.2214 | 23.0919 | 18.9986 |
| 1.8281 | 1.53 | 55000 | 1.6702 | 24.5095 | 11.7364 | 20.2534 | 23.1264 | 18.9991 |
| 1.8228 | 1.67 | 60000 | 1.6678 | 24.5153 | 11.7595 | 20.2544 | 23.1138 | 18.9993 |
| 1.824 | 1.81 | 65000 | 1.6662 | 24.5324 | 11.7804 | 20.2671 | 23.1498 | 18.9997 |
| 1.8265 | 1.95 | 70000 | 1.6648 | 24.5795 | 11.7917 | 20.2935 | 23.1855 | 18.9992 |
| 1.8179 | 2.09 | 75000 | 1.6658 | 24.5426 | 11.804 | 20.2861 | 23.1586 | 18.9996 |
| 1.8147 | 2.23 | 80000 | 1.6646 | 24.5429 | 11.7914 | 20.2889 | 23.1542 | 18.9993 |
| 1.8026 | 2.37 | 85000 | 1.6632 | 24.5451 | 11.8045 | 20.2781 | 23.1555 | 18.9996 |
| 1.8141 | 2.51 | 90000 | 1.6643 | 24.5078 | 11.7781 | 20.2631 | 23.121 | 18.9996 |
| 1.8124 | 2.65 | 95000 | 1.6628 | 24.5728 | 11.7958 | 20.2875 | 23.178 | 18.9996 |
| 1.8098 | 2.79 | 100000 | 1.6635 | 24.5534 | 11.7998 | 20.2979 | 23.169 | 18.9996 |
| 1.8153 | 2.93 | 105000 | 1.6622 | 24.5435 | 11.7919 | 20.2929 | 23.1661 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
PSW/test_baseline_epoch_1 | 5aaac9c31b56321f4db3dde0f5c1613b13885d12 | 2022-03-29T01:30:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/test_baseline_epoch_1 | 2 | null | transformers | 25,324 | Entry not found |
beston91/gpt2-xl_ft_logits_5k_experiment | d5cbcce7984fa55004ac99105bef65382122c61c | 2022-03-29T10:27:12.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_logits_5k_experiment | 2 | null | transformers | 25,325 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_5k_experiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_5k_experiment
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.9 | 7 | 6.1556 |
| No log | 1.9 | 14 | 6.3365 |
| No log | 2.9 | 21 | 6.5909 |
| No log | 3.9 | 28 | 6.8601 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.589759826660156 |
rampasek/prot_bert_bfd_rosetta204060aa | f6db0f4a388c80917cbb863660d28b6e739c6a85 | 2022-03-29T04:35:10.000Z | [
"pytorch",
"bert",
"text-classification",
"protein",
"dataset:BFD",
"dataset:Custom Rosetta",
"transformers",
"protein language model"
] | text-classification | false | rampasek | null | rampasek/prot_bert_bfd_rosetta204060aa | 2 | null | transformers | 25,326 | ---
language: protein
tags:
- protein language model
datasets:
- BFD
- Custom Rosetta
---
# ProtBert-BFD finetuned on Rosetta 20,40,60AA dataset
This model is finetuned to predict Rosetta fold energy using a dataset of 300k protein sequences:
100k of 20AA, 100k of 40AA, and 100k of 60AA
Current model in this repo: `prot_bert_bfd-finetuned-032822_1323`
## Performance
- 20AA sequences (1k eval set):\
Metrics: 'mae': 0.100418, 'r2': 0.989028, 'mse': 0.016266, 'rmse': 0.127537
- 40AA sequences (10k eval set):\
Metrics: 'mae': 0.173888, 'r2': 0.963361, 'mse': 0.048218, 'rmse': 0.219587
- 60AA sequences (10k eval set):\
Metrics: 'mae': 0.235238, 'r2': 0.930164, 'mse': 0.088131, 'rmse': 0.2968
## `prot_bert_bfd` from ProtTrans
The starting pretrained model is from ProtTrans, trained on 2.1 billion proteins from BFD.
It was trained on protein sequences using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in
[this repository](https://github.com/agemagician/ProtTrans).
> Created by [Ladislav Rampasek](https://rampasek.github.io)
|
frtna/jwt300_mt-Italian-to-Spanish_transformers | a9ce7bc63b376d68c3a1beffcc7cf72762270009 | 2022-03-31T11:18:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:new_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | frtna | null | frtna/jwt300_mt-Italian-to-Spanish_transformers | 2 | null | transformers | 25,327 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- new_dataset
metrics:
- sacrebleu
model-index:
- name: jwt300_mt-Italian-to-Spanish_transformers
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: new_dataset
type: new_dataset
args: jwt300_mt
metrics:
- name: Sacrebleu
type: sacrebleu
value: 0.9057
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jwt300_mt-Italian-to-Spanish_transformers
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the new_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4425
- Sacrebleu: 0.9057
- Gen Len: 18.1276
## Model description
More information needed
## Intended uses & limitations
More information needed
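A hedged usage sketch (the fine-tuning prompt format, e.g. any T5 task prefix, is not documented in this card, so the raw Italian sentence is passed as-is; the sentence is an arbitrary example):
```
from transformers import pipeline

# Sketch: Italian -> Spanish translation with this T5-based checkpoint via the
# generic text2text pipeline. No task prefix is added (an assumption, since
# the prompt format used during fine-tuning is not documented here).
translator = pipeline(
    "text2text-generation",
    model="frtna/jwt300_mt-Italian-to-Spanish_transformers",
)
print(translator("La casa è molto bella.", max_length=64)[0]["generated_text"])
```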
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 2.7545 | 1.0 | 2229 | 2.4425 | 0.9057 | 18.1276 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio | 446abf34a0e511d0f9fc8ad85c1502574c0ae59a | 2022-03-30T03:35:01.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | scasutt | null | scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio | 2 | null | transformers | 25,328 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_masked_audio
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6445
- Wer: 0.4938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3761 | 1.05 | 250 | 3.4022 | 0.9954 |
| 3.0858 | 2.1 | 500 | 3.4684 | 0.9954 |
| 2.6302 | 3.15 | 750 | 1.7989 | 0.9865 |
| 1.1292 | 4.2 | 1000 | 0.8558 | 0.7355 |
| 0.8371 | 5.25 | 1250 | 0.7319 | 0.6621 |
| 0.5992 | 6.3 | 1500 | 0.6848 | 0.6147 |
| 0.5189 | 7.35 | 1750 | 0.6522 | 0.5742 |
| 0.454 | 8.4 | 2000 | 0.6601 | 0.5531 |
| 0.3896 | 9.45 | 2250 | 0.6138 | 0.5439 |
| 0.3678 | 10.5 | 2500 | 0.6436 | 0.5320 |
| 0.3232 | 11.55 | 2750 | 0.5920 | 0.5174 |
| 0.2926 | 12.6 | 3000 | 0.6615 | 0.5107 |
| 0.3041 | 13.65 | 3250 | 0.6311 | 0.5015 |
| 0.2882 | 14.7 | 3500 | 0.6182 | 0.5004 |
| 0.2868 | 15.75 | 3750 | 0.6266 | 0.4943 |
| 0.2508 | 16.81 | 4000 | 0.6587 | 0.4965 |
| 0.2563 | 17.86 | 4250 | 0.6634 | 0.4939 |
| 0.2213 | 18.91 | 4500 | 0.6441 | 0.4925 |
| 0.2255 | 19.96 | 4750 | 0.6445 | 0.4938 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa | d27d4b0b4adcfa9d8e1f47bbe6e690f7a35f342b | 2022-03-29T12:02:46.000Z | [
"pytorch",
"bert",
"pretraining",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2111.05754",
"transformers",
"fill-mask"
] | fill-mask | false | Intel | null | Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa | 2 | null | transformers | 25,329 | ---
language: en
tags: fill-mask
datasets:
- wikipedia
- bookcorpus
---
# 80% 1x4 Block Sparse BERT-Base (uncased) Prune OFA
This model was created using the Prune OFA method described in [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), presented at the ENLSP NeurIPS Workshop 2021.
For further details on the model and its results, see our paper and our implementation, available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
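A minimal usage sketch (assuming the checkpoint loads under the standard fill-mask pipeline, as its pipeline tag suggests; the sentence is an arbitrary example):
```
from transformers import pipeline

# Sketch: query the sparse checkpoint with a BERT-style [MASK] token.
unmasker = pipeline(
    "fill-mask",
    model="Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa",
)
for pred in unmasker("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 4))
```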
|
gabitoo1234/autotrain-mut_all_text-680820343 | 3f55b781642e6f5e5149ae409d3a41e58a556504 | 2022-03-29T16:09:31.000Z | [
"pytorch",
"bert",
"text-classification",
"es",
"dataset:gabitoo1234/autotrain-data-mut_all_text",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | gabitoo1234 | null | gabitoo1234/autotrain-mut_all_text-680820343 | 2 | null | transformers | 25,330 | ---
tags: autotrain
language: es
widget:
- text: "I love AutoTrain 🤗"
datasets:
- gabitoo1234/autotrain-data-mut_all_text
co2_eq_emissions: 115.48848403681228
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 680820343
- CO2 Emissions (in grams): 115.48848403681228
## Validation Metrics
- Loss: 0.3041240870952606
- Accuracy: 0.9462770369425126
- Macro F1: 0.7836898686625933
- Micro F1: 0.9462770369425126
- Weighted F1: 0.9449148298990091
- Macro Precision: 0.8344505891491089
- Micro Precision: 0.9462770369425126
- Weighted Precision: 0.9451247372908952
- Macro Recall: 0.7568785255994025
- Micro Recall: 0.9462770369425126
- Weighted Recall: 0.9462770369425126
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/gabitoo1234/autotrain-mut_all_text-680820343
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
DrishtiSharma/poem-gen-spanish-t5-small-v5 | 6127d13d62b10554aa2e069a1cf5178ef0280bac | 2022-03-29T23:25:30.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | DrishtiSharma | null | DrishtiSharma/poem-gen-spanish-t5-small-v5 | 2 | null | transformers | 25,331 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-v5
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000125
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.9366 | 0.73 | 30000 | 2.9656 |
| 2.7518 | 1.46 | 60000 | 2.9120 |
| 2.6018 | 2.19 | 90000 | 2.8870 |
| 2.5262 | 2.93 | 120000 | 2.8646 |
| 2.3886 | 3.66 | 150000 | 2.8816 |
| 2.2758 | 4.39 | 180000 | 2.8900 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DrishtiSharma/poem-gen-spanish-t5-small-v7 | 8972897c8bd7a141d140febaf4076d77d79544ee | 2022-03-30T00:34:41.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | DrishtiSharma | null | DrishtiSharma/poem-gen-spanish-t5-small-v7 | 2 | null | transformers | 25,332 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-v7
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000333
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.1716 | 0.73 | 30000 | 3.1114 |
| 2.9666 | 1.46 | 60000 | 3.0271 |
| 2.8292 | 2.19 | 90000 | 2.9531 |
| 2.7264 | 2.93 | 120000 | 2.9126 |
| 2.6057 | 3.66 | 150000 | 2.9175 |
| 2.4876 | 4.39 | 180000 | 2.9077 |
| 2.3791 | 5.12 | 210000 | 2.9240 |
| 2.3515 | 5.85 | 240000 | 2.9169 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/PointsToSentence | ae6ce929d849e2568fb846f5df59472206e2b44b | 2022-03-29T23:11:32.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/PointsToSentence | 2 | null | transformers | 25,333 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsToSentence")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsToSentence")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
It turns keywords into a sentence or sentences. |
negfir/bert_uncased_L-8_H-512_A-8 | 45713c3e352a35584fb5828f2ffd7bdc628bfaa3 | 2022-04-06T01:40:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-8_H-512_A-8 | 2 | null | transformers | 25,334 | Entry not found |
CenIA/albert-tiny-spanish-finetuned-qa-tar | 5d22d0c0c11de46fbe8dbc1d86a1926f8e07c2de | 2022-03-30T00:28:43.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-tiny-spanish-finetuned-qa-tar | 2 | null | transformers | 25,335 | Entry not found |
negfir/bert_uncased_L-4_H-512_A-8 | 18ab1026d3bce2aaa01182c5ce2b0f82bde15b53 | 2022-04-06T04:05:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-4_H-512_A-8 | 2 | null | transformers | 25,336 | Entry not found |
BigSalmon/InformalToFormalLincoln33 | dbd58c6b280b77634702222b3b3cdc7aff436262 | 2022-03-30T01:24:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln33 | 2 | null | transformers | 25,337 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln33")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln33")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |
negfir/bert_uncased_L-2_H-768_A-12 | 48880f926ae1b632f3a471871d6168ed22303d5e | 2022-04-06T04:41:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-2_H-768_A-12 | 2 | null | transformers | 25,338 | Entry not found |
kijun/mas-kobart-v1 | e12da1f8f35fa3340f33091d6cbfb03ff4624639 | 2022-05-17T06:41:05.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
] | feature-extraction | false | kijun | null | kijun/mas-kobart-v1 | 2 | null | transformers | 25,339 | Entry not found |
Pavithra/codeparrot-ds-sample-gpt-small-neo | f1dad3948130d37a281da3074f3277a04f00a954 | 2022-04-05T20:04:16.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | Pavithra | null | Pavithra/codeparrot-ds-sample-gpt-small-neo | 2 | null | transformers | 25,340 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-gpt-small-neo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-gpt-small-neo
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.11.6
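## Usage
A minimal sketch of loading this fine-tuned GPT-Neo checkpoint for generation with the Transformers `pipeline` API; the prompt and sampling settings below are illustrative assumptions, not values documented for this model:
```python
from transformers import pipeline
# load the fine-tuned GPT-Neo checkpoint for causal text generation
generator = pipeline("text-generation", model="Pavithra/codeparrot-ds-sample-gpt-small-neo")
# illustrative code-style prompt; adjust generation settings to taste
outputs = generator("def read_csv(path):", max_new_tokens=40, do_sample=True, top_k=50)
print(outputs[0]["generated_text"])
```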
|
saaduddin/xlnet-nano-news | 45a8aa96c377ea57e96316fdeeffc91717b8faa9 | 2022-03-30T07:13:02.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | saaduddin | null | saaduddin/xlnet-nano-news | 2 | null | transformers | 25,341 | ---
license: mit
---
|
yinde/dummy-model | 75d80921e091a48f5e347156264eb0b899f8fd11 | 2022-03-30T11:59:15.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | yinde | null | yinde/dummy-model | 2 | null | transformers | 25,342 | Fake news classifier
This model trains a text classifier to detect fake news articles.
It uses the distilbert-base-uncased-finetuned-sst-2-english pretrained model and trains it on the
fake and real news dataset from Kaggle (https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset) |
SAGAR4REAL/wav2vec2hindiasr | 872780dc341bdb6527fc62bf7eb4d091afabbae4 | 2022-03-30T17:32:46.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | SAGAR4REAL | null | SAGAR4REAL/wav2vec2hindiasr | 2 | 1 | transformers | 25,343 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2hindiasr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2hindiasr
This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
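## Usage
A minimal transcription sketch with the `automatic-speech-recognition` pipeline; the audio path is a placeholder, and the input is assumed to be a 16 kHz mono recording of Hindi speech:
```python
from transformers import pipeline
# CTC-based wav2vec2 checkpoint fine-tuned for Hindi speech recognition
asr = pipeline("automatic-speech-recognition", model="SAGAR4REAL/wav2vec2hindiasr")
# placeholder path: any 16 kHz mono WAV file with Hindi speech
print(asr("sample_hindi.wav")["text"])
```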
|
negfir/bert_uncased_L-10_H-128_A-2 | 30a793f713d035923638421f7b03c048cfe8a3c5 | 2022-04-06T00:31:36.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | negfir | null | negfir/bert_uncased_L-10_H-128_A-2 | 2 | null | transformers | 25,344 | Entry not found |
imanueldrexel/fake-news-classifier | ef90306ad372c818cf1cc84a0c476b646ec1f36f | 2022-04-05T00:52:21.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | imanueldrexel | null | imanueldrexel/fake-news-classifier | 2 | null | transformers | 25,345 | fake-news-classifier |
nikhil6041/wav2vec2-commonvoice-tamil | 6aa8b2d84a8650b868aab329db047114fd84211a | 2022-03-31T09:24:01.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | automatic-speech-recognition | false | nikhil6041 | null | nikhil6041/wav2vec2-commonvoice-tamil | 2 | null | transformers | 25,346 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-commonvoice-tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-commonvoice-tamil
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-tamil-tam-250](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-tamil-tam-250) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3415
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.384 | 1.69 | 200 | 3.3400 | 1.0 |
| 3.3085 | 3.39 | 400 | 3.3609 | 1.0 |
| 3.3008 | 5.08 | 600 | 3.3331 | 1.0 |
| 3.2852 | 6.78 | 800 | 3.3492 | 1.0 |
| 3.2908 | 8.47 | 1000 | 3.3318 | 1.0 |
| 3.2865 | 10.17 | 1200 | 3.3501 | 1.0 |
| 3.2826 | 11.86 | 1400 | 3.3403 | 1.0 |
| 3.2875 | 13.56 | 1600 | 3.3335 | 1.0 |
| 3.2899 | 15.25 | 1800 | 3.3311 | 1.0 |
| 3.2755 | 16.95 | 2000 | 3.3617 | 1.0 |
| 3.2877 | 18.64 | 2200 | 3.3317 | 1.0 |
| 3.2854 | 20.34 | 2400 | 3.3560 | 1.0 |
| 3.2878 | 22.03 | 2600 | 3.3332 | 1.0 |
| 3.2766 | 23.73 | 2800 | 3.3317 | 1.0 |
| 3.2943 | 25.42 | 3000 | 3.3737 | 1.0 |
| 3.2845 | 27.12 | 3200 | 3.3347 | 1.0 |
| 3.2765 | 28.81 | 3400 | 3.3415 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
unjustify/autotrain-commonsence-689620825 | 33cde22adfadcc5737d42f15e95601cdd1f2ce50 | 2022-03-31T06:38:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:unjustify/autotrain-data-commonsence",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | unjustify | null | unjustify/autotrain-commonsence-689620825 | 2 | null | transformers | 25,347 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- unjustify/autotrain-data-commonsence
co2_eq_emissions: 20.656741915705204
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 689620825
- CO2 Emissions (in grams): 20.656741915705204
## Validation Metrics
- Loss: 0.7315372824668884
- Accuracy: 0.6354949675117849
- Precision: 0.63792194092827
- Recall: 0.6191451241361658
- AUC: 0.6912165223485615
- F1: 0.6283932978308872
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/unjustify/autotrain-commonsence-689620825
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("unjustify/autotrain-commonsence-689620825", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("unjustify/autotrain-commonsence-689620825", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
AnonymousSub/news_fpdm_triplet_models_roberta | 87414d6e30590fa1b4d6638e0f0f8d9327b5c371 | 2022-03-31T08:31:29.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/news_fpdm_triplet_models_roberta | 2 | null | transformers | 25,348 | Entry not found |
AnonymousSub/news_fpdm_models_roberta | 2f14265879213aa683536267f4e7af272c022355 | 2022-03-31T08:32:19.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/news_fpdm_models_roberta | 2 | null | transformers | 25,349 | Entry not found |
AnonymousSub/news_fpdm_triplet_models_bert | 5151ba2862387c1471b6887c254ae77bac29d87c | 2022-03-31T08:33:16.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/news_fpdm_triplet_models_bert | 2 | null | transformers | 25,350 | Entry not found |
Neulvo/bert-finetuned-squad | 4f813a1942d0063b82f266f5362162ece1e03472 | 2022-03-31T12:08:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Neulvo | null | Neulvo/bert-finetuned-squad | 2 | null | transformers | 25,351 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
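## Usage
A minimal extractive question-answering sketch; the question and context below are illustrative only:
```python
from transformers import pipeline
qa = pipeline("question-answering", model="Neulvo/bert-finetuned-squad")
# illustrative inputs; the model extracts the answer span from the context
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a BERT base cased checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```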
|
benwoodyear/t5-base-cryptic-crosswords | 9b5ead278bdb0ad0866e368fdd892baf1b0c9ecb | 2022-03-31T21:11:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | benwoodyear | null | benwoodyear/t5-base-cryptic-crosswords | 2 | null | transformers | 25,352 | ---
license: afl-3.0
---
|
emreguleryuz/models | fcb5b59d6b7a853810a56b266b99f9b0346d0d71 | 2022-04-22T13:15:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | emreguleryuz | null | emreguleryuz/models | 2 | null | transformers | 25,353 | Entry not found |
Yaxin/xlm-roberta-base-amazon-en-es-fr-mlm | 16830bced524c801c7cb6c5642511c3824fd7961 | 2022-04-01T05:28:33.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"dataset:Yaxin/amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Yaxin | null | Yaxin/xlm-roberta-base-amazon-en-es-fr-mlm | 2 | null | transformers | 25,354 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- Yaxin/amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-amazon-en-es-fr-mlm
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: Yaxin/amazon_reviews_multi
type: Yaxin/amazon_reviews_multi
metrics:
- name: Accuracy
type: accuracy
value: 0.6951035447140035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-amazon-en-es-fr-mlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the Yaxin/amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3936
- Accuracy: 0.6951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
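## Usage
A minimal masked-language-modeling sketch; the review sentence is illustrative, and the mask token is read from the tokenizer so the correct XLM-R mask symbol is used:
```python
from transformers import pipeline
fill = pipeline("fill-mask", model="Yaxin/xlm-roberta-base-amazon-en-es-fr-mlm")
# build an example review sentence around the model's mask token
sentence = f"This product is really {fill.tokenizer.mask_token}."
for prediction in fill(sentence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```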
|
AnonymousSub/news_fpdm_hier_models_roberta | 39c22dc2daff2d3a98098901f27702df0c8a5e10 | 2022-03-31T17:10:49.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/news_fpdm_hier_models_roberta | 2 | null | transformers | 25,355 | Entry not found |
AnonymousSub/news_fpdm_hier_models_bert | 90fcdc2b74f8d08d5e8a6b5b755f0b9054082b3b | 2022-03-31T17:11:43.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/news_fpdm_hier_models_bert | 2 | null | transformers | 25,356 | Entry not found |
benwoodyear/byt5-base-cryptic-crosswords | be3c39cfac8c6efa1a0f2801c9dcf968f3c5c45f | 2022-03-31T22:03:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | benwoodyear | null | benwoodyear/byt5-base-cryptic-crosswords | 2 | null | transformers | 25,357 | Entry not found |
AAAA-4/DialoGPT-small-player_03 | 1281d567818733b0c684e7142bc00302620bdad2 | 2022-04-02T06:43:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AAAA-4 | null | AAAA-4/DialoGPT-small-player_03 | 2 | null | transformers | 25,358 | ---
tags:
- conversational
---
# Run 3 :)
# An exceedingly special thanks to Lynn Zheng for the tutorial on how to do this. |
joniponi/multilabel_inpatient_comments_4labels | 6994676a75d85923c5a7c4c25d33c527f8fc8577 | 2022-03-31T22:50:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | joniponi | null | joniponi/multilabel_inpatient_comments_4labels | 2 | null | transformers | 25,359 | Entry not found |
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-10 | 3216942fb3c2e495aea19b7bf9d56eb2fbed6d58 | 2022-04-01T06:04:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-rte-wnli-10 | 2 | null | transformers | 25,360 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-mnli-rte-wnli-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mnli-rte-wnli-10
This model is a fine-tuned version of [yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5](https://huggingface.co/yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5876
- Accuracy: 0.9206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0641 | 1.0 | 16558 | 0.4528 | 0.9138 |
| 0.0479 | 2.0 | 33116 | 0.5116 | 0.9153 |
| 0.0363 | 3.0 | 49674 | 0.5660 | 0.9138 |
| 0.0244 | 4.0 | 66232 | 0.5876 | 0.9206 |
| 0.0145 | 5.0 | 82790 | 0.6156 | 0.9192 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.0.0
- Tokenizers 0.11.6
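## Usage
A minimal sketch for scoring a premise/hypothesis pair; the sentences are illustrative, and the label names are taken from the model config since the label mapping is not documented in this card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "yy642/bert-base-uncased-finetuned-mnli-rte-wnli-10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
# encode the sentence pair as BERT expects: [CLS] premise [SEP] hypothesis [SEP]
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```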
|
CenIA/albert-base-spanish-finetuned-qa-tar | 6939b096a06d5380eab03f840a99847255150ed7 | 2022-04-01T14:53:47.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-base-spanish-finetuned-qa-tar | 2 | null | transformers | 25,361 | Entry not found |
joniponi/discharge-classifier | 8f8b1e75878f9cc6431b9aeeeca79d26efacde5c | 2022-04-01T06:33:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | joniponi | null | joniponi/discharge-classifier | 2 | null | transformers | 25,362 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: discharge-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# discharge-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2473
- Accuracy: 0.9172
- F1: 0.9169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5607 | 1.0 | 40 | 0.4780 | 0.7643 | 0.7654 |
| 0.3673 | 2.0 | 80 | 0.2975 | 0.8854 | 0.8849 |
| 0.2424 | 3.0 | 120 | 0.2473 | 0.9172 | 0.9169 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
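## Usage
A minimal classification sketch; the example sentence is illustrative, and the meaning of the predicted labels is not documented in this card:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="joniponi/discharge-classifier")
# illustrative input; replace with a real discharge-related comment
print(classifier("The patient was discharged with clear follow-up instructions."))
```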
|
z5ying/distilgpt2-finetuned-wikitext2 | 5959cee10edafd5b42bbd098f822408267f79f10 | 2022-04-01T10:47:57.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | z5ying | null | z5ying/distilgpt2-finetuned-wikitext2 | 2 | null | transformers | 25,363 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [z5ying/distilgpt2-finetuned-wikitext2](https://huggingface.co/z5ying/distilgpt2-finetuned-wikitext2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 118 | 3.0306 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
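## Usage
A minimal text-generation sketch; the prompt and generation settings below are illustrative assumptions:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="z5ying/distilgpt2-finetuned-wikitext2")
outputs = generator("The history of natural language processing", max_new_tokens=30, do_sample=True)
print(outputs[0]["generated_text"])
```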
|
Francesco/regnet-y-10b-seer | 6716bc3a677fee0c9d0da160d59e3785f4bde858 | 2022-04-01T09:23:32.000Z | [
"pytorch",
"regnet",
"feature-extraction",
"transformers"
] | feature-extraction | false | Francesco | null | Francesco/regnet-y-10b-seer | 2 | null | transformers | 25,364 | Entry not found |
adderplus/separations_for_collab-cryptic-crosswords | 72d2ea1dae68f1edc72685754540f060fdfb25da | 2022-04-01T09:30:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | adderplus | null | adderplus/separations_for_collab-cryptic-crosswords | 2 | null | transformers | 25,365 | Entry not found |
jfealko/wav2vec2-large-xls-r-300m-irish-colab_test | e40276b59d530376d0a51b8977e14f79412eaea7 | 2022-04-01T13:23:06.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jfealko | null | jfealko/wav2vec2-large-xls-r-300m-irish-colab_test | 2 | null | transformers | 25,366 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-irish-colab_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-irish-colab_test
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7839
- Wer: 0.6220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 90
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.0428 | 2.94 | 50 | 4.1311 | 1.0 |
| 3.2917 | 5.88 | 100 | 3.1468 | 1.0 |
| 3.0221 | 8.82 | 150 | 2.9848 | 1.0 |
| 2.9795 | 11.76 | 200 | 2.9567 | 1.0 |
| 2.9379 | 14.71 | 250 | 2.9463 | 1.0 |
| 2.9068 | 17.65 | 300 | 2.8330 | 1.0 |
| 2.5088 | 20.59 | 350 | 1.9807 | 0.9535 |
| 1.6188 | 23.53 | 400 | 1.4254 | 0.8398 |
| 1.0435 | 26.47 | 450 | 1.3668 | 0.7807 |
| 0.7212 | 29.41 | 500 | 1.3914 | 0.7476 |
| 0.5456 | 32.35 | 550 | 1.5495 | 0.7470 |
| 0.4297 | 35.29 | 600 | 1.4751 | 0.6960 |
| 0.3533 | 38.24 | 650 | 1.5157 | 0.6909 |
| 0.2899 | 41.18 | 700 | 1.5394 | 0.6879 |
| 0.2529 | 44.12 | 750 | 1.6186 | 0.6903 |
| 0.2413 | 47.06 | 800 | 1.6386 | 0.6954 |
| 0.2113 | 50.0 | 850 | 1.6906 | 0.6778 |
| 0.1769 | 52.94 | 900 | 1.6918 | 0.6575 |
| 0.1622 | 55.88 | 950 | 1.7313 | 0.6572 |
| 0.1564 | 58.82 | 1000 | 1.7701 | 0.6510 |
| 0.1637 | 61.76 | 1050 | 1.6800 | 0.6444 |
| 0.148 | 64.71 | 1100 | 1.7306 | 0.6477 |
| 0.1385 | 67.65 | 1150 | 1.7605 | 0.6408 |
| 0.1264 | 70.59 | 1200 | 1.7534 | 0.6244 |
| 0.1157 | 73.53 | 1250 | 1.7906 | 0.6381 |
| 0.1027 | 76.47 | 1300 | 1.7803 | 0.6265 |
| 0.1061 | 79.41 | 1350 | 1.7617 | 0.6259 |
| 0.0934 | 82.35 | 1400 | 1.7649 | 0.6253 |
| 0.0904 | 85.29 | 1450 | 1.7713 | 0.6187 |
| 0.0911 | 88.24 | 1500 | 1.7839 | 0.6220 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
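## Usage
A minimal transcription sketch with the `automatic-speech-recognition` pipeline; the audio path is a placeholder, and the input is assumed to be a 16 kHz mono recording of Irish speech:
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="jfealko/wav2vec2-large-xls-r-300m-irish-colab_test")
# placeholder path to a 16 kHz mono WAV recording
print(asr("sample_irish.wav")["text"])
```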
|
fmeng/passage_en_selection | 196a22b281e2b0670367aaf75e6050c061f8104c | 2022-04-01T12:43:58.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | fmeng | null | fmeng/passage_en_selection | 2 | null | transformers | 25,367 | Entry not found |
CenIA/bert-base-spanish-wwm-cased-finetuned-qa-tar | 8419c6b89c52441ca3c1717dbbcb47ee9d5efe7e | 2022-04-01T21:02:59.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/bert-base-spanish-wwm-cased-finetuned-qa-tar | 2 | null | transformers | 25,368 | Entry not found |
danringwald/acoustic | 2b55234cf9894c38ebf384e68298f4f61999eb28 | 2022-04-01T15:53:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | danringwald | null | danringwald/acoustic | 2 | null | transformers | 25,369 | Entry not found |
CenIA/albert-xxlarge-spanish-finetuned-qa-tar | 8407c114e3c96c548039245319c9b0c54e9f9948 | 2022-04-05T17:20:33.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-xxlarge-spanish-finetuned-qa-tar | 2 | null | transformers | 25,370 | Entry not found |
DrishtiSharma/poem-gen-spanish-t5-small-d2 | 92881c58e2e60fb17bc1f7a83771c48c950eae8a | 2022-04-01T22:38:26.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | DrishtiSharma | null | DrishtiSharma/poem-gen-spanish-t5-small-d2 | 2 | null | transformers | 25,371 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-d2
This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9027
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.223 | 0.73 | 30000 | 3.1479 |
| 3.0109 | 1.46 | 60000 | 3.0544 |
| 2.8649 | 2.19 | 90000 | 2.9730 |
| 2.7603 | 2.93 | 120000 | 2.9301 |
| 2.6343 | 3.66 | 150000 | 2.9188 |
| 2.5094 | 4.39 | 180000 | 2.9064 |
| 2.391 | 5.12 | 210000 | 2.9073 |
| 2.3592 | 5.85 | 240000 | 2.9022 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
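## Usage
A minimal generation sketch with the `text2text-generation` pipeline; the expected prompt format is not documented in this card, so the Spanish phrase below is only an illustrative input:
```python
from transformers import pipeline
poem_gen = pipeline("text2text-generation", model="DrishtiSharma/poem-gen-spanish-t5-small-d2")
# illustrative prompt; the card does not specify the training prompt format
print(poem_gen("poema sobre el mar y la luna", max_new_tokens=64)[0]["generated_text"])
```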
|
DrishtiSharma/poem-gen-spanish-t5-small-d3 | 73567d03998bdbf67ac064af2ffe757304e921c4 | 2022-04-02T11:12:23.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | DrishtiSharma | null | DrishtiSharma/poem-gen-spanish-t5-small-d3 | 2 | null | transformers | 25,372 | Entry not found |
DrishtiSharma/poem-gen-spanish-t5-small-d5 | 0c2f2d4de658f67b17d6c3e68e09ebc635c49aa2 | 2022-04-02T11:12:46.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | DrishtiSharma | null | DrishtiSharma/poem-gen-spanish-t5-small-d5 | 2 | null | transformers | 25,373 | Entry not found |
Chikashi/t5-small-finetuned-wikihow_3epoch | 52128bac11fb8ff5ba6d55f846887f754830c8de | 2022-04-02T07:42:15.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wikihow",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chikashi | null | Chikashi/t5-small-finetuned-wikihow_3epoch | 2 | null | transformers | 25,374 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 25.5784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5163
- Rouge1: 25.5784
- Rouge2: 8.9929
- Rougel: 21.5345
- Rougelsum: 24.9382
- Gen Len: 18.384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.9421 | 0.25 | 5000 | 2.6545 | 23.2336 | 7.5502 | 19.5899 | 22.5521 | 18.4076 |
| 2.8411 | 0.51 | 10000 | 2.6103 | 24.3524 | 8.2068 | 20.5238 | 23.6679 | 18.2606 |
| 2.7983 | 0.76 | 15000 | 2.5836 | 24.8169 | 8.4826 | 20.8765 | 24.1686 | 18.3211 |
| 2.7743 | 1.02 | 20000 | 2.5627 | 24.9904 | 8.5625 | 21.0344 | 24.3416 | 18.3786 |
| 2.7452 | 1.27 | 25000 | 2.5508 | 25.1497 | 8.6872 | 21.152 | 24.4751 | 18.3524 |
| 2.7353 | 1.53 | 30000 | 2.5384 | 25.2909 | 8.7408 | 21.2344 | 24.629 | 18.4453 |
| 2.7261 | 1.78 | 35000 | 2.5322 | 25.3748 | 8.7802 | 21.312 | 24.7191 | 18.3754 |
| 2.7266 | 2.03 | 40000 | 2.5265 | 25.4095 | 8.8915 | 21.3871 | 24.7685 | 18.4013 |
| 2.706 | 2.29 | 45000 | 2.5211 | 25.4372 | 8.8926 | 21.4124 | 24.7902 | 18.3776 |
| 2.7073 | 2.54 | 50000 | 2.5176 | 25.4925 | 8.9668 | 21.5103 | 24.8608 | 18.4303 |
| 2.703 | 2.8 | 55000 | 2.5163 | 25.5784 | 8.9929 | 21.5345 | 24.9382 | 18.384 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
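## Usage
A minimal summarization sketch; the input paragraph is illustrative, and because the base model is T5, a "summarize: " prefix may or may not be expected depending on how the data was preprocessed:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-wikihow_3epoch")
article = (
    "To keep houseplants healthy, water them when the top inch of soil feels dry, "
    "place them near indirect sunlight, and wipe dust off the leaves every few weeks."
)
print(summarizer(article, max_length=40, min_length=5)[0]["summary_text"])
```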
|
AnonymousSub/fpdm_bert_FT_newsqa | ac5d3eef1167d7afbb0ad7f854d40187c0f800c7 | 2022-04-01T21:50:04.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_bert_FT_newsqa | 2 | null | transformers | 25,375 | Entry not found |
AnonymousSub/news_pretrain_roberta_FT_newsqa | c086201fcc2e526f9b69e2e72d0bd255d07bd91c | 2022-04-01T21:52:56.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/news_pretrain_roberta_FT_newsqa | 2 | null | transformers | 25,376 | Entry not found |
AnonymousSub/fpdm_hier_bert_FT_newsqa | c32768239fa5f823ee96856c7f1e02bf0ab9616a | 2022-04-01T21:55:46.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/fpdm_hier_bert_FT_newsqa | 2 | null | transformers | 25,377 | Entry not found |
junnyu/flash_small_wwm_cluecorpussmall | 86ead2321140d0fd575d2a42a35ba688346be01c | 2022-04-02T09:46:27.000Z | [
"pytorch",
"flash",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/flash_small_wwm_cluecorpussmall | 2 | null | transformers | 25,378 | ---
license: mit
inference: False
---
# training logs
- https://wandb.ai/junyu/huggingface/runs/1jg2jlgt
# install
- https://github.com/JunnYu/FLASHQuad_pytorch
# usage
```python
import torch
from flash import FLASHForMaskedLM
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("junnyu/flash_small_wwm_cluecorpussmall")
model = FLASHForMaskedLM.from_pretrained("junnyu/flash_small_wwm_cluecorpussmall")
model.eval()
text = "天气预报说今天的天[MASK]很好,那么我[MASK]一起去公园玩吧!"
inputs = tokenizer(text, return_tensors="pt", padding="max_length", max_length=512, return_token_type_ids=False) # this must be 512, otherwise the results may be wrong.
with torch.no_grad():
pt_outputs = model(**inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
val,idx = pt_outputs[i].softmax(-1).topk(k=5)
tokens = tokenizer.convert_ids_to_tokens(idx)
new_tokens = []
for v,t in zip(val.cpu(),tokens):
new_tokens.append(f"{t}+{round(v.item(),4)}")
pt_outputs_sentence += "[" + "||".join(new_tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 天气预报说今天的天[气+0.994||天+0.0015||空+0.0014||晴+0.0005||阳+0.0003]很好,那么我[们+0.9563||就+0.0381||也+0.0032||俩+0.0004||来+0.0002]一起去公园玩吧!
``` |
nikhil6041/wav2vec2-large-xls-r-300m-hindi-colab | 4e8d3ac5a6e86ab87ef2a6ebd80cfe152bef1897 | 2022-04-02T06:04:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nikhil6041 | null | nikhil6041/wav2vec2-large-xls-r-300m-hindi-colab | 2 | null | transformers | 25,379 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
202015004/Teacher_model_2_april_epoch30 | c111a736b3469529ceefbf0b343c3f9a6698a132 | 2022-04-02T15:18:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | 202015004 | null | 202015004/Teacher_model_2_april_epoch30 | 2 | null | transformers | 25,380 | Entry not found |
vicl/distilbert-base-uncased-finetuned-mrpc | 50125c8e632d1122530579f420eca79b62b69461 | 2022-04-02T21:56:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | vicl | null | vicl/distilbert-base-uncased-finetuned-mrpc | 2 | null | transformers | 25,381 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8480392156862745
- name: F1
type: f1
value: 0.89419795221843
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4044
- Accuracy: 0.8480
- F1: 0.8942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.3830 | 0.8162 | 0.8673 |
| No log | 2.0 | 460 | 0.3957 | 0.8456 | 0.8952 |
| 0.4307 | 3.0 | 690 | 0.4044 | 0.8480 | 0.8942 |
| 0.4307 | 4.0 | 920 | 0.5649 | 0.8407 | 0.8915 |
| 0.1739 | 5.0 | 1150 | 0.5983 | 0.8480 | 0.8956 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
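## Usage
A minimal paraphrase-detection sketch for this MRPC fine-tune; the sentence pair is illustrative, and the label names come from the model config (they may simply be LABEL_0/LABEL_1 if no mapping was saved):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "vicl/distilbert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
# encode the two sentences as a pair, the format used for MRPC
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```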
|
jjezabek/bert-base-uncased-imdb-all-pert | 05857dac55ea27c9e558777c3be3f026f761c8a6 | 2022-04-03T04:58:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | jjezabek | null | jjezabek/bert-base-uncased-imdb-all-pert | 2 | null | transformers | 25,382 | ---
license: mit
---
|
aypan17/distilgpt2-imdb-pos | a094d251ece2f1d4bead2b06a75bdff995eb1bcb | 2022-04-03T06:15:02.000Z | [
"pytorch",
"gpt2",
"transformers",
"license:ms-pl"
] | null | false | aypan17 | null | aypan17/distilgpt2-imdb-pos | 2 | null | transformers | 25,383 | ---
license: ms-pl
---
|
munozariasjm/writter_distilgpt_hep | e9cfe351657dd632194e0aa878d7ce6bea2273bd | 2022-04-20T11:20:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | munozariasjm | null | munozariasjm/writter_distilgpt_hep | 2 | null | transformers | 25,384 | Entry not found |
AnonymousSub/bert_FT_new_newsqa | 1e689fc7bae42a4efccecc985a4358acb41e551c | 2022-04-03T11:34:15.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/bert_FT_new_newsqa | 2 | null | transformers | 25,385 | Entry not found |
BigSalmon/InformalToFormalLincoln35 | a1dbebe17cfb876ab8bd17de2a4d0b4a206313ea | 2022-04-17T17:44:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln35 | 2 | null | transformers | 25,386 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln35")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln35")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |
BigSalmon/GPTNeo1.3BPointsLincolnFormalInformal | a9d740335d9e74579d6ce9ef0e2a4601109d736e | 2022-04-10T20:04:26.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPTNeo1.3BPointsLincolnFormalInformal | 2 | null | transformers | 25,387 | It works worse than the GPT-2 Large & Medium models I have been training, because I don't have the compute needed to train on the entire dataset I have. I had to resort to using only parts of it.
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo1.3BPointsLincolnFormalInformal")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo1.3BPointsLincolnFormalInformal")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
Points and keywords. Informal to formal. |
microsoft/cvt-13-384-22k | 92fbfe3932e45474055beb1a180ee23c68ee5626 | 2022-05-18T16:18:02.000Z | [
"pytorch",
"cvt",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.15808",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/cvt-13-384-22k | 2 | null | transformers | 25,388 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Convolutional Vision Transformer (CvT)
CvT-13 model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Wu et al. and first released in [this repository](https://github.com/microsoft/CvT).
Disclaimer: The team releasing CvT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, CvtForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-13-384-22k')
model = CvtForImageClassification.from_pretrained('microsoft/cvt-13-384-22k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
``` |
frahman/bert-base-uncased-issues-128 | 1a1905245c657425bfc85eacd2b361f98eacf205 | 2022-04-04T15:11:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | frahman | null | frahman/bert-base-uncased-issues-128 | 2 | null | transformers | 25,389 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
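For reference, the listed settings map onto `TrainingArguments` roughly as sketched below (the output directory is a placeholder; the original training script is not part of this card):
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-issues-128",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=16,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults.
)
```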
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0984 | 1.0 | 291 | 1.7081 |
| 1.6512 | 2.0 | 582 | 1.4289 |
| 1.4854 | 3.0 | 873 | 1.3845 |
| 1.3924 | 4.0 | 1164 | 1.3844 |
| 1.3375 | 5.0 | 1455 | 1.1944 |
| 1.2969 | 6.0 | 1746 | 1.2848 |
| 1.2443 | 7.0 | 2037 | 1.2678 |
| 1.1998 | 8.0 | 2328 | 1.2151 |
| 1.1805 | 9.0 | 2619 | 1.1638 |
| 1.1396 | 10.0 | 2910 | 1.2131 |
| 1.1333 | 11.0 | 3201 | 1.1966 |
| 1.0974 | 12.0 | 3492 | 1.1687 |
| 1.0822 | 13.0 | 3783 | 1.2283 |
| 1.0736 | 14.0 | 4074 | 1.1640 |
| 1.0595 | 15.0 | 4365 | 1.1207 |
| 1.0515 | 16.0 | 4656 | 1.2551 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
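## How to use
The card does not include an inference example; a minimal sketch using the standard `fill-mask` pipeline is given below (the example sentence is illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline
fill_mask = pipeline("fill-mask", model="frahman/bert-base-uncased-issues-128")

# Example issue-style sentence; [MASK] marks the token to predict
predictions = fill_mask("The model fails to [MASK] when the input sequence is too long.")
for pred in predictions:
    print(pred["token_str"], round(pred["score"], 3))
```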
|
mdrame/fatima_fellowship_roberta_small | ea96f82efe5e7ee230b388ee38992d488b83272b | 2022-04-04T14:38:30.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | mdrame | null | mdrame/fatima_fellowship_roberta_small | 2 | null | transformers | 25,390 | Entry not found |
nepp1d0/ProtBert-finetuned-proteinBindingDB | 61a54115ff535072f39b55e6ed4e1963c47a4904 | 2022-05-08T22:24:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | nepp1d0 | null | nepp1d0/ProtBert-finetuned-proteinBindingDB | 2 | null | transformers | 25,391 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ProtBert-finetuned-proteinBindingDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ProtBert-finetuned-proteinBindingDB
This model is a fine-tuned version of [Rostlab/prot_bert](https://huggingface.co/Rostlab/prot_bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5764
- Accuracy: 0.885
- F1: 0.8459
- Precision: 0.8255
- Recall: 0.885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8056 | 1.0 | 5000 | 1.5153 | 0.745 | 0.6391 | 0.5606 | 0.745 |
| 0.7873 | 2.0 | 10000 | 0.5976 | 0.865 | 0.8267 | 0.8063 | 0.865 |
| 0.7427 | 3.0 | 15000 | 0.6316 | 0.875 | 0.8364 | 0.8176 | 0.875 |
| 1.0022 | 4.0 | 20000 | 0.6766 | 0.85 | 0.8112 | 0.7951 | 0.85 |
| 0.7379 | 5.0 | 25000 | 0.6181 | 0.865 | 0.8267 | 0.8063 | 0.865 |
| 0.6987 | 6.0 | 30000 | 0.7094 | 0.87 | 0.8336 | 0.82 | 0.87 |
| 0.6984 | 7.0 | 35000 | 0.5377 | 0.885 | 0.8471 | 0.8290 | 0.885 |
| 0.6657 | 8.0 | 40000 | 0.6278 | 0.875 | 0.8373 | 0.8213 | 0.875 |
| 0.6695 | 9.0 | 45000 | 0.6323 | 0.88 | 0.8421 | 0.8240 | 0.88 |
| 0.6352 | 10.0 | 50000 | 0.5764 | 0.885 | 0.8459 | 0.8255 | 0.885 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
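## How to use
No inference example is provided; the sketch below assumes the standard sequence-classification API and a ProtBert-style input format (amino acids separated by spaces). The example sequence and the meaning of the predicted label are illustrative assumptions, not taken from BindingDB:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "nepp1d0/ProtBert-finetuned-proteinBindingDB"
tokenizer = AutoTokenizer.from_pretrained(model_id, do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# ProtBert-style models expect amino-acid sequences with spaces between residues;
# this sequence is a made-up example, not a real BindingDB entry.
sequence = "M K T A Y I A K Q R Q I S F V K S"
inputs = tokenizer(sequence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(predicted_class, model.config.id2label[predicted_class])
```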
|
BigSalmon/MediumInformalToFormalLincoln | 65ed28efec7bbff6b210c3847a211e209e68de89 | 2022-04-04T22:25:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MediumInformalToFormalLincoln | 2 | null | transformers | 25,392 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln")
```
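The prompt blocks below are fed to the model as plain text and completed with `generate`. A minimal sketch of that loop, continuing from the snippet above (the decoding settings are illustrative, not prescribed by the card):
```python
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```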
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, to prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |
Danastos/nq_bert_el | 4e41b52b649d5dce06decb985d2a47d932e38fba | 2022-04-05T03:24:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:Danastos/nq_el_custom",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Danastos | null | Danastos/nq_bert_el | 2 | null | transformers | 25,393 | ---
tags:
- generated_from_trainer
datasets:
- Danastos/nq_el_custom
model-index:
- name: nq_bert_el
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nq_bert_el
This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on the Danastos/nq_el_custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.0.0
- Tokenizers 0.11.6
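## How to use
No inference example is included in the card; the sketch below assumes the standard `question-answering` pipeline, and the Greek question/context pair is invented for illustration:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Danastos/nq_bert_el")

# Illustrative Greek example: "What is the capital of Greece?"
result = qa(
    question="Ποια είναι η πρωτεύουσα της Ελλάδας;",
    context="Η Αθήνα είναι η πρωτεύουσα της Ελλάδας και η μεγαλύτερη πόλη της χώρας.",
)
print(result["answer"], result["score"])
```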
|
creynier/wav2vec2-base-swbd-turn-eos-full | 05ea35c0730bb8022f57daad7cbeee3cb6775f16 | 2022-04-10T20:45:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | creynier | null | creynier/wav2vec2-base-swbd-turn-eos-full | 2 | null | transformers | 25,394 | Entry not found |
ZZ99/NBME_TAPT_deberta_base | 35542bd127b5ee9435915f86d6d356fbfde49390 | 2022-04-05T01:16:58.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | fill-mask | false | ZZ99 | null | ZZ99/NBME_TAPT_deberta_base | 2 | null | transformers | 25,395 | ---
license: afl-3.0
---
|
Bistolero/EXP_TWO_EP | c55f284eeb7d528f90eb3a4236e1e055b44b5f02 | 2022-04-04T23:34:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/EXP_TWO_EP | 2 | null | transformers | 25,396 | Entry not found |
mgreenbe/607-live-demo-yelp-polarity | c1b046732b80eed8a6cbc7cfd4da590c448d9d29 | 2022-04-05T00:30:47.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | mgreenbe | null | mgreenbe/607-live-demo-yelp-polarity | 2 | null | transformers | 25,397 | Demo model trained for 1 epoch on 4096 examples from the `yelp_polarity` dataset. |
huggingtweets/zei_squirrel | cf5769a7bb3bfab8ad855e94a9a7e3c0ea0b16e9 | 2022-04-05T00:41:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zei_squirrel | 2 | null | transformers | 25,398 | ---
language: en
thumbnail: http://www.huggingtweets.com/zei_squirrel/1649119290934/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/951980805542350848/Xx1LczLK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">☀️👀</div>
<div style="text-align: center; font-size: 14px;">@zei_squirrel</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ☀️👀.
| Data | ☀️👀 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 96 |
| Short tweets | 276 |
| Tweets kept | 2877 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wdkqqknq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zei_squirrel's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rrz7w9d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rrz7w9d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zei_squirrel')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
luffycodes/reg-roberta-small-mrpc | f33735530b7d4369487ed5ad456e404baeb31e03 | 2022-04-05T03:47:52.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | luffycodes | null | luffycodes/reg-roberta-small-mrpc | 2 | null | transformers | 25,399 | Entry not found |