modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
alipsezzar/DialoGPT-medium-harrypotter | a661dc6ff91ace040576199d23ca2ee66bf6ecbc | 2021-08-28T18:46:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | alipsezzar | null | alipsezzar/DialoGPT-medium-harrypotter | 6 | null | transformers | 15,100 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
alireza7/PEGASUS-persian-base-parsinlu-textual-entailment | 72f9dee144dff8b2d593a1ed20039e101c1524e2 | 2021-09-29T19:25:38.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-parsinlu-textual-entailment | 6 | null | transformers | 15,101 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
allenai/dsp_roberta_base_dapt_reviews_tapt_imdb_70000 | 11865e21518b1e7567c2276a8d388926fcb9435b | 2021-05-20T13:19:37.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_dapt_reviews_tapt_imdb_70000 | 6 | null | transformers | 15,102 | Entry not found |
allenai/dsp_roberta_base_tapt_hyperpartisan_news_5015 | 6c81ba558dd7c8f9421ca0bd89a50b3656cfc79c | 2021-05-20T13:26:31.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_tapt_hyperpartisan_news_5015 | 6 | null | transformers | 15,103 | Entry not found |
allenai/t5-small-squad2-next-word-generator-squad | 363feafd44f305bed9133e7b72994729a92c4c1d | 2021-06-23T11:15:36.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | allenai | null | allenai/t5-small-squad2-next-word-generator-squad | 6 | null | transformers | 15,104 | Next word generator trained on questions. Receives partial questions and tries to predict the next word.
Example use:
```python
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer
model_name = "allenai/t5-small-squad2-next-word-generator-squad"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)
    return output
run_model("Which")
run_model("Which two")
run_model("Which two counties")
run_model("Which two counties are")
run_model("Which two counties are the")
run_model("Which two counties are the biggest")
run_model("Which two counties are the biggest economic")
run_model("Which two counties are the biggest economic powers")
```
which should result in the following:
```
['one']
['statements']
['are']
['in']
['most']
['in']
['zones']
['of']
```
|
amild01/GPT2-german-chefkoch | b190d55866cbfaadc7e0e11c70c5d2c61624bc30 | 2021-09-08T16:01:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | amild01 | null | amild01/GPT2-german-chefkoch | 6 | null | transformers | 15,105 | Entry not found |
amyma21/sincere_question_classification | 9d979a7e9720f8f54d8c8458f181521bb7efdcce | 2021-12-01T03:38:43.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | amyma21 | null | amyma21/sincere_question_classification | 6 | null | transformers | 15,106 | Entry not found |
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42 | 1b1b353119e9f21f75a4558f0d14c520e90ee990 | 2022-02-21T19:28:40.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42 | 6 | null | transformers | 15,107 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
{'exact_match': 40.91769157994324, 'f1': 52.89154394730339}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
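### Example usage (sketch)
The card leaves usage unspecified; as a minimal sketch (not part of the original card), the checkpoint can be queried with the standard `transformers` question-answering pipeline. The question and context strings below are illustrative only:
```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint named above.
qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42",
)

# Illustrative SQuAD-style inputs.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], result["score"])
```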
|
anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-42 | 2c280e181bd5df4ff4f398a56a3ea3b9fd5824fd | 2022-02-21T20:36:42.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-42 | 6 | null | transformers | 15,108 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-1024-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-1024-finetuned-squad-seed-42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
{'exact_match': 66.90633869441817, 'f1': 77.54482247690522}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat | 0332187a5e80bc4db2de066ff926c81b79c062ef | 2021-10-04T14:52:03.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible"
]
| question-answering | false | andi611 | null | andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat | 6 | null | transformers | 15,109 | ---
language:
- en
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
- conll2003
model_index:
- name: bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: squad_v2
type: squad_v2
args: conll2003
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
andi611/roberta-base-ner-conll2003 | 854431601c22441ab430f343d2332fdb4513f281 | 2021-07-14T00:25:37.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | andi611 | null | andi611/roberta-base-ner-conll2003 | 6 | null | transformers | 15,110 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
model_index:
- name: roberta-base-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0814
- eval_precision: 0.9101
- eval_recall: 0.9336
- eval_f1: 0.9217
- eval_accuracy: 0.9799
- eval_runtime: 10.2964
- eval_samples_per_second: 315.646
- eval_steps_per_second: 39.529
- epoch: 1.14
- step: 500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
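### Example usage (sketch)
The card does not show inference code; below is a minimal sketch using the `transformers` token-classification pipeline. The example sentence is illustrative, and `aggregation_strategy` is a standard pipeline option rather than something specified by the card:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="andi611/roberta-base-ner-conll2003",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)

# Illustrative sentence containing PER, ORG and LOC entities.
print(ner("Barack Obama visited Microsoft headquarters in Redmond."))
```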
|
angiquer/twitterko-cha-electra-base-discriminator | a8c4acb877c85775dfdb5c9edcd5d90f09db7d21 | 2020-07-07T04:33:22.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| null | false | angiquer | null | angiquer/twitterko-cha-electra-base-discriminator | 6 | null | transformers | 15,111 | Entry not found |
anhtunguyen98/xlm-base-vi | 1861bd2fb6d489d77fddc3a658589a58b4f05cbe | 2021-10-12T09:32:28.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | anhtunguyen98 | null | anhtunguyen98/xlm-base-vi | 6 | null | transformers | 15,112 | Entry not found |
anirudh21/bert-base-uncased-finetuned-cola | 27dd929ba48d873adccabddc821b6c5f6c85362a | 2022-01-24T16:29:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | anirudh21 | null | anirudh21/bert-base-uncased-finetuned-cola | 6 | null | transformers | 15,113 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5796941781913538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9664
- Matthews Correlation: 0.5797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
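As a rough illustration (not the authors' training script), the hyperparameters listed above correspond to a `transformers.TrainingArguments` configuration along these lines; the `output_dir` and evaluation cadence are assumptions:
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; the Adam betas/epsilon above are the Trainer defaults.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-cola",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results table
)
```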
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5017 | 1.0 | 535 | 0.5252 | 0.4841 |
| 0.2903 | 2.0 | 1070 | 0.5550 | 0.4967 |
| 0.1839 | 3.0 | 1605 | 0.7295 | 0.5634 |
| 0.1132 | 4.0 | 2140 | 0.7762 | 0.5702 |
| 0.08 | 5.0 | 2675 | 0.9664 | 0.5797 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
anirudh21/bert-base-uncased-finetuned-rte | 8654d2325e2586959248b477e864338d6b079570 | 2022-01-27T06:57:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | anirudh21 | null | anirudh21/bert-base-uncased-finetuned-rte | 6 | null | transformers | 15,114 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6642599277978339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-rte
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8075
- Accuracy: 0.6643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 0.6777 | 0.5668 |
| No log | 2.0 | 126 | 0.6723 | 0.6282 |
| No log | 3.0 | 189 | 0.7238 | 0.6318 |
| No log | 4.0 | 252 | 0.7993 | 0.6354 |
| No log | 5.0 | 315 | 0.8075 | 0.6643 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
anirudh21/xlnet-base-cased-finetuned-wnli | da0ad48d4a41120124e59c67decf37f65039fb5c | 2022-01-13T13:52:38.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | anirudh21 | null | anirudh21/xlnet-base-cased-finetuned-wnli | 6 | null | transformers | 15,115 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: xlnet-base-cased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-wnli
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6874
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.7209 | 0.5352 |
| No log | 2.0 | 80 | 0.6874 | 0.5634 |
| No log | 3.0 | 120 | 0.6908 | 0.5634 |
| No log | 4.0 | 160 | 0.6987 | 0.4930 |
| No log | 5.0 | 200 | 0.6952 | 0.5634 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anurag0077/distilbert-base-uncased-finetuned-squad3 | 3ec1ef82b3a7f6d5e72f7d20c557aedab93d438c | 2021-11-07T15:22:01.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | anurag0077 | null | anurag0077/distilbert-base-uncased-finetuned-squad3 | 6 | null | transformers | 15,116 | Entry not found |
anuragshas/wav2vec2-large-xls-r-300m-ur-cv8 | b08a8bf230c8da5952c917193a38add952fed530 | 2022-03-24T11:57:44.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xls-r-300m-ur-cv8 | 6 | null | transformers | 15,117 | ---
language:
- ur
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-ur-cv8
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: ur
metrics:
- type: wer
value: 42.376
name: Test WER
- name: Test CER
type: cer
value: 18.18
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ur-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1443
- Wer: 0.5677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.6269 | 15.98 | 400 | 3.3246 | 1.0 |
| 3.0546 | 31.98 | 800 | 2.8148 | 0.9963 |
| 1.4589 | 47.98 | 1200 | 1.0237 | 0.6584 |
| 1.0911 | 63.98 | 1600 | 0.9524 | 0.5966 |
| 0.8879 | 79.98 | 2000 | 0.9827 | 0.5822 |
| 0.7467 | 95.98 | 2400 | 0.9923 | 0.5840 |
| 0.6427 | 111.98 | 2800 | 0.9988 | 0.5714 |
| 0.5685 | 127.98 | 3200 | 1.0872 | 0.5807 |
| 0.5068 | 143.98 | 3600 | 1.1194 | 0.5822 |
| 0.463 | 159.98 | 4000 | 1.1138 | 0.5692 |
| 0.4212 | 175.98 | 4400 | 1.1232 | 0.5714 |
| 0.4056 | 191.98 | 4800 | 1.1443 | 0.5677 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ur-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ur --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-ur-cv8"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ur", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "اب نے ٹ پیس ان لیتے ہیں"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 52.146 | 42.376 | |
anusha/t5-base-finetuned-wikiSQL-sql-to-en | f6537f8d2d143f263847d5c3bcb4c4d4c846cf95 | 2021-06-23T12:03:42.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | anusha | null | anusha/t5-base-finetuned-wikiSQL-sql-to-en | 6 | null | transformers | 15,118 | Entry not found |
aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616 | 8221e4cf888cbd9a2fb268fad323c564f503ea8b | 2021-05-18T23:48:58.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616 | 6 | null | transformers | 15,119 | # BERT L-2 H-512 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with [2 Transformer layers and hidden embedding of size 512](https://huggingface.co/google/bert_uncased_L-2_H-512_A-8), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-2_H-512_A-8 \
    --do_train \
    --train_data_file {cord19-200616-dataset} \
    --mlm \
    --mlm_probability 0.2 \
    --line_by_line \
    --block_size 512 \
    --per_device_train_batch_size 20 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-2_H-512_A-8_cord19-200616
```
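## Example usage (sketch)
For completeness, a minimal sketch of querying the resulting checkpoint with the fill-mask pipeline (the input sentence is illustrative and not from the original card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616")

# Illustrative CORD-19-style sentence; [MASK] is BERT's mask token.
for prediction in fill_mask("The virus is transmitted mainly through respiratory [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```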
|
aodiniz/bert_uncased_L-2_H-512_A-8_squad2 | 05b7c951ad18d2293f16b3529986e361e2469786 | 2021-05-18T23:50:11.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-2_H-512_A-8_squad2 | 6 | null | transformers | 15,120 | Entry not found |
aodiniz/bert_uncased_L-4_H-512_A-8_squad2_covid-qna | bcad9d9c73311a578d2b0e723e936bde8397d3c8 | 2021-05-18T23:55:10.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | aodiniz | null | aodiniz/bert_uncased_L-4_H-512_A-8_squad2_covid-qna | 6 | null | transformers | 15,121 | Entry not found |
arampacha/wav2vec2-large-xlsr-ukrainian | b36f5ea842f39e39e6e3be2208c4591aa68873c1 | 2021-07-05T22:02:32.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | arampacha | null | arampacha/wav2vec2-large-xlsr-ukrainian | 6 | 1 | transformers | 15,122 | ---
language: uk
dataset: common_voice
metrics: wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Ukrainian XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice uk
type: common_voice
args: uk
metrics:
- name: Test WER
type: wer
value: 29.89
---
# Wav2Vec2-Large-XLSR-53-Ukrainian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice) and a sample of the [M-AILABS Ukrainian Corpus](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "uk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "uk", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", '«', '»', '—', '…', '(', ')', '*', '”', '“']
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize characters
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(re.compile("['`]"), '’', batch['sentence'])
    batch["sentence"] = re.sub(re.compile(chars_to_ignore_regex), '', batch["sentence"]).lower().strip()
    batch["sentence"] = re.sub(re.compile('i'), 'і', batch['sentence'])
    batch["sentence"] = re.sub(re.compile('o'), 'о', batch['sentence'])
    batch["sentence"] = re.sub(re.compile('a'), 'а', batch['sentence'])
    batch["sentence"] = re.sub(re.compile('ы'), 'и', batch['sentence'])
    batch["sentence"] = re.sub(re.compile("–"), '', batch['sentence'])
    batch['sentence'] = re.sub(' ', ' ', batch['sentence'])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.89
## Training
The model was trained on the Common Voice `train` and `validation` splits together with a sample of the M-AILABS Ukrainian corpus.
The script used for training will be available [here](https://github.com/arampacha/hf-sprint-xlsr) soon. |
ardauzunoglu/gp-classification | 2a955564ea944c6d7767a1c27c3825ba66440a01 | 2022-02-08T10:48:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | ardauzunoglu | null | ardauzunoglu/gp-classification | 6 | 1 | transformers | 15,123 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: gp-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gp-classification
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
- Accuracy: 0.9997
- F1: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0215 | 1.0 | 956 | 0.0051 | 0.9987 | 0.9987 |
| 0.0033 | 2.0 | 1912 | 0.0088 | 0.9984 | 0.9985 |
| 0.001 | 3.0 | 2868 | 0.0036 | 0.9995 | 0.9995 |
| 0.0005 | 4.0 | 3824 | 0.0012 | 0.9997 | 0.9997 |
| 0.0 | 5.0 | 4780 | 0.0013 | 0.9997 | 0.9997 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
arianpasquali/distilbert-base-uncased-finetuned-clinc | 8b44c293d2b0c45888b94eb481a79d3a789bde7f | 2022-01-31T20:09:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | arianpasquali | null | arianpasquali/distilbert-base-uncased-finetuned-clinc | 6 | null | transformers | 15,124 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9112903225806451
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7751
- Accuracy: 0.9113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.315 | 1.0 | 318 | 3.3087 | 0.74 |
| 2.6371 | 2.0 | 636 | 1.8833 | 0.8381 |
| 1.5388 | 3.0 | 954 | 1.1547 | 0.8929 |
| 1.0076 | 4.0 | 1272 | 0.8590 | 0.9071 |
| 0.79 | 5.0 | 1590 | 0.7751 | 0.9113 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
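### Example usage (sketch)
No inference example is included in the card; below is a minimal sketch with the `transformers` text-classification pipeline. The utterance is an illustrative banking-style intent, not taken from clinc_oos:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arianpasquali/distilbert-base-uncased-finetuned-clinc",
)

# Illustrative utterance; the model returns one of the clinc_oos intent labels.
print(classifier("Can you transfer 100 dollars from checking to savings?"))
```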
|
armheb/DNA_bert_4 | c8499f0744a3dc8ba47c44c0af8cbd7244597ce9 | 2021-10-10T22:35:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | armheb | null | armheb/DNA_bert_4 | 6 | null | transformers | 15,125 | Entry not found |
arnolfokam/mbert-base-uncased-kin | d0783a61fa6bdee932f06b41ef3278c276b77e6f | 2021-11-24T11:13:53.000Z | [
"pytorch",
"bert",
"token-classification",
"kin",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | arnolfokam | null | arnolfokam/mbert-base-uncased-kin | 6 | null | transformers | 15,126 | ---
language:
- kin
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n’u Rwanda, bushingiye nanone ku bufatanye hagati y’imigabane ya Afurika n’u Burayi."
---
# Model description
**mbert-base-uncased-kin** is a model based on the fine-tuned multilingual BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production systems.
- The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-kin**| 81.35 | 83.98 | 82.64
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
``` |
arnolfokam/mbert-base-uncased-ner-pcm | 2c279321f24da3c545b3da70c1e3cd3f6ddee372 | 2021-11-24T21:17:06.000Z | [
"pytorch",
"bert",
"token-classification",
"pcm",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | arnolfokam | null | arnolfokam/mbert-base-uncased-ner-pcm | 6 | null | transformers | 15,127 | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
---
# Model description
**mbert-base-uncased-ner-pcm** is a model based on the fine-tuned Multilingual BERT base uncased model, previously fine-tuned for Named Entity Recognition using 10 high-resourced languages. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production systems.
- The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-ner-pcm**| 90.38 | 82.44 | 86.23
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-ner-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-ner-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` |
arnolfokam/mbert-base-uncased-pcm | 45701e0db4e54cbe319e27e871c983ead29d9c2a | 2021-11-24T21:17:52.000Z | [
"pytorch",
"bert",
"token-classification",
"pcm",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | arnolfokam | null | arnolfokam/mbert-base-uncased-pcm | 6 | null | transformers | 15,128 | ---
language:
- pcm
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
---
# Model description
**mbert-base-uncased-pcm** is a model based on the fine-tuned Multilingual BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production systems.
- The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-pcm**| 90.46 | 83.23 | 86.69
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-pcm")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
``` |
arnolfokam/mbert-base-uncased-swa | 6b26bbbbd233c7f4870b1757602102a96c9bde96 | 2021-11-24T11:35:54.000Z | [
"pytorch",
"bert",
"token-classification",
"swa",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | arnolfokam | null | arnolfokam/mbert-base-uncased-swa | 6 | null | transformers | 15,129 | ---
language:
- swa
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
---
# Model description
**mbert-base-uncased-swa** is a model based on the fine-tuned Multilingual BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production systems.
- The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-swa**| 85.59 | 90.80 | 88.12
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` |
lmqg/t5-base-squad-no-answer | 1bc6a63a0adc403508f21417dc8eeb519c3bf796 | 2022-06-01T00:24:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | lmqg | null | lmqg/t5-base-squad-no-answer | 6 | null | transformers | 15,130 | Entry not found |
asanka25/xlm-roberta-base-finetuned-conll03-english-finetuned-sinhala | 084e5fe75a47d027f2f14429d4cfd06440d1abe0 | 2022-01-23T10:59:51.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | asanka25 | null | asanka25/xlm-roberta-base-finetuned-conll03-english-finetuned-sinhala | 6 | null | transformers | 15,131 | This model was created using xlm-roberta-base bodel and fine-tuned it using CoNLL 2003 dataset. On top of the trained model, we trained it again using a Sinhala NER data that was also formatted to the CoNLL format. |
aseda/t5-small-finetuned-xsum | ff3864f7d459e84eddf2ed81e9ace06ebf3c4ec7 | 2021-12-04T04:10:06.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | aseda | null | aseda/t5-small-finetuned-xsum | 6 | null | transformers | 15,132 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ashraq/dv-electra-small-news-classification | 8fa0af4bb46ff5eb035cf7e5655d8211bfeda13e | 2021-11-03T22:31:07.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | ashraq | null | ashraq/dv-electra-small-news-classification | 6 | null | transformers | 15,133 | ---
widget:
- text: 'ގޫގަލް ޕިކްސަލް 6 ގެ ކެމެރާ، އޭއައި ގެ ޖާދޫއިން ފުރިފައި'
---
# The [ELECTRA-small](https://huggingface.co/ashraq/dv-electra-small) fine-tuned for news classification in Dhivehi |
avneet/distilbert-base-uncased-finetuned-cola | 63e310a0eb16d2101f01d7d084edb7a5ea8f7017 | 2021-07-30T00:15:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | avneet | null | avneet/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 15,134 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model_index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metric:
name: Matthews Correlation
type: matthews_correlation
value: 0.42176824452830747
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4981
- Matthews Correlation: 0.4218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5248 | 1.0 | 535 | 0.4981 | 0.4218 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
baffo32/gpt-j-6B-ptmap | dc46f904300794494bedc4abbbadfd0f94008eb9 | 2021-12-25T15:16:26.000Z | [
"pytorch",
"gptj",
"text-generation",
"en",
"dataset:The Pile",
"arxiv:2104.09864",
"arxiv:2101.00027",
"transformers",
"causal-lm",
"license:apache-2.0"
]
| text-generation | false | baffo32 | null | baffo32/gpt-j-6B-ptmap | 6 | null | transformers | 15,135 | ---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
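As a quick cross-check of the table above, the same numbers can be read off the model configuration; this sketch assumes the attribute names used by the upstream `EleutherAI/gpt-j-6B` GPT-J config:
```python
from transformers import AutoConfig

# Attribute names assume the GPT-J configuration class of the upstream checkpoint.
config = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
print(config.n_layer, config.n_embd, config.n_head, config.rotary_dim, config.vocab_size)
# Expected to match the table above: 28 4096 16 64 50400
```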
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
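A short generation call continuing from the loaded `model` and `tokenizer` above; the prompt and sampling settings are illustrative choices, not recommendations from the model authors:
```python
# Illustrative prompt and sampling settings (assumed, not prescribed by the card).
inputs = tokenizer("The meaning of life is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```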
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who has helped out in one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend. |
benjaminbeilharz/bert-base-uncased-next-turn-classifier | c63f0227ca981c07ce53b32368d259e0f96a8957 | 2022-02-22T17:23:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | benjaminbeilharz | null | benjaminbeilharz/bert-base-uncased-next-turn-classifier | 6 | null | transformers | 15,136 | Entry not found |
benjaminbeilharz/distilbert-dailydialog-turn-classifier | 0cf1579663312bd6cb08035ed2aef764463241e1 | 2022-01-22T19:16:56.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | benjaminbeilharz | null | benjaminbeilharz/distilbert-dailydialog-turn-classifier | 6 | null | transformers | 15,137 | Entry not found |
beomi/distilbert-base-uncased-finetuned-cola | c292e13f89d8d04d7bc5636afaa1874ae8a1e34f | 2021-10-18T11:22:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | beomi | null | beomi/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 15,138 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5552849676135797
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7525
- Matthews Correlation: 0.5553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.523 | 1.0 | 535 | 0.5024 | 0.4160 |
| 0.3437 | 2.0 | 1070 | 0.5450 | 0.4965 |
| 0.2326 | 3.0 | 1605 | 0.6305 | 0.5189 |
| 0.177 | 4.0 | 2140 | 0.7525 | 0.5553 |
| 0.1354 | 5.0 | 2675 | 0.8630 | 0.5291 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
berkergurcay/finetuned-bert-base-uncased | 15e9df80c49888acb145190da61af85546dac835 | 2021-05-26T13:33:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | berkergurcay | null | berkergurcay/finetuned-bert-base-uncased | 6 | null | transformers | 15,139 | Entry not found |
bestvater/distilbert-kav-stance | 0e0d5ef3a627f15c008463da084e180103eb629e | 2021-10-04T17:00:17.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | bestvater | null | bestvater/distilbert-kav-stance | 6 | null | transformers | 15,140 | Entry not found |
bigjoedata/rockbot-scratch | e4329e2aac67e457c4fbcd802a64f9514ea5658e | 2021-05-21T14:15:08.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | bigjoedata | null | bigjoedata/rockbot-scratch | 6 | null | transformers | 15,141 |
# 🎸 🥁 Rockbot 🎤 🎧
A [GPT-2](https://openai.com/blog/better-language-models/) based lyrics generator fine-tuned on the writing styles of 16000 songs by 270 artists across MANY genres (not just rock).
**Instructions:** Type in a fake song title, pick an artist, click "Generate".
Most language models are imprecise and Rockbot is no exception. You may see NSFW lyrics unexpectedly. I have made no attempts to censor. Generated lyrics may be repetitive and/or incoherent at times, but hopefully you'll encounter something interesting or memorable.
Oh, and generation is resource-intensive and can be slow. I set governors on song length to keep generation time somewhat reasonable. You may adjust song length and other parameters on the left or check out [Github](https://github.com/bigjoedata/rockbot) to spin up your own Rockbot.
Just have fun.
[Demo](https://share.streamlit.io/bigjoedata/rockbot/main/src/main.py) Adjust settings to increase speed
[Github](https://github.com/bigjoedata/rockbot)
[GPT-2 124M version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot)
[DistilGPT2 version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot-distilgpt2/) This is leaner with the tradeoff being that the lyrics are more simplistic.
🎹 🪘 🎷 🎺 🪗 🪕 🎻
## Background
With the shutdown of [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music), I used Google's takeout function to gather the metadata from artists I've listened to over the past several years. I wanted to take advantage of this bounty to build something fun. I scraped the top 50 lyrics for artists I'd listened to at least once from [Genius](https://genius.com/), then fine-tuned [GPT-2's](https://openai.com/blog/better-language-models/) 124M-parameter model using the [AITextGen](https://github.com/minimaxir/aitextgen) framework after considerable post-processing. For more on generation, see [here.](https://huggingface.co/blog/how-to-generate)
### Full Tech Stack
[Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) (R.I.P.).
[Python](https://www.python.org/).
[Streamlit](https://www.streamlit.io/).
[GPT-2](https://openai.com/blog/better-language-models/).
[AITextGen](https://github.com/minimaxir/aitextgen).
[Pandas](https://pandas.pydata.org/).
[LyricsGenius](https://lyricsgenius.readthedocs.io/en/master/).
[Google Colab](https://colab.research.google.com/) (GPU based Training).
[Knime](https://www.knime.com/) (data cleaning).
## How to Use The Model
Please refer to [AITextGen](https://github.com/minimaxir/aitextgen) for much better documentation.
### Training Parameters Used
```python
ai.train("lyrics.txt",
         line_by_line=False,
         from_cache=False,
         num_steps=10000,
         generate_every=2000,
         save_every=2000,
         save_gdrive=False,
         learning_rate=1e-3,
         batch_size=3,
         eos_token="<|endoftext|>",
         #fp16=True
         )
```
### To Use
Generate with a prompt (use Title Case):
```
Song Name
BY
Artist Name
```
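For generation outside the hosted demo, a minimal sketch with the standard `transformers` text-generation pipeline; the song title, artist, and sampling settings below are illustrative assumptions:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and sample lyrics from a title/artist prompt.
generator = pipeline("text-generation", model="bigjoedata/rockbot-scratch")
prompt = "Highway Of Broken Dreams\nBY\nTom Petty And The Heartbreakers\n"
print(generator(prompt, max_length=200, do_sample=True, top_p=0.95)[0]["generated_text"])
```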
|
bleachybrain/DialoGPT-med-ss | 14ba95f0ed7c65d4c9b7b1703011d46e90dd1900 | 2022-04-27T01:50:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | bleachybrain | null | bleachybrain/DialoGPT-med-ss | 6 | null | transformers | 15,142 | ---
tags:
- conversational
---
# ss |
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1 | f43a47a15787b55cc0c87cc9aba3efb19ea6e252 | 2021-09-15T08:14:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | blizrys | null | blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1 | 6 | null | transformers | 15,143 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-1
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6660
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.8471 | 0.58 |
| No log | 2.0 | 114 | 0.8450 | 0.58 |
| No log | 3.0 | 171 | 0.7846 | 0.58 |
| No log | 4.0 | 228 | 0.8649 | 0.58 |
| No log | 5.0 | 285 | 0.7220 | 0.68 |
| No log | 6.0 | 342 | 0.7395 | 0.66 |
| No log | 7.0 | 399 | 0.7198 | 0.72 |
| No log | 8.0 | 456 | 0.6417 | 0.72 |
| 0.7082 | 9.0 | 513 | 0.6265 | 0.74 |
| 0.7082 | 10.0 | 570 | 0.6660 | 0.7 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2 | ed3779c6f8b6efc5e314de974b0519be9cb548fd | 2021-09-17T10:08:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | blizrys | null | blizrys/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2 | 6 | null | transformers | 15,144 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- null
metrics:
- accuracy
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.54
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-finetuned-pubmedqa-2
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0005
- Accuracy: 0.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 1.3510 | 0.54 |
| No log | 2.0 | 114 | 0.9606 | 0.54 |
| No log | 3.0 | 171 | 0.9693 | 0.54 |
| No log | 4.0 | 228 | 1.0445 | 0.54 |
| No log | 5.0 | 285 | 1.0005 | 0.54 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
blizrys/distilbert-base-uncased-finetuned-cola | 19b0cfacb2162df1a3218fdd9db40d0c579e9d75 | 2021-09-11T18:01:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | blizrys | null | blizrys/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 15,145 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5373623427702773
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6223
- Matthews Correlation: 0.5374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5275 | 1.0 | 535 | 0.5456 | 0.3973 |
| 0.3481 | 2.0 | 1070 | 0.5401 | 0.5006 |
| 0.242 | 3.0 | 1605 | 0.6223 | 0.5374 |
| 0.1725 | 4.0 | 2140 | 0.7934 | 0.5229 |
| 0.1346 | 5.0 | 2675 | 0.8478 | 0.5367 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
bonebambi/DialoGPT-small-ThakirClone | 76969deb9413d257a8c066bf803d60537e3c8f77 | 2021-10-11T20:02:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | bonebambi | null | bonebambi/DialoGPT-small-ThakirClone | 6 | null | transformers | 15,146 | ---
tags:
- conversational
---
# Personal DialoGPT Model |
boronbrown48/topic_otherTopics_v1 | 8d09763993a9b96860a5dca0b45ca1920d642724 | 2021-11-24T17:20:55.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
]
| text-classification | false | boronbrown48 | null | boronbrown48/topic_otherTopics_v1 | 6 | null | transformers | 15,147 | Entry not found |
boronbrown48/wangchanberta-sentiment-504-v3 | 27e623cc2d364db2dd316bd8928caaa31ae9f20b | 2021-11-25T03:11:04.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
]
| text-classification | false | boronbrown48 | null | boronbrown48/wangchanberta-sentiment-504-v3 | 6 | null | transformers | 15,148 | Entry not found |
boychaboy/MNLI_bert-base-cased_4 | 191195bddf441464ef1cad8e38e2997815ddb105 | 2021-05-19T13:14:43.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/MNLI_bert-base-cased_4 | 6 | null | transformers | 15,149 | Entry not found |
boychaboy/MNLI_bert-base-uncased_2 | a49c672b854b1cb37d48bd6ec652e1cd7ca2d200 | 2021-05-19T13:16:58.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/MNLI_bert-base-uncased_2 | 6 | null | transformers | 15,150 | Entry not found |
boychaboy/SNLI_bert-large-uncased | ef7c11b75a17ea8ca8f28a096ad09581c6b5cc5f | 2021-05-19T13:29:35.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/SNLI_bert-large-uncased | 6 | null | transformers | 15,151 | Entry not found |
bs-modeling-metadata/html-metadata-exp1-subexp1-1857108 | 587ff5a530a75d5cc269dfcdcc1d2326c54c3e00 | 2021-11-13T09:25:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | bs-modeling-metadata | null | bs-modeling-metadata/html-metadata-exp1-subexp1-1857108 | 6 | null | transformers | 15,152 | ---
widget:
- text: " htmlOn ||| <div"
---
# Work In Progress
# How to use?
To generate text with HTML, the sentence must start with ` htmlOn |||` (note the space at the beginning 😉). To generate normal text, you don't need to add anything.
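A minimal sketch of both modes with the `transformers` text-generation pipeline; the prompts and sampling settings are illustrative assumptions:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bs-modeling-metadata/html-metadata-exp1-subexp1-1857108")

# HTML mode: the prompt must start with " htmlOn |||" (note the leading space).
print(generator(" htmlOn ||| <div", max_length=64, do_sample=True)[0]["generated_text"])

# Plain-text mode: no prefix needed.
print(generator("The history of the city begins", max_length=64, do_sample=True)[0]["generated_text"])
```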
# Training details
We continued the pre-training of [gpt2](https://huggingface.co/gpt2).
Dataset:[Natural_Questions_HTML_reduced_all](https://huggingface.co/datasets/SaulLu/Natural_Questions_HTML_reduced_all)
50% of the examples in the training data contained all HTML tags with only their `id` and `class` attributes. 50% of the examples were just plain text.
Training example with metadata:
```
htmlOn |||
<li id:n-sitesupport><a>Donate to Wikipedia</a></li>
<li id:n-shoplink><a>Wikipedia store</a></li>
</ul></div></div><div class:portal id:p-interaction><h3 id:p-interaction-label>Interaction</h3>
<div class:body><ul><li id:n-help><a>Help</a></li>
<li id:n-aboutsite><a>About Wikipedia</a></li>
<li id:n-portal><a>Community portal</a></li>
<li id:n-recentchanges><a>Recent changes</a></li>
<li id:n-contactpage><a>Contact page</a></li>
</ul></div></div><div class:portal id:p-tb><h3 id:p-tb-label>Tools</h3>
<div class:body><ul><li id:t-whatlinkshere><a>What links here</a></li>
<li id:t-recentchangeslinked><a>Related changes</a></li>
<li id:t-upload><a>Upload file</a></li>
<li id:t-specialpages><a>Special pages</a></li>
<li id:t-permalink><a>Permanent link</a></li>
<li id:t-info><a>Page information</a></li>
<li id:t-wikibase><a>Wikidata item</a></li>
<li id:t-cite><a>Cite this page</a></li>
</ul></div></div><div class:portal id:p-coll-print_export><h3 id:p-coll-print_export-label>Print/export</h3>
<div class:body><ul><li id:coll-create_a_book><a>Create a book</a></li>
<li id:coll-download-as-rdf2latex><a>Download as PDF</a></li>
<li id:t-print><a>Printable version</a></li>
</ul></div></div><div class:portal id:p-lang><h3 id:p-lang-label>Languages</h3>
<div class:body><ul><li class:interlanguage-link interwiki-ca><a class:interlanguage-link-target>Català</a></li>
<li class:interlanguage-link interwiki-da><a class:interlanguage-link-target>Dansk</a></li>
<li class:interlanguage-link interwiki-de><a class:interlanguage-link-target>Deutsch</a></li>
<li class:interlanguage-link interwiki-es><a class:interlanguage-link-target>Español</a></li>
<li class:interlanguage-link interwiki-eu><a class:interlanguage-link-target>Euskara</a></li>
<li class:interlanguage-link interwiki-fa><a class:interlanguage-link-target>فارسی</a></li>
<li class:interlanguage-link interwiki-fr><a class:interlanguage-link-target>Français</a></li>
<li class:interlanguage-link interwiki-id><a class:interlanguage-link-target>Bahasa Indonesia</a></li>
<li class:interlanguage-link interwiki-nl><a class:interlanguage-link-target>Nederlands</a></li>
<li class:interlanguage-link interwiki-pt><a class:interlanguage-link-target>Português</a></li>
<li class:interlanguage-link interwiki-fi><a class:interlanguage-link-target>Suomi</a></li>
<li class:interlanguage-link interwiki-vi><a class:interlanguage-link-target>Tiếng Việt</a></li>
<button class:mw-interlanguage-selector mw-ui-button>5 more</button>
</ul><div class:after-portlet after-portlet-lang><span class:wb-langlinks-edit wb-langlinks-link><a class:wbc-editpage>Edit links</a></span></div>
</div></div></
```
|
bsc/roberta-base-ca-cased | d07aef1e3bf1e988ce41c8dafa592751ad64b10a | 2021-09-06T16:22:51.000Z | [
"pytorch",
"ca",
"masked-lm",
"BERTa",
"catalan",
"license:apache-2.0"
]
| null | false | bsc | null | bsc/roberta-base-ca-cased | 6 | 1 | null | 15,153 | ---
language: "ca"
tags:
- masked-lm
- BERTa
- catalan
widget:
- text: "El Català és una llengua molt <mask>."
- text: "Salvador Dalí va viure a <mask>."
- text: "La Costa Brava té les millors <mask> d'Espanya."
- text: "El cacaolat és un batut de <mask>."
- text: "<mask> és la capital de la Garrotxa."
- text: "Vaig al <mask> a buscar bolets."
- text: "Antoni Gaudí vas ser un <mask> molt important per la ciutat."
- text: "Catalunya és una referència en <mask> a nivell europeu."
license: apache-2.0
---
# BERTa: RoBERTa-based Catalan language model
<font size="+2">
<strong>
<span style="color:red">
WARNING:
</span>
</strong>
</font>
This repository is now superseded by [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca). Future updates will be released in the new repository, so it is highly recommended to load the model using the new path:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("BSC-TeMU/roberta-base-ca")
```
From now on, all models and datasets from the BSC's Text Mining Unit will be published on the [official organization account](https://huggingface.co/BSC-TeMU). |
burmaxwell/Bert_temp | 4adb721ebdb10236a1294527f12bd390a98b7ee3 | 2022-02-21T20:05:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | burmaxwell | null | burmaxwell/Bert_temp | 6 | null | transformers | 15,154 | Entry not found |
byeongal/bart-base | 6690ae39f74cc4054a942175536af3fa1d78da20 | 2021-07-07T05:58:29.000Z | [
"pytorch",
"bart",
"feature-extraction",
"en",
"transformers",
"license:mit"
]
| feature-extraction | false | byeongal | null | byeongal/bart-base | 6 | null | transformers | 15,155 | ---
license: mit
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
language: en
tags:
- bart
---
# BART base model for Teachable NLP
- This model was forked from [bart-base](https://huggingface.co/facebook/bart-base) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp).
The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.
The Authors’ code can be found here:
https://github.com/pytorch/fairseq/tree/master/examples/bart
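To load this checkpoint, a minimal sketch with the standard `transformers` BART classes, assuming the fork keeps the upstream tokenizer files (the model is tagged for feature extraction):
```python
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained("byeongal/bart-base")
model = BartModel.from_pretrained("byeongal/bart-base")

# Encode a sentence and inspect the decoder's final hidden states.
inputs = tokenizer("Hello, BART!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```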
|
caioamb/distilbert-base-uncased-finetuned-cola | 0e328dc438c3941358f45e8f392b49bb648e6f18 | 2021-11-18T21:36:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | caioamb | null | caioamb/distilbert-base-uncased-finetuned-cola | 6 | null | transformers | 15,156 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5166623535745778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7647
- Matthews Correlation: 0.5167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5294 | 1.0 | 535 | 0.5029 | 0.4356 |
| 0.3507 | 2.0 | 1070 | 0.5285 | 0.4884 |
| 0.2406 | 3.0 | 1605 | 0.6550 | 0.5138 |
| 0.1825 | 4.0 | 2140 | 0.7647 | 0.5167 |
| 0.1282 | 5.0 | 2675 | 0.8664 | 0.5074 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
caps1994/DialoGPT-small-chrisbot | 67038eab2009acdb28e06e25340b58c1380ac0e8 | 2021-09-10T20:52:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | caps1994 | null | caps1994/DialoGPT-small-chrisbot | 6 | null | transformers | 15,157 | ---
tags:
- conversational
---
# Chris DialoGPT Model |
celential/erc | 69c3afdb710fb8d06afe542e236fc6dd5e161ac0 | 2020-09-04T10:15:02.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | false | celential | null | celential/erc | 6 | null | transformers | 15,158 | Entry not found |
chinhon/pegasus-multi_news-commentaries_hdwriter | 9731cf8560f98072ab2657522cb5b237e7f97108 | 2022-01-16T10:14:41.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | chinhon | null | chinhon/pegasus-multi_news-commentaries_hdwriter | 6 | null | transformers | 15,159 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-multi_news-commentaries_hdwriter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-multi_news-commentaries_hdwriter
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7259
- Rouge1: 21.3899
- Rouge2: 6.2409
- Rougel: 16.6172
- Rougelsum: 17.808
- Gen Len: 34.7016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.847 | 1.0 | 4710 | 2.7513 | 20.5559 | 5.9762 | 16.1223 | 17.2872 | 35.81 |
| 2.6399 | 2.0 | 9420 | 2.6890 | 21.2052 | 6.0104 | 16.5753 | 17.6517 | 34.5242 |
| 2.3811 | 3.0 | 14130 | 2.6904 | 21.2358 | 6.1416 | 16.6053 | 17.7067 | 34.6157 |
| 2.2388 | 4.0 | 18840 | 2.7112 | 21.3806 | 6.1895 | 16.6909 | 17.7504 | 34.5227 |
| 2.1589 | 5.0 | 23550 | 2.7259 | 21.3899 | 6.2409 | 16.6172 | 17.808 | 34.7016 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
chisadi/nice-distilbert-v2 | 2cb9112b07e4f30502de1d17014b96fe84414aa8 | 2021-11-02T19:21:06.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | chisadi | null | chisadi/nice-distilbert-v2 | 6 | null | transformers | 15,160 | ### Distibert model finetuned on the task of classifying product descriptions to one of 45 broad [NICE classifications](https://www.wipo.int/classifications/nice/en/)
|
chmanoj/xls-r-300m-te | 6c8a9b51029f8011debfa3a6bb0bc8cf0507d352 | 2022-03-24T11:53:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"te",
"dataset:openslr",
"dataset:SLR66",
"transformers",
"openslr_SLR66",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | chmanoj | null | chmanoj/xls-r-300m-te | 6 | null | transformers | 15,161 | ---
language:
- te
license: apache-2.0
tags:
- automatic-speech-recognition
- openslr_SLR66
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- openslr
- SLR66
metrics:
- wer
model-index:
- name: xls-r-300m-te
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: openslr
name: Open SLR
args: SLR66
metrics:
- type: wer
value: 24.695121951219512
name: Test WER
- type: cer
value: 4.861934182322532
name: Test CER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR66 - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2680
- Wer: 0.3467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0304 | 4.81 | 500 | 1.5676 | 1.0554 |
| 1.5263 | 9.61 | 1000 | 0.4693 | 0.8023 |
| 1.5299 | 14.42 | 1500 | 0.4368 | 0.7311 |
| 1.5063 | 19.23 | 2000 | 0.4360 | 0.7302 |
| 1.455 | 24.04 | 2500 | 0.4213 | 0.6692 |
| 1.4755 | 28.84 | 3000 | 0.4329 | 0.5943 |
| 1.352 | 33.65 | 3500 | 0.4074 | 0.5765 |
| 1.3122 | 38.46 | 4000 | 0.3866 | 0.5630 |
| 1.2799 | 43.27 | 4500 | 0.3860 | 0.5480 |
| 1.212 | 48.08 | 5000 | 0.3590 | 0.5317 |
| 1.1645 | 52.88 | 5500 | 0.3283 | 0.4757 |
| 1.0854 | 57.69 | 6000 | 0.3162 | 0.4687 |
| 1.0292 | 62.5 | 6500 | 0.3126 | 0.4416 |
| 0.9607 | 67.31 | 7000 | 0.2990 | 0.4066 |
| 0.9156 | 72.12 | 7500 | 0.2870 | 0.4009 |
| 0.8329 | 76.92 | 8000 | 0.2791 | 0.3909 |
| 0.7979 | 81.73 | 8500 | 0.2770 | 0.3670 |
| 0.7144 | 86.54 | 9000 | 0.2841 | 0.3661 |
| 0.6997 | 91.35 | 9500 | 0.2721 | 0.3485 |
| 0.6568 | 96.15 | 10000 | 0.2681 | 0.3437 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
choondrise/emolve | 517269ec2c4d8e75a5bf811417a0be037ae3de41 | 2022-01-18T22:13:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | choondrise | null | choondrise/emolve | 6 | null | transformers | 15,162 | Entry not found |
chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents | 2ea93b79903ce92d36caaf425d6bdd8ce402d335 | 2021-09-27T19:10:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | chrommium | null | chrommium/rubert-base-cased-sentence-finetuned-sent_in_news_sents | 6 | null | transformers | 15,163 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-base-cased-sentence-finetuned-sent_in_news_sents
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7224199288256228
- name: F1
type: f1
value: 0.5137303178348194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_news_sents
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9506
- Accuracy: 0.7224
- F1: 0.5137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 1.0045 | 0.6690 | 0.1388 |
| No log | 2.0 | 162 | 0.9574 | 0.6228 | 0.2980 |
| No log | 3.0 | 243 | 1.0259 | 0.6477 | 0.3208 |
| No log | 4.0 | 324 | 1.1262 | 0.6619 | 0.4033 |
| No log | 5.0 | 405 | 1.3377 | 0.6299 | 0.3909 |
| No log | 6.0 | 486 | 1.5716 | 0.6868 | 0.3624 |
| 0.6085 | 7.0 | 567 | 1.6286 | 0.6762 | 0.4130 |
| 0.6085 | 8.0 | 648 | 1.6450 | 0.6940 | 0.4775 |
| 0.6085 | 9.0 | 729 | 1.7108 | 0.7224 | 0.4920 |
| 0.6085 | 10.0 | 810 | 1.8792 | 0.7046 | 0.5028 |
| 0.6085 | 11.0 | 891 | 1.8670 | 0.7153 | 0.4992 |
| 0.6085 | 12.0 | 972 | 1.8856 | 0.7153 | 0.4934 |
| 0.0922 | 13.0 | 1053 | 1.9506 | 0.7224 | 0.5137 |
| 0.0922 | 14.0 | 1134 | 2.0363 | 0.7189 | 0.4761 |
| 0.0922 | 15.0 | 1215 | 2.0601 | 0.7224 | 0.5053 |
| 0.0922 | 16.0 | 1296 | 2.0813 | 0.7153 | 0.5038 |
| 0.0922 | 17.0 | 1377 | 2.0960 | 0.7189 | 0.5065 |
| 0.0922 | 18.0 | 1458 | 2.1060 | 0.7224 | 0.5098 |
| 0.0101 | 19.0 | 1539 | 2.1153 | 0.7260 | 0.5086 |
| 0.0101 | 20.0 | 1620 | 2.1187 | 0.7260 | 0.5086 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
clayfox/DialoGPT-medium-Hiccup | cc89b5b1e208805f61e756a698ae751e88cb35ed | 2021-11-28T23:20:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | clayfox | null | clayfox/DialoGPT-medium-Hiccup | 6 | null | transformers | 15,164 | ---
tags:
- conversational
---
# hiccupBot medium GPT |
clee7/layoutlm-finetune-sroie | a1a16cd28179433276517a92420bfc12bdef922f | 2021-09-18T02:19:18.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | clee7 | null | clee7/layoutlm-finetune-sroie | 6 | null | transformers | 15,165 | Entry not found |
clem/autonlp-test3-2101782 | 096959098c471d43246176cee7dae24f8a85151b | 2021-06-29T04:19:34.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:clem/autonlp-data-test3",
"transformers",
"autonlp"
]
| text-classification | false | clem | null | clem/autonlp-test3-2101782 | 6 | null | transformers | 15,166 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- clem/autonlp-data-test3
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 2101782
## Validation Metrics
- Loss: 0.015991805121302605
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/clem/autonlp-test3-2101782
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("clem/autonlp-test3-2101782", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("clem/autonlp-test3-2101782", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
climatebert/distilroberta-base-climate-d | 72f751911614676b6129416de8e5aa777071a517 | 2021-10-26T08:22:01.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"arxiv:2110.12010",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | climatebert | null | climatebert/distilroberta-base-climate-d | 6 | 2 | transformers | 15,167 | ---
language: en
license: apache-2.0
---
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as a starting point, the ClimateBERT Language Model is additionally pretrained on a text corpus comprising climate-related research paper abstracts, corporate and general news, and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
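A minimal masked-language-modelling sketch with this checkpoint; the example sentence is an illustrative assumption:
```python
from transformers import pipeline

# DistilRoBERTa-based models use the <mask> token.
fill_mask = pipeline("fill-mask", model="climatebert/distilroberta-base-climate-d")
print(fill_mask("Companies must disclose their <mask> emissions."))
```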
### BibTeX entry and citation info
```bibtex
@article{wkbl2021,
title={ClimateBERT: A Pretrained Language Model for Climate-Related Text},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
journal={arXiv preprint arXiv:2110.12010},
year={2021}
}
``` |
codingJacob/distilbert-base-uncased-finetuned-ner | a341de129b927e66a648de2cccbe514a113d646b | 2022-07-26T06:35:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | codingJacob | null | codingJacob/distilbert-base-uncased-finetuned-ner | 6 | null | transformers | 15,168 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9843042559613643
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9272
- Recall: 0.9382
- F1: 0.9327
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2432 | 1.0 | 878 | 0.0689 | 0.9132 | 0.9203 | 0.9168 | 0.9813 |
| 0.0507 | 2.0 | 1756 | 0.0608 | 0.9208 | 0.9346 | 0.9276 | 0.9835 |
| 0.03 | 3.0 | 2634 | 0.0611 | 0.9272 | 0.9382 | 0.9327 | 0.9843 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
cogito233/distilbert-base-uncased-finetuned-ner | a7a0147cf94260d98e9c949ba689d7d8d1ca8695 | 2021-08-17T10:12:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | cogito233 | null | cogito233/distilbert-base-uncased-finetuned-ner | 6 | null | transformers | 15,169 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9837323462595516
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0605
- Precision: 0.9251
- Recall: 0.9357
- F1: 0.9304
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2402 | 1.0 | 878 | 0.0694 | 0.9168 | 0.9215 | 0.9191 | 0.9814 |
| 0.051 | 2.0 | 1756 | 0.0595 | 0.9249 | 0.9330 | 0.9289 | 0.9833 |
| 0.0302 | 3.0 | 2634 | 0.0605 | 0.9251 | 0.9357 | 0.9304 | 0.9837 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
cointegrated/rut5-small-chitchat2 | c76783c3b53253c077b33947be271677625adfd1 | 2022-01-16T19:40:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | cointegrated | null | cointegrated/rut5-small-chitchat2 | 6 | null | transformers | 15,170 | A version of https://huggingface.co/cointegrated/rut5-small-chitchat which is more dull but less toxic. |
damlab/HIV_V3_Coreceptor | fdafbd5a16b876b331d494a913b3429c2fc01aa8 | 2022-02-24T18:34:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | false | damlab | null | damlab/HIV_V3_Coreceptor | 6 | null | transformers | 15,171 | ---
license: mit
widget:
- text: 'C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C'
- text: 'C T R P N N N T R K S I H I G P G R A F Y T T G Q I I G D I R Q A Y C'
- text: 'C T R P N N N T R R S I R I G P G Q A F Y A T G D I I G D I R Q A H C'
- text: 'C G R P N N H R I K G L R I G P G R A F F A M G A I G G G E I R Q A H C'
---
# HIV_V3_coreceptor model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Coreceptor model was trained as a refinement of the [HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) and serves to better predict HIV V3 coreceptor tropism. HIV-BERT is a model refined from the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the [Los Alamos HIV Sequence Database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html), allowing even more precise prediction of V3 coreceptor tropism than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Coreceptor model is intended to predict the co-receptor tropism of HIV from a segment of the envelope protein. These envelope proteins encapsulate the virus and interact with the host cell through the human CD4 receptor. HIV then requires the interaction of one of two co-receptors: CCR5 or CXCR4. The availability of these co-receptors on different cell types allows the virus to invade different areas of the body and evade antiretroviral therapy. The 3rd variable loop of the envelope protein, the V3 loop, is responsible for this interaction. Given a V3 loop sequence, the HIV-BERT-Coreceptor model will predict the likelihood of binding to each of these co-receptors.
## Intended Uses & Limitations
This tool can be used as a predictor of HIV tropism from the Env-V3 loop. It can recognize R5, X4, and dual-tropic viruses natively. It should not be considered a clinical diagnostic tool.
This tool was trained using the [Los Alamos HIV sequence dataset](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe, with only minor contributions from subtypes C, A, and D. No effort has currently been made to balance performance across these classes. As such, one should consider refining the model with additional sequences if strong performance on non-B subtypes is required.
## How to use
*Need to add*
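In the meantime, a minimal inference sketch, assuming the checkpoint loads into the standard `transformers` text-classification pipeline; note the spaced amino-acid format shown in the widget examples:
```python
from transformers import pipeline

# Sequences are passed with spaces between amino acids, as in the widget examples.
classifier = pipeline("text-classification", model="damlab/HIV_V3_Coreceptor", return_all_scores=True)
v3 = "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
print(classifier(v3))
```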
## Training Data
This model was trained using the [damlab/HIV_V3_coreceptor dataset](https://huggingface.co/datasets/damlab/HIV_V3_coreceptor) using the 0th fold. The dataset consists of 2935 V3 sequences (approximately 35 tokens each) extracted from the [Los Alamos HIV Sequence database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html).
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd model](https://huggingface.co/Rostlab/prot_bert_bfd), the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
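A small sketch of this preprocessing step; the helper function itself is illustrative, not the authors' code:
```python
import re

def preprocess(sequence: str) -> str:
    """Replace rare amino acids (U, Z, O, B) with X and insert spaces between residues."""
    sequence = re.sub(r"[UZOB]", "X", sequence.upper())
    return " ".join(sequence)

print(preprocess("CTRPNNNTRKSIRIQRGPGRAFVTIGKIGNMRQAHC"))
```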
### Training
The [damlab/HIV-BERT model](https://huggingface.co/damlab/HIV_BERT) was used as the initial weights for an AutoModelForSequenceClassification. The model was trained with a learning rate of 1e-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can bind to CCR5, CXCR4, neither, or both), the loss was calculated as the binary cross-entropy for each category. The BCE was weighted by the inverse of the class ratio to correct for the class imbalance.
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed]
|
danasone/bart-small-ru-en | 701f1d01e2658527e052bd3c83515eb14440f220 | 2022-01-19T06:13:26.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | danasone | null | danasone/bart-small-ru-en | 6 | 1 | transformers | 15,172 | Entry not found |
danildany/DialoGPT-small-MichaelScott | 0a9fe60182e2d06a5732804f7ec01d15e2ed2306 | 2021-08-30T16:13:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | danildany | null | danildany/DialoGPT-small-MichaelScott | 6 | null | transformers | 15,173 | ---
tags:
- conversational
---
# Michael Scott DialoGPT Model |
danlou/distilbert-base-uncased-finetuned-rte | 37fb093db125d8f2bb6d013347dc25406354eed8 | 2022-02-07T16:25:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | danlou | null | danlou/distilbert-base-uncased-finetuned-rte | 6 | null | transformers | 15,174 | Testing |
danny481/Final_ChatBot | 3bf6fa868962cd1c32ac6e908c5bdd7c2cc74b65 | 2021-12-29T16:59:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | danny481 | null | danny481/Final_ChatBot | 6 | null | transformers | 15,175 | ---
tags:
- conversational
---
# ChatBot updated by datng |
daveccampbell/xlm-roberta-base-finetuned-marc-en | 5fd37eb8abda833a6f5c9135c5ff791682380abe | 2021-10-22T13:20:31.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | daveccampbell | null | daveccampbell/xlm-roberta-base-finetuned-marc-en | 6 | null | transformers | 15,176 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9199
- Mae: 0.4756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1705 | 1.0 | 235 | 0.9985 | 0.5854 |
| 0.9721 | 2.0 | 470 | 0.9199 | 0.4756 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
dbmdz/bert-base-historic-english-cased | de65619ffdc0218498a2c99774854f2273eaebc1 | 2021-11-18T21:30:42.000Z | [
"pytorch",
"jax",
"tensorboard",
"bert",
"fill-mask",
"english",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | dbmdz | null | dbmdz/bert-base-historic-english-cased | 6 | 1 | transformers | 15,177 | ---
language: english
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
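All checkpoints can be loaded with the standard `transformers` Auto classes; a minimal sketch using the fill-mask pipeline (the masked sentence is taken from the widget example above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-english-cased")
print(fill_mask("and I cannot conceive the reafon why [MASK] hath"))
```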
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
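As an illustration of how these numbers are obtained: fertility is the average number of subwords per whitespace-separated token, and the unknown portion is the share of `[UNK]` pieces. The helper below is only a sketch of that calculation (the toy sentence stands in for the NER corpora listed above):
```python
from transformers import AutoTokenizer

def vocab_stats(tokenizer, sentences):
    # fertility: subwords per whitespace token; unknown portion: share of [UNK] pieces
    n_words = n_subwords = n_unk = 0
    for sentence in sentences:
        for word in sentence.split():
            pieces = tokenizer.tokenize(word)
            n_words += 1
            n_subwords += len(pieces)
            n_unk += sum(piece == tokenizer.unk_token for piece in pieces)
    return n_subwords / n_words, n_unk / n_subwords

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")
print(vocab_stats(tokenizer, ["and I cannot conceive the reafon why he hath"]))
```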
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## English model
The English BERT model - with texts from the British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/bert-tiny-historic-multilingual-cased | 4a896955b174aae28e343620109a3dd56f978e14 | 2021-12-06T14:11:24.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | dbmdz | null | dbmdz/bert-tiny-historic-multilingual-cased | 6 | null | transformers | 15,178 | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
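A minimal usage sketch for the tiny checkpoint (the masked sentence is taken from the widget examples above):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "dbmdz/bert-tiny-historic-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

inputs = tokenizer("In [MASK] an atmosphärischen Nahrungsmitteln", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# top-5 candidates for the masked position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```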
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (different layers and hidden sizes), and report the number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
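These parameter counts follow directly from the layer count, hidden size and the 32k vocab. As a rough cross-check (our sketch, assuming the usual hidden/64 attention heads and 4x hidden intermediate size):
```python
from transformers import BertConfig, BertModel

# Instantiate each configuration with random weights and count its parameters
for name, layers, hidden in [("Tiny", 2, 128), ("Mini", 4, 256), ("Small", 4, 512), ("Medium", 8, 512)]:
    config = BertConfig(
        vocab_size=32000,
        hidden_size=hidden,
        num_hidden_layers=layers,
        num_attention_heads=hidden // 64,
        intermediate_size=4 * hidden,
    )
    n_params = sum(p.numel() for p in BertModel(config).parameters())
    print(f"hmBERT {name} ({layers}/{hidden}): {n_params / 1e6:.2f}M parameters")
```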
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from the British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbragdon/noamlm | e0b9f917093cceaf01ca68d23453da9da738aa2c | 2021-06-10T17:15:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | dbragdon | null | dbragdon/noamlm | 6 | null | transformers | 15,179 | Language model fine-tuned on the articles and speeches of Noam Chomsky. |
deepdml/wav2vec2-large-xls-r-300m-basque | cf8ac932da66e732c377bd89594adaa1fa8b7bc4 | 2022-03-23T18:33:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"eu",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"basque",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | deepdml | null | deepdml/wav2vec2-large-xls-r-300m-basque | 6 | null | transformers | 15,180 | ---
license: apache-2.0
language: eu
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- basque
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-large-xls-r-300m-basque
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: eu
metrics:
- name: Test WER
type: wer
value: 51.89
- name: Test CER
type: cer
value: 10.01
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-basque
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4276
- Wer: 0.5962
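A minimal transcription sketch (the audio path is a placeholder; recordings should be sampled at 16 kHz):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="deepdml/wav2vec2-large-xls-r-300m-basque")
# "audio.wav" stands in for any 16 kHz Basque recording
print(asr("audio.wav")["text"])
```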
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9902 | 1.29 | 400 | 2.1257 | 1.0 |
| 0.9625 | 2.59 | 800 | 0.5695 | 0.7452 |
| 0.4605 | 3.88 | 1200 | 0.4276 | 0.5962 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
deeq/dbert-eth2 | ff8435ef2266c7f17ea1006dc0b2aa3bfbfc4dc9 | 2021-08-02T09:11:55.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | deeq | null | deeq/dbert-eth2 | 6 | null | transformers | 15,181 | Entry not found |
deval/distilbert-base-uncased-finetuned-ner | 3c4252e15ec7bbc5d809f2960e33057786eac7d9 | 2021-09-14T19:10:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | deval | null | deval/distilbert-base-uncased-finetuned-ner | 6 | null | transformers | 15,182 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9276788676324229
- name: Recall
type: recall
value: 0.9384718648618414
- name: F1
type: f1
value: 0.9330441552663775
- name: Accuracy
type: accuracy
value: 0.9843836878643939
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9277
- Recall: 0.9385
- F1: 0.9330
- Accuracy: 0.9844
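A minimal inference sketch (the input sentence is arbitrary; `aggregation_strategy` merges word pieces into whole entities):
```python
from transformers import pipeline

ner = pipeline("ner", model="deval/distilbert-base-uncased-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```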
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2454 | 1.0 | 878 | 0.0692 | 0.9106 | 0.9212 | 0.9159 | 0.9809 |
| 0.0517 | 2.0 | 1756 | 0.0616 | 0.9203 | 0.9352 | 0.9277 | 0.9834 |
| 0.0314 | 3.0 | 2634 | 0.0606 | 0.9277 | 0.9385 | 0.9330 | 0.9844 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
diegozs97/finetuned-sciie-seed-0-60k | 3d1ce11d5e5ea4d1b001d076acbabd98d316e5c9 | 2021-12-10T01:41:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-0-60k | 6 | null | transformers | 15,183 | Entry not found |
diegozs97/finetuned-sciie-seed-4-1000k | f329ea990270c247cdd062de1943ff18c668b492 | 2021-12-10T01:56:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-4-1000k | 6 | null | transformers | 15,184 | Entry not found |
diegozs97/finetuned-sciie-seed-4-1500k | 286bc0dc8c6b49f24d641782d02730467f40aa37 | 2021-12-10T01:57:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-4-1500k | 6 | null | transformers | 15,185 | Entry not found |
diegozs97/finetuned-sciie-seed-4-1800k | e55cb9ff78b81c91a6ac9ced52c54d8a010db63a | 2021-12-10T01:57:46.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-4-1800k | 6 | null | transformers | 15,186 | Entry not found |
diegozs97/finetuned-sciie-seed-4-200k | d018349fd2388db6d9a3b3dbd1959e259c855955 | 2021-12-10T01:53:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-4-200k | 6 | null | transformers | 15,187 | Entry not found |
diegozs97/finetuned-sciie-seed-4-400k | 020ee437bf28354f787168aacc1b29ade9f1105f | 2021-12-10T01:53:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-4-400k | 6 | null | transformers | 15,188 | Entry not found |
diegozs97/finetuned-sciie-seed-4-700k | b3bf11b6fc1aedf13334d501263bd91a932151de | 2021-12-10T01:54:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-4-700k | 6 | null | transformers | 15,189 | Entry not found |
diwank/maptask-deberta-pair | 35136885a24803c59904be7822781b9181189347 | 2022-02-03T12:51:24.000Z | [
"pytorch",
"tf",
"deberta",
"text-classification",
"transformers",
"license:mit"
]
| text-classification | false | diwank | null | diwank/maptask-deberta-pair | 6 | null | transformers | 15,190 | ---
license: mit
---
# maptask-deberta-pair
DeBERTa-based classification model for MapTask-style dialog-act annotations
## Example
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/maptask-deberta-pair")
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label = lambda n: ["acknowledge (0), align (1), check (2), clarify (3), explain (4), instruct (5), query_w (6), query_yn (7), ready (8), reply_n (9), reply_w (10), reply_y (11)".split(', ')[i] for i in n]
convert_to_label(predictions) # reply_n (9)
``` |
dkleczek/papuGaPT2-finetuned-wierszyki | 361d5186b914bf8e4c4c8eb134eb985ee5305240 | 2021-10-23T20:37:11.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-generation | false | dkleczek | null | dkleczek/papuGaPT2-finetuned-wierszyki | 6 | null | transformers | 15,191 | ---
tags:
- generated_from_trainer
model-index:
- name: papuGaPT2-finetuned-wierszyki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# papuGaPT2-finetuned-wierszyki
This model is a fine-tuned version of [flax-community/papuGaPT2](https://huggingface.co/flax-community/papuGaPT2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8122
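A short sampling sketch (the Polish prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="dkleczek/papuGaPT2-finetuned-wierszyki")
# sample a short continuation of the prompt
print(generator("W ciemnym lesie", max_length=40, do_sample=True, top_k=50)[0]["generated_text"])
```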
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 202 | 2.8122 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
dtam/autonlp-covid-fake-news-36839110 | 66fd6dede0964ce55e7b9a3af1826ef1a8eee4b8 | 2021-11-29T05:58:03.000Z | [
"pytorch",
"albert",
"text-classification",
"unk",
"dataset:dtam/autonlp-data-covid-fake-news",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | dtam | null | dtam/autonlp-covid-fake-news-36839110 | 6 | null | transformers | 15,192 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- dtam/autonlp-data-covid-fake-news
co2_eq_emissions: 123.79523392848652
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36839110
- CO2 Emissions (in grams): 123.79523392848652
## Validation Metrics
- Loss: 0.17188367247581482
- Accuracy: 0.9714953271028037
- Precision: 0.9917948717948718
- Recall: 0.9480392156862745
- AUC: 0.9947452731092438
- F1: 0.9694235588972432
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dtam/autonlp-covid-fake-news-36839110
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dtam/autonlp-covid-fake-news-36839110", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dtam/autonlp-covid-fake-news-36839110", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
dukeme/DialoGPT-small-RDBotv1 | afc8d39096b446847f5f0c4aa860c3049b721558 | 2021-10-25T16:06:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | dukeme | null | dukeme/DialoGPT-small-RDBotv1 | 6 | null | transformers | 15,193 | ---
tags:
- conversational
---
# RDBotv1 DialoGPT Model |
ehddnr301/bert-base-ehddnr-ynat | 60295ac85ce60ca131b7c95bb8a9b853a09a0381 | 2021-08-05T06:28:30.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer"
]
| text-classification | false | ehddnr301 | null | ehddnr301/bert-base-ehddnr-ynat | 6 | null | transformers | 15,194 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model_index:
- name: bert-base-ehddnr-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metric:
name: F1
type: f1
value: 0.8720568553403009
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-ehddnr-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3587
- F1: 0.8721
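A minimal inference sketch (the Korean headline is arbitrary; the returned label ids correspond to the KLUE YNAT topic classes):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ehddnr301/bert-base-ehddnr-ynat")
print(classifier("삼성전자, 새로운 스마트폰 공개"))
```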
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4398 | 0.8548 |
| No log | 2.0 | 358 | 0.3587 | 0.8721 |
| 0.3859 | 3.0 | 537 | 0.3639 | 0.8707 |
| 0.3859 | 4.0 | 716 | 0.3592 | 0.8692 |
| 0.3859 | 5.0 | 895 | 0.3646 | 0.8717 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
eliasbe/XLMR-ENIS-finetuned-ner | 5816eb5d2c01fa17b483d95f9ed289d317599bf9 | 2021-10-05T14:03:47.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | eliasbe | null | eliasbe/XLMR-ENIS-finetuned-ner | 6 | null | transformers | 15,195 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.9002453676283949
- name: Recall
type: recall
value: 0.896
- name: F1
type: f1
value: 0.8981176669198953
- name: Accuracy
type: accuracy
value: 0.9843747637694087
widget:
- text: systurnar guðrún og monique voru einar í skóginum umkringdar víði, eik og reyni með þá ósk að sameinast fjölskyldu sinni sem fór á mai thai og í bíó paradís að sjá jim carey leika í the eternal sunshine of the spotless mind.
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0827
- Precision: 0.9002
- Recall: 0.896
- F1: 0.8981
- Accuracy: 0.9844
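A minimal tagging sketch using the opening of the widget sentence above:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="eliasbe/XLMR-ENIS-finetuned-ner", aggregation_strategy="simple")
print(ner("systurnar guðrún og monique voru einar í skóginum"))
```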
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0567 | 1.0 | 2904 | 0.1081 | 0.8486 | 0.8140 | 0.8309 | 0.9796 |
| 0.0302 | 2.0 | 5808 | 0.0906 | 0.8620 | 0.8298 | 0.8456 | 0.9818 |
| 0.0197 | 3.0 | 8712 | 0.0948 | 0.8691 | 0.8447 | 0.8567 | 0.9826 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
emrecan/bert-base-turkish-cased-snli_tr | d5ee7342154498a499fddb6b1a42b4a027b023ae | 2021-12-01T10:49:12.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
]
| zero-shot-classification | false | emrecan | null | emrecan/bert-base-turkish-cased-snli_tr | 6 | null | transformers | 15,196 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
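A minimal zero-shot sketch mirroring the widget example above (a Turkish `hypothesis_template` can optionally be passed to the call):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="emrecan/bert-base-turkish-cased-snli_tr")
print(classifier("Dolar yükselmeye devam ediyor.", candidate_labels=["ekonomi", "siyaset", "spor"]))
```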
|
enelpi/electra-base-discriminator-finetuned_squadv1_tr | f550c0ad520cf855a30e3da6780b48d9c5c81e03 | 2020-07-31T16:45:58.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | enelpi | null | enelpi/electra-base-discriminator-finetuned_squadv1_tr | 6 | null | transformers | 15,197 | Entry not found |
enelpol/czywiesz-context | d8239eb81d260499a0c2886d981f026b944719c7 | 2021-12-21T21:25:17.000Z | [
"pytorch",
"bert",
"feature-extraction",
"pl",
"dataset:enelpol/czywiesz",
"transformers"
]
| feature-extraction | false | enelpol | null | enelpol/czywiesz-context | 6 | null | transformers | 15,198 | ---
language: pl
datasets:
- enelpol/czywiesz
---
# Model description
The model was created for selective question answering in Polish, i.e. it is used to find passages containing the answers to a given question.
It is used to encode the contexts (aka passages) in the DPR bi-encoder architecture. The architecture requires two separate models.
The question part has to be encoded with the corresponding [question encoder](https://huggingface.co/enelpol/czywiesz-question).
The model was created by fine-tuning [Herbert base cased](https://huggingface.co/allegro/herbert-base-cased) on "Czywiesz" dataset.
[Czywiesz](https://clarin-pl.eu/dspace/handle/11321/39) dataset contains questions and Wikipedia articles extracted from the Polish Wikipedia.
# Usage
The easiest way to use the model is with the [Haystack framework](https://haystack.deepset.ai/overview/intro).
```python
from haystack.document_stores import FAISSDocumentStore
from haystack.retriever import DensePassageRetriever
document_store = FAISSDocumentStore(faiss_index_factory_str="Flat")
retriever = DensePassageRetriever(
document_store=document_store,
query_embedding_model="enelpol/czywiesz-question",
passage_embedding_model="enelpol/czywiesz-context"
)
# `documents` is a list of Haystack Document objects prepared beforehand
for document in documents:
    document_store.write_documents([document])
document_store.update_embeddings(retriever)
document_store.save("contexts.faiss")
``` |
erwanlc/t5-cocktails_recipe-base | 874f12daf784a3db1a98128cd8cb17854fb33400 | 2022-01-17T12:58:20.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | erwanlc | null | erwanlc/t5-cocktails_recipe-base | 6 | null | transformers | 15,199 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-cocktails_recipe-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-cocktails_recipe-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
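The card does not document the expected input format, so the ingredient-list prompt in the sketch below is only a guess; loading itself follows the standard text2text pipeline:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="erwanlc/t5-cocktails_recipe-base")
# the ingredient-list prompt is a guess; the expected input format is not documented
print(generator("gin, lime juice, sugar syrup, mint", max_length=128)[0]["generated_text"])
```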
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|