modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
arampacha/DialoGPT-medium-simpsons | 65c7c2888bc922202735dea59d8990bd45425df6 | 2021-08-04T14:41:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | arampacha | null | arampacha/DialoGPT-medium-simpsons | 5 | 1 | transformers | 16,400 | ---
tags:
- conversational
---
# DialoGPT-medium-simpsons
This is a version of [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) fine-tuned on The Simpsons scripts.
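A minimal chat sketch, assuming the usual DialoGPT generation recipe (the prompt and sampling settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arampacha/DialoGPT-medium-simpsons")
model = AutoModelForCausalLM.from_pretrained("arampacha/DialoGPT-medium-simpsons")

# Encode a user utterance followed by the end-of-sequence token.
input_ids = tokenizer.encode("Hey Homer, how are you?" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply; the sampling parameters here are illustrative.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_p=0.9,
)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
``` |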
aristotletan/sc-distilbert | 5d12da6bd594fed7634b07ee52e7afa4e63c6148 | 2021-04-19T03:04:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | aristotletan | null | aristotletan/sc-distilbert | 5 | null | transformers | 16,401 | Entry not found |
aristotletan/scim-distillbert | fb85223bb551dd5b1ab4609fdb373b8903c8b3c6 | 2021-04-19T05:32:15.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | aristotletan | null | aristotletan/scim-distillbert | 5 | null | transformers | 16,402 | Entry not found |
arjun3816/autonlp-pegas_large_samsum-15892673 | f88d5c51153179bbdb883027d83fb801fc660f37 | 2021-10-07T15:05:32.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:arjun3816/autonlp-data-pegas_large_samsum",
"transformers",
"autonlp",
"autotrain_compatible"
]
| text2text-generation | false | arjun3816 | null | arjun3816/autonlp-pegas_large_samsum-15892673 | 5 | null | transformers | 16,403 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- arjun3816/autonlp-data-pegas_large_samsum
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15892673
## Validation Metrics
- Loss: 1.3661842346191406
- Rouge1: 50.8868
- Rouge2: 26.996
- RougeL: 42.9088
- RougeLsum: 46.6748
- Gen Len: 20.716
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/arjun3816/autonlp-pegas_large_samsum-15892673
```
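Or, as a Python sketch (assuming the checkpoint loads as a standard Pegasus `summarization` pipeline; the dialogue below is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned Pegasus checkpoint for dialogue summarization.
summarizer = pipeline("summarization", model="arjun3816/autonlp-pegas_large_samsum-15892673")

dialogue = "Amanda: I baked cookies. Do you want some? Jerry: Sure! Amanda: I'll bring you some tomorrow."
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
``` |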
arnolfokam/roberta-base-swa | c7fe48353e0ecb9171ff813908219ea820154d9c | 2021-11-24T11:41:03.000Z | [
"pytorch",
"roberta",
"token-classification",
"swa",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | arnolfokam | null | arnolfokam/roberta-base-swa | 5 | null | transformers | 16,404 | ---
language:
- swa
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
---
# Model description
**roberta-base-swa** is a fine-tuned RoBERTa base model trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence at 10.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis of bias and fairness in these models may make them dangerous if deployed in a production system.
- The training data is a reduced version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**roberta-base-swa**| 80.58 | 86.79 | 83.57
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` |
arogyaGurkha/koelectra-base-discriminator-finetuned-squad_kor_v1 | 32f5d2ea5f5073c585ecbcdfc87dcd2f6dae370c | 2021-09-11T08:34:39.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:squad_kor_v1",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | arogyaGurkha | null | arogyaGurkha/koelectra-base-discriminator-finetuned-squad_kor_v1 | 5 | null | transformers | 16,405 | ---
tags:
- generated_from_trainer
datasets:
- squad_kor_v1
model-index:
- name: koelectra-base-discriminator-finetuned-squad_kor_v1
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad_kor_v1
type: squad_kor_v1
args: squad_kor_v1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-base-discriminator-finetuned-squad_kor_v1
This model is a fine-tuned version of [monologg/koelectra-base-discriminator](https://huggingface.co/monologg/koelectra-base-discriminator) on the squad_kor_v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5774 | 1.0 | 4025 | 0.5589 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
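For quick inference, a minimal sketch using the `question-answering` pipeline (the Korean question/context pair below is only an illustration):
```python
from transformers import pipeline

# Load the fine-tuned KoELECTRA extractive QA model.
qa = pipeline(
    "question-answering",
    model="arogyaGurkha/koelectra-base-discriminator-finetuned-squad_kor_v1",
)

# Illustrative Korean question and supporting passage.
result = qa(
    question="대한민국의 수도는 어디인가?",
    context="대한민국의 수도는 서울이다.",
)
print(result["answer"], result["score"])
```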
|
lmqg/bart-large-squad-default | dabb36d74d01536b8a1381027ab637566de6d9da | 2022-06-01T00:21:11.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"question generation",
"question answer generation",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | lmqg | null | lmqg/bart-large-squad-default | 5 | null | transformers | 16,406 | ---
language:
- en
tags:
- question generation
- question answer generation
license: mit
datasets:
- squad
metrics:
- bleu
- meteor
- rouge
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Example 3"
---
# BART fine-tuned on Question Generation
BART model for question generation. Please visit [our repository](https://github.com/asahi417/t5-question-generation) for more detail.
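A minimal usage sketch, following the highlight format shown in the widget examples above (no task prefix is assumed for this BART checkpoint):
```python
from transformers import pipeline

# Load the question-generation checkpoint.
pipe = pipeline("text2text-generation", model="lmqg/bart-large-squad-default")

# Wrap the answer span with <hl> tokens, as in the widget examples.
text = "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
print(pipe(text))
```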
|
lmqg/t5-base-squad-default | 9d86dfb2fc5a7b37f2dff741621e4527b1e70f0a | 2022-06-01T00:21:43.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"question generation",
"question answer generation",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | lmqg | null | lmqg/t5-base-squad-default | 5 | null | transformers | 16,407 | ---
language:
- en
tags:
- question generation
- question answer generation
license: mit
datasets:
- squad
metrics:
- bleu
- meteor
- rouge
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Example 3"
---
# T5 finetuned on Question Generation
T5 model for question generation. Please visit [our repository](https://github.com/asahi417/t5-question-generation) for more detail. |
asahi417/lmqg-t5-base-squad | 97bd8ba5a0a4b14327fa599466ede49423ced1dd | 2022-06-09T18:14:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:asahi417/qg_squad",
"transformers",
"question generation",
"license:cc-by-4.0",
"autotrain_compatible"
]
| text2text-generation | false | asahi417 | null | asahi417/lmqg-t5-base-squad | 5 | null | transformers | 16,408 | ---
language: en
tags:
- question generation
license: cc-by-4.0
datasets:
- asahi417/qg_squad
metrics:
- bleu
- meteor
- rouge
- bertscore
- moverscore
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
pipeline_tag: text2text-generation
---
# T5 BASE fine-tuned for English Question Generation
T5 BASE model fine-tuned on the English question generation dataset (SQuAD) with an extensive hyper-parameter search.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** t5-base
**Language:** English (en)
**Downstream-task:** Question Generation
**Training data:** SQuAD
**Eval data:** SQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-t5-base-squad'
pipe = pipeline("text2text-generation", model_path)
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
# highlight an answer in the paragraph to generate question
answer = 'Etta James'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
input_text = 'generate question: {}'.format(input_text) # add task specific prefix
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
```
## Evaluations
Evaluation on the test set of [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
The results are comparable with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and previous works.
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
| ------ | -------- | ------ | --------- | ---------- |
| 26.12 | 53.33 | 26.96 | 90.59 | 64.74 |
- [metric file](https://huggingface.co/asahi417/lmqg-t5-base-squad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric began to decrease.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-t5-base-squad/raw/main/trainer_config.json), and fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
asini/wav2vec_tuto | 008553bc941b7a0c864002e26110a5bb752bcccf | 2022-02-28T09:22:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | asini | null | asini/wav2vec_tuto | 5 | null | transformers | 16,409 | Entry not found |
athar/distilbert-base-uncased-finetuned-cola | 9f6b5675703e15841c8c6b824b849a03dbf5e648 | 2021-10-13T23:50:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | athar | null | athar/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,410 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5451837431775948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8508
- Matthews Correlation: 0.5452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
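As a rough sketch, these settings map onto `TrainingArguments` roughly as follows (argument names assume a recent `transformers` release; the output directory is illustrative):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # illustrative path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
)
```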
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5370 | 0.4246 |
| 0.3462 | 2.0 | 1070 | 0.5157 | 0.5183 |
| 0.2332 | 3.0 | 1605 | 0.6324 | 0.5166 |
| 0.1661 | 4.0 | 2140 | 0.7616 | 0.5370 |
| 0.1263 | 5.0 | 2675 | 0.8508 | 0.5452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.0
- Tokenizers 0.10.3
|
auychai/distilbert-base-uncased-finetuned-emotion | a8a7acdda010c3631c34b5b3fedc6f15cdcfd51f | 2021-12-24T11:58:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | auychai | null | auychai/distilbert-base-uncased-finetuned-emotion | 5 | null | transformers | 16,411 | Entry not found |
aviator-neural/bert-base-uncased-sst2 | faf75b9164ca81f6c051646ec3adf55486d551cf | 2022-01-20T12:00:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | aviator-neural | null | aviator-neural/bert-base-uncased-sst2 | 5 | null | transformers | 16,412 | Entry not found |
aviator-neural/gpt2-donald_trump | b85afa0a22bb0a27b83eb7a43418885b693b4236 | 2022-01-24T22:09:58.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | aviator-neural | null | aviator-neural/gpt2-donald_trump | 5 | null | transformers | 16,413 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-donald_trump
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-donald_trump
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 391 | 2.8721 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
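A minimal generation sketch (the prompt and sampling settings are illustrative):
```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="aviator-neural/gpt2-donald_trump")
set_seed(42)

# Generate a short continuation in the style of the fine-tuning data.
print(generator("We are going to", max_length=40, num_return_sequences=1)[0]["generated_text"])
```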
|
ayameRushia/wav2vec2-large-xls-r-300m-el | 14ca917d055814167cee709043950157d8b22934 | 2022-05-09T01:56:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"el",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | ayameRushia | null | ayameRushia/wav2vec2-large-xls-r-300m-el | 5 | null | transformers | 16,414 | ---
language:
- el
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-el
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: el
metrics:
- name: Test WER using LM
type: wer
value: 20.9
- name: Test CER using LM
type: cer
value: 6.0466
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-el
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - EL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3218
- Wer: 0.3095
## Training and evaluation data
Evaluation was conducted in a notebook; see "notebook_evaluation_wav2vec2_el.ipynb" in this repository.
Test results without LM: WER = 31.1294 %, CER = 7.9509 %
Test results with LM: WER = 20.7340 %, CER = 6.0466 %
How to use eval.py:
```
huggingface-cli login # log in to Hugging Face to get an auth token for accessing Common Voice v8
#running with LM
!python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-el --dataset mozilla-foundation/common_voice_8_0 --config el --split test
# running without LM
!python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-el --dataset mozilla-foundation/common_voice_8_0 --config el --split test --greedy
```
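For quick inference outside the eval script, a minimal sketch using the ASR pipeline (the audio path is a placeholder and should point to a 16 kHz recording; this decodes greedily, without the language model):
```python
from transformers import pipeline

# Greedy CTC decoding without the external language model.
asr = pipeline("automatic-speech-recognition", model="ayameRushia/wav2vec2-large-xls-r-300m-el")

# Placeholder path to a 16 kHz Greek speech recording.
print(asr("sample_greek_16khz.wav")["text"])
```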
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 80.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.3683 | 8.77 | 500 | 3.1280 | 1.0 |
| 1.9915 | 17.54 | 1000 | 0.6600 | 0.6444 |
| 0.6565 | 26.32 | 1500 | 0.4208 | 0.4486 |
| 0.4484 | 35.09 | 2000 | 0.3885 | 0.4006 |
| 0.3573 | 43.86 | 2500 | 0.3548 | 0.3626 |
| 0.3063 | 52.63 | 3000 | 0.3375 | 0.3430 |
| 0.2751 | 61.4 | 3500 | 0.3359 | 0.3241 |
| 0.2511 | 70.18 | 4000 | 0.3222 | 0.3108 |
| 0.2361 | 78.95 | 4500 | 0.3205 | 0.3084 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
bada/test_gpt | 15bce292b158715989fe5c79f352003d28cfbbb3 | 2021-05-21T13:52:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | bada | null | bada/test_gpt | 5 | null | transformers | 16,415 | Entry not found |
begar/xlm-roberta-base-finetuned-marc | f72e36f9da9ec87750ac8a05181bab4bf3ee795b | 2022-01-08T11:35:02.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | begar | null | begar/xlm-roberta-base-finetuned-marc | 5 | null | transformers | 16,416 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0276
- Mae: 0.5310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1582 | 1.0 | 308 | 1.0625 | 0.5221 |
| 1.0091 | 2.0 | 616 | 1.0276 | 0.5310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
benschlagman/tapas_fine_tuning | a3e35033d43987f95230c764b3812341c2b5a6be | 2022-01-28T17:02:43.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"transformers"
]
| table-question-answering | false | benschlagman | null | benschlagman/tapas_fine_tuning | 5 | null | transformers | 16,417 | Entry not found |
beomi/beep-kcbert-base-hate | e901938f58ad0ab8024f7b2a1658c11d04dad8d0 | 2021-10-23T05:53:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | beomi | null | beomi/beep-kcbert-base-hate | 5 | null | transformers | 16,418 | Entry not found |
beomi/detox-kcbert-base | 32ad7a48d09f59ca5d16c676b4d888b028b3d300 | 2021-08-20T09:10:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | beomi | null | beomi/detox-kcbert-base | 5 | null | transformers | 16,419 | Entry not found |
beomus/lotr-gpt | 27d8583f2e6abcfcff8718dedc458f1a359305d9 | 2021-09-09T07:34:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | beomus | null | beomus/lotr-gpt | 5 | null | transformers | 16,420 | Entry not found |
binwang/xlnet-base-cased | 553f39a80df4454432399c22ec250a54047acbfc | 2020-12-11T21:34:38.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
]
| text-generation | false | binwang | null | binwang/xlnet-base-cased | 5 | null | transformers | 16,421 | This model is a pre-trained **XLNet** with 12 layers.
It accompanies the paper: SBERT-WK: A Sentence Embedding Method By Dissecting BERT-based Word Models
Project Page: [SBERT-WK](https://github.com/BinWang28/SBERT-WK-Sentence-Embedding)
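A minimal sketch for extracting the per-layer hidden states that SBERT-WK dissects (the sentence is illustrative, the tokenizer is assumed to be the standard XLNet one, and the actual embedding computation lives in the project repository):
```python
import torch
from transformers import XLNetModel, XLNetTokenizer

# Standard XLNet tokenizer; the vocabulary is assumed unchanged from the base model.
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("binwang/xlnet-base-cased", output_hidden_states=True)

inputs = tokenizer("A quick example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One tensor for the embedding layer plus one per transformer layer (13 in total for 12 layers).
print(len(outputs.hidden_states), outputs.hidden_states[-1].shape)
```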
|
blackbird/alberta-base-mnli-v1 | d1ad74838d1fcaaebdd166411032d256e1d71ec9 | 2021-06-04T02:36:43.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | blackbird | null | blackbird/alberta-base-mnli-v1 | 5 | null | transformers | 16,422 | |
blinjrm/finsent | 89653bd7d2a27ad62845c6a27ab80610fb497a7e | 2021-05-20T14:28:23.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | blinjrm | null | blinjrm/finsent | 5 | null | transformers | 16,423 | Entry not found |
bochaowei/t5-small-finetuned-xsum-wei2 | 4b473ae6974a64b5a19d52828882d0b0f67b6445 | 2021-10-21T07:21:16.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | bochaowei | null | bochaowei/t5-small-finetuned-xsum-wei2 | 5 | null | transformers | 16,424 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-wei2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 29.2287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-wei2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4131
- Rouge1: 29.2287
- Rouge2: 8.4073
- Rougel: 23.0934
- Rougelsum: 23.0954
- Gen Len: 18.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.633 | 1.0 | 17004 | 2.4131 | 29.2287 | 8.4073 | 23.0934 | 23.0954 | 18.8236 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
bochrasaffar/T5_description_generation | 7d689a2e1632b57ff5c773bdc72a2ab2017c5608 | 2021-12-02T11:46:48.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | bochrasaffar | null | bochrasaffar/T5_description_generation | 5 | null | transformers | 16,425 | Entry not found |
boronbrown48/sentiment_others_v1 | 7b8717c5ad7df23bac1767619a6f97723514eb56 | 2021-11-26T09:05:14.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
]
| text-classification | false | boronbrown48 | null | boronbrown48/sentiment_others_v1 | 5 | null | transformers | 16,426 | Entry not found |
boychaboy/MNLI_bert-base-cased_3 | 28a18d4691bc7c6e85a2f53dc5f923d170137e15 | 2021-05-19T13:13:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/MNLI_bert-base-cased_3 | 5 | null | transformers | 16,427 | Entry not found |
boychaboy/SNLI_bert-base-cased | e700ed44d8bb35240a459eeb81ef1cb8eca3fe4d | 2021-05-19T13:23:58.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/SNLI_bert-base-cased | 5 | null | transformers | 16,428 | Entry not found |
boychaboy/kobias_klue-bert-base | d1cc940005c6fe287b1add127b1a2e361dd3001e | 2021-07-07T05:02:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/kobias_klue-bert-base | 5 | null | transformers | 16,429 | Entry not found |
bshlgrs/autonlp-classification_with_all_labellers-9532137 | 42bf20bfc611544ecce5bb845d2733d5efa54c90 | 2021-09-04T21:03:27.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:bshlgrs/autonlp-data-classification_with_all_labellers",
"transformers",
"autonlp"
]
| text-classification | false | bshlgrs | null | bshlgrs/autonlp-classification_with_all_labellers-9532137 | 5 | null | transformers | 16,430 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bshlgrs/autonlp-data-classification_with_all_labellers
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 9532137
## Validation Metrics
- Loss: 0.34556105732917786
- Accuracy: 0.8749890724713699
- Macro F1: 0.5243623959669343
- Micro F1: 0.8749890724713699
- Weighted F1: 0.8638030768409057
- Macro Precision: 0.5016762404900895
- Micro Precision: 0.8749890724713699
- Weighted Precision: 0.8547962562614184
- Macro Recall: 0.5529674694200845
- Micro Recall: 0.8749890724713699
- Weighted Recall: 0.8749890724713699
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bshlgrs/autonlp-classification_with_all_labellers-9532137
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bshlgrs/autonlp-classification_with_all_labellers-9532137", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
byeongal/bert-base-uncased | 8982c367fb9c0e1259862c5d5c63ceaafd0b3b97 | 2021-06-11T03:25:48.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | byeongal | null | byeongal/bert-base-uncased | 5 | null | transformers | 16,431 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased) for Teachable NLP
- This model was forked from [bert-base-uncased](https://huggingface.co/bert-base-uncased) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp).
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
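As a toy illustration of this 80/10/10 rule (ignoring special tokens and the construction of prediction labels), the per-token selection logic might look like:
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Toy sketch of BERT's masking rule: 15% of tokens are selected;
    of those, 80% become [MASK], 10% a random token, 10% stay unchanged."""
    masked = list(tokens)
    for i in range(len(masked)):
        if random.random() < mask_prob:
            r = random.random()
            if r < 0.8:
                masked[i] = "[MASK]"
            elif r < 0.9:
                masked[i] = random.choice(vocab)
            # else: leave the original token in place
    return masked
```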
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
| :--: | :---------: | :--: | :--: | :---: | :--: | :---: | :--: | :--: | :-----: |
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
byeongal/gpt2 | 04d52b2deab9822a606b7775b78f058a90430f08 | 2021-06-22T02:37:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"license:mit"
]
| text-generation | false | byeongal | null | byeongal/gpt2 | 5 | null | transformers | 16,432 | ---
language: en
tags:
- gpt2
license: mit
---
# GPT-2
- This model was forked from [gpt2](https://huggingface.co/gpt2) for fine-tuning with [Teachable NLP](https://ainize.ai/teachable-nlp).
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
cahya/wav2vec2-base-turkish-artificial | 510562070480b3e920ba7319a84641815761f475 | 2022-02-02T15:44:36.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-base-turkish-artificial | 5 | 1 | transformers | 16,433 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Base Turkish with Artificial Voices by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 57.60
---
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned [ceyda/wav2vec2-base-760](https://huggingface.co/ceyda/wav2vec2-base-760)
on the [Turkish Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.60 %
## Training
The Artificial Common Voice `train` and `validation` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
cahya/wav2vec2-base-turkish-cv7 | 055befea6cb02e67c2a2bc1f7443f8617221acca | 2022-02-02T22:05:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-base-turkish-cv7 | 5 | null | transformers | 16,434 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-turkish-cv7
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2893
- Wer: 0.2713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8647 | 14.28 | 200 | 0.2758 | 0.2568 |
| 1.3376 | 28.56 | 400 | 0.2754 | 0.2722 |
| 1.1975 | 42.84 | 600 | 0.2929 | 0.2901 |
| 1.1024 | 57.14 | 800 | 0.2904 | 0.2928 |
| 1.0257 | 71.42 | 1000 | 0.2915 | 0.2823 |
| 0.9628 | 85.7 | 1200 | 0.2936 | 0.2749 |
| 0.9109 | 99.98 | 1400 | 0.2893 | 0.2713 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cahya/wav2vec2-base-turkish-cv8 | 83453d2e9d1471f1ea5ecbd39ba69f53605d612f | 2022-02-04T14:30:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-base-turkish-cv8 | 5 | 0 | transformers | 16,435 | ---
language:
- tr
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-turkish-cv8
This model is a fine-tuned version of a local checkpoint (`./checkpoint-1000`) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3282
- Wer: 0.2836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
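A rough sketch of how these settings translate into `TrainingArguments` (argument names assume a recent `transformers` release; the output directory is illustrative):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-turkish-cv8",  # illustrative path
    learning_rate=3e-4,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 192
    num_train_epochs=100,
    lr_scheduler_type="linear",
    warmup_steps=100,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```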
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0671 | 2.04 | 200 | 0.3079 | 0.2752 |
| 0.6433 | 4.08 | 400 | 0.2728 | 0.2848 |
| 0.5687 | 6.12 | 600 | 0.2882 | 0.3036 |
| 0.5355 | 8.16 | 800 | 0.2778 | 0.2920 |
| 0.5116 | 10.2 | 1000 | 0.2906 | 0.3014 |
| 0.5313 | 9.16 | 1200 | 0.2984 | 0.3273 |
| 0.4996 | 10.69 | 1400 | 0.3170 | 0.3344 |
| 0.4845 | 12.21 | 1600 | 0.3202 | 0.3634 |
| 0.5092 | 13.74 | 1800 | 0.3167 | 0.3373 |
| 0.4777 | 15.27 | 2000 | 0.3292 | 0.3386 |
| 0.4651 | 16.79 | 2200 | 0.3070 | 0.3427 |
| 0.461 | 18.32 | 2400 | 0.3149 | 0.3561 |
| 0.4481 | 19.85 | 2600 | 0.3292 | 0.3441 |
| 0.4479 | 21.37 | 2800 | 0.3142 | 0.3209 |
| 0.4305 | 22.9 | 3000 | 0.3525 | 0.3547 |
| 0.4254 | 24.43 | 3200 | 0.3414 | 0.3400 |
| 0.4066 | 25.95 | 3400 | 0.3118 | 0.3207 |
| 0.4043 | 27.48 | 3600 | 0.3418 | 0.3483 |
| 0.3985 | 29.01 | 3800 | 0.3254 | 0.3166 |
| 0.3982 | 30.53 | 4000 | 0.3306 | 0.3453 |
| 0.3929 | 32.06 | 4200 | 0.3262 | 0.3229 |
| 0.378 | 33.59 | 4400 | 0.3546 | 0.3336 |
| 0.4062 | 35.11 | 4600 | 0.3174 | 0.3457 |
| 0.3648 | 36.64 | 4800 | 0.3377 | 0.3357 |
| 0.3609 | 38.17 | 5000 | 0.3346 | 0.3520 |
| 0.3483 | 39.69 | 5200 | 0.3350 | 0.3526 |
| 0.3548 | 41.22 | 5400 | 0.3330 | 0.3406 |
| 0.3446 | 42.75 | 5600 | 0.3398 | 0.3372 |
| 0.3346 | 44.27 | 5800 | 0.3449 | 0.3288 |
| 0.3309 | 45.8 | 6000 | 0.3320 | 0.3144 |
| 0.326 | 47.33 | 6200 | 0.3400 | 0.3279 |
| 0.3189 | 48.85 | 6400 | 0.3400 | 0.3150 |
| 0.3165 | 50.38 | 6600 | 0.3359 | 0.2995 |
| 0.3132 | 51.91 | 6800 | 0.3343 | 0.3096 |
| 0.3092 | 53.44 | 7000 | 0.3224 | 0.3029 |
| 0.2995 | 54.96 | 7200 | 0.3205 | 0.2985 |
| 0.304 | 56.49 | 7400 | 0.3523 | 0.3034 |
| 0.2952 | 58.02 | 7600 | 0.3289 | 0.2934 |
| 0.2875 | 59.54 | 7800 | 0.3350 | 0.3008 |
| 0.2868 | 61.07 | 8000 | 0.3537 | 0.3227 |
| 0.2875 | 62.6 | 8200 | 0.3389 | 0.2970 |
| 0.2778 | 64.12 | 8400 | 0.3370 | 0.2960 |
| 0.2706 | 65.65 | 8600 | 0.3250 | 0.2802 |
| 0.2669 | 67.18 | 8800 | 0.3351 | 0.2903 |
| 0.2615 | 68.7 | 9000 | 0.3382 | 0.2989 |
| 0.2563 | 70.23 | 9200 | 0.3312 | 0.2975 |
| 0.2546 | 71.76 | 9400 | 0.3212 | 0.3003 |
| 0.2482 | 73.28 | 9600 | 0.3337 | 0.3091 |
| 0.2504 | 74.81 | 9800 | 0.3308 | 0.3110 |
| 0.2456 | 76.34 | 10000 | 0.3157 | 0.3118 |
| 0.2363 | 77.86 | 10200 | 0.3251 | 0.3144 |
| 0.2319 | 79.39 | 10400 | 0.3253 | 0.3038 |
| 0.2266 | 80.92 | 10600 | 0.3374 | 0.3038 |
| 0.2279 | 82.44 | 10800 | 0.3268 | 0.2964 |
| 0.2231 | 83.97 | 11000 | 0.3278 | 0.2950 |
| 0.2185 | 85.5 | 11200 | 0.3462 | 0.2981 |
| 0.2245 | 87.02 | 11400 | 0.3311 | 0.2895 |
| 0.223 | 88.55 | 11600 | 0.3325 | 0.2877 |
| 0.2121 | 90.08 | 11800 | 0.3337 | 0.2828 |
| 0.2126 | 91.6 | 12000 | 0.3325 | 0.2808 |
| 0.2027 | 93.13 | 12200 | 0.3277 | 0.2820 |
| 0.2058 | 94.66 | 12400 | 0.3308 | 0.2827 |
| 0.1991 | 96.18 | 12600 | 0.3279 | 0.2820 |
| 0.1991 | 97.71 | 12800 | 0.3300 | 0.2822 |
| 0.1986 | 99.24 | 13000 | 0.3285 | 0.2835 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
caioamb/bert-base-uncased-finetuned-md-simpletransformers | 1330c29e3df481ec9b3ca805e78497f73882a828 | 2022-01-12T01:02:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | caioamb | null | caioamb/bert-base-uncased-finetuned-md-simpletransformers | 5 | null | transformers | 16,436 | Entry not found |
carlosserquen/abcd | 60209d685b57c09b53d07a623c548563e42522e8 | 2021-12-06T21:58:04.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | carlosserquen | null | carlosserquen/abcd | 5 | null | transformers | 16,437 | Entry not found |
castorini/afriberta_base | 95b703f498c9cee56be5f5bbc5140022dc86099e | 2022-06-15T18:23:04.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"om",
"am",
"rw",
"rn",
"ha",
"ig",
"pcm",
"so",
"sw",
"ti",
"yo",
"multilingual",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | castorini | null | castorini/afriberta_base | 5 | null | transformers | 16,438 |
---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
---
# afriberta_base
## Model description
AfriBERTa base is a pretrained multilingual language model with around 111 million parameters.
The model has 8 layers, 6 attention heads, 768 hidden units and 3072 feed forward size.
The model was pretrained on 11 African languages namely - Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performances on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_base")
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_base")
# we have to manually set the model max length because it is an imported sentencepiece model, which huggingface does not properly support right now
>>> tokenizer.model_max_length = 512
```
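Since the checkpoint is a masked language model, it can also be queried directly with the fill-mask pipeline. The sketch below uses an arbitrary Swahili sentence and builds the mask from `tokenizer.mask_token` rather than hard-coding it; the manual `model_max_length` workaround from above still applies.

```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_base")
>>> tokenizer.model_max_length = 512
>>> model = AutoModelForMaskedLM.from_pretrained("castorini/afriberta_base")
>>> fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
>>> # arbitrary Swahili example sentence; the mask token comes from the tokenizer itself
>>> fill_mask(f"Rais wa {tokenizer.mask_token} amesema atafika kesho.")
```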
#### Limitations and bias
- This model is possibly limited by its training data, which were mostly obtained from news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper]() or [repository](https://github.com/keleog/afriberta)
### BibTeX entry and citation info
```
@inproceedings{ogueji-etal-2021-small,
title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
author = "Ogueji, Kelechi and
Zhu, Yuxin and
Lin, Jimmy",
booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrl-1.11",
pages = "116--126",
}
```
|
castorini/duot5-base-msmarco-10k | 5469cde99ac1fda0c8a1c579bf6bbe18897035b9 | 2021-12-01T20:43:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | castorini | null | castorini/duot5-base-msmarco-10k | 5 | null | transformers | 16,439 | Entry not found |
cbrew475/mpnet-metric | adc3913a06df4f9b83b98a98bc6bef49019e7630 | 2022-02-10T00:27:32.000Z | [
"pytorch",
"mpnet",
"text-classification",
"transformers"
]
| text-classification | false | cbrew475 | null | cbrew475/mpnet-metric | 5 | null | transformers | 16,440 | Entry not found |
celinelee/answer-extraction | 5b489a72ec8a99d916173bd83891d1e25781c499 | 2022-02-22T16:55:40.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | celinelee | null | celinelee/answer-extraction | 5 | null | transformers | 16,441 | Entry not found |
cemdenizsel/51k-finetuned-bert-model | fa0619a5af3f20cbae8da9489af3b5cfd94c6a80 | 2021-06-04T15:20:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cemdenizsel | null | cemdenizsel/51k-finetuned-bert-model | 5 | null | transformers | 16,442 | Entry not found |
cemdenizsel/51k-pretrained-bert-model | 49ae02c7d0d53aba19c7ad9880e13610120cadd5 | 2021-06-04T14:11:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cemdenizsel | null | cemdenizsel/51k-pretrained-bert-model | 5 | null | transformers | 16,443 | Entry not found |
cestwc/roberta-base-unigram-ternary | eb01504d683439056d475b6eb03d4d04235f5378 | 2022-01-01T09:05:18.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | cestwc | null | cestwc/roberta-base-unigram-ternary | 5 | null | transformers | 16,444 | Entry not found |
cfisicaro/distilbert-base-uncased-finetuned-ner | 219f031bb900f8d01386c5285ec15bb5c41e30dd | 2021-09-22T10:25:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | cfisicaro | null | cfisicaro/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,445 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9281908990011098
- name: Recall
type: recall
value: 0.9355632621098557
- name: F1
type: f1
value: 0.9318624993035824
- name: Accuracy
type: accuracy
value: 0.9837641190207635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0629
- Precision: 0.9282
- Recall: 0.9356
- F1: 0.9319
- Accuracy: 0.9838
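For inference, the fine-tuned checkpoint can be used with the token-classification pipeline. This is a minimal sketch: the example sentence is made up, and `aggregation_strategy="simple"` merges word pieces into whole entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cfisicaro/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

# Made-up example sentence.
print(ner("Hugging Face is based in New York City."))
```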
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2406 | 1.0 | 878 | 0.0721 | 0.9072 | 0.9172 | 0.9122 | 0.9801 |
| 0.0529 | 2.0 | 1756 | 0.0637 | 0.9166 | 0.9318 | 0.9241 | 0.9826 |
| 0.0315 | 3.0 | 2634 | 0.0629 | 0.9282 | 0.9356 | 0.9319 | 0.9838 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
chan030609/DialoGPT-small-JAB | ee9384061fcc7406b235706bb243cb99ecf94b9a | 2022-02-10T03:27:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | chan030609 | null | chan030609/DialoGPT-small-JAB | 5 | null | transformers | 16,446 | ---
tags:
- conversational
---
# DialoGPT Small JAB |
chanaa/distilbert-base-uncased-finetuned-ner | 55f4cb1853fc13812b3f23886900ff6ce07cb6a2 | 2022-02-23T16:06:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | chanaa | null | chanaa/distilbert-base-uncased-finetuned-ner | 5 | null | transformers | 16,447 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9244263018534863
- name: Recall
type: recall
value: 0.9373531714956931
- name: F1
type: f1
value: 0.930844859190135
- name: Accuracy
type: accuracy
value: 0.9836211415953103
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0609
- Precision: 0.9244
- Recall: 0.9374
- F1: 0.9308
- Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2412 | 1.0 | 878 | 0.0732 | 0.9116 | 0.9216 | 0.9166 | 0.9802 |
| 0.0567 | 2.0 | 1756 | 0.0601 | 0.9164 | 0.9331 | 0.9247 | 0.9826 |
| 0.0301 | 3.0 | 2634 | 0.0609 | 0.9244 | 0.9374 | 0.9308 | 0.9836 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
chinhon/pegasus-newsroom-malay_headlines | 03c6564f16d3ce1feb6064eb73fd5d6c6448ef2b | 2021-11-03T00:17:13.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | chinhon | null | chinhon/pegasus-newsroom-malay_headlines | 5 | null | transformers | 16,448 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-newsroom-malay_headlines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-malay_headlines
This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6603
- Rouge1: 42.6667
- Rouge2: 22.8739
- Rougel: 38.6684
- Rougelsum: 38.6928
- Gen Len: 34.7995
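For headline generation, the fine-tuned checkpoint can be used with the summarization pipeline. A minimal sketch follows; the input text is a short placeholder, whereas real inputs should be full Malay news articles like the training data.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="chinhon/pegasus-newsroom-malay_headlines")

# Placeholder snippet; substitute a full Malay news article in practice.
article = "Kerajaan hari ini mengumumkan pakej bantuan baharu untuk rakyat yang terjejas akibat banjir."
print(summarizer(article, max_length=32, min_length=5, do_sample=False))
```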
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9713 | 1.0 | 15310 | 1.8121 | 41.1469 | 21.5262 | 37.3081 | 37.3377 | 35.0939 |
| 1.7917 | 2.0 | 30620 | 1.6913 | 42.4027 | 22.6089 | 38.4471 | 38.4699 | 34.8149 |
| 1.7271 | 3.0 | 45930 | 1.6603 | 42.6667 | 22.8739 | 38.6684 | 38.6928 | 34.7995 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
chrommium/helper-model | ca1924df1dcc74ce53677321594a032a2c86e063 | 2021-11-20T21:50:03.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | chrommium | null | chrommium/helper-model | 5 | null | transformers | 16,449 | Entry not found |
chrommium/rubert-base-cased-sentence-finetuned-sent_in_ru | 7af41d1cc1e6bce8135cb182ff7405a72667de94 | 2021-10-01T22:53:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | chrommium | null | chrommium/rubert-base-cased-sentence-finetuned-sent_in_ru | 5 | null | transformers | 16,450 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-base-cased-sentence-finetuned-sent_in_ru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-base-cased-sentence-finetuned-sent_in_ru
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3503
- Accuracy: 0.6884
- F1: 0.6875
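For inference, the checkpoint can be used with the text-classification pipeline. This is a minimal sketch with an arbitrary Russian sentence; note that the label names in the output depend on this checkpoint's config and may appear as generic `LABEL_x` ids.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="chrommium/rubert-base-cased-sentence-finetuned-sent_in_ru",
)

# Arbitrary Russian example sentence ("The company posted excellent results this quarter.").
print(classifier("Компания показала отличные результаты в этом квартале."))
```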
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 441 | 0.7397 | 0.6630 | 0.6530 |
| 0.771 | 2.0 | 882 | 0.7143 | 0.6909 | 0.6905 |
| 0.5449 | 3.0 | 1323 | 0.8385 | 0.6897 | 0.6870 |
| 0.3795 | 4.0 | 1764 | 0.8851 | 0.6939 | 0.6914 |
| 0.3059 | 5.0 | 2205 | 1.0728 | 0.6933 | 0.6953 |
| 0.2673 | 6.0 | 2646 | 1.0673 | 0.7060 | 0.7020 |
| 0.2358 | 7.0 | 3087 | 1.5200 | 0.6830 | 0.6829 |
| 0.2069 | 8.0 | 3528 | 1.3439 | 0.7024 | 0.7016 |
| 0.2069 | 9.0 | 3969 | 1.3545 | 0.6830 | 0.6833 |
| 0.1724 | 10.0 | 4410 | 1.5591 | 0.6927 | 0.6902 |
| 0.1525 | 11.0 | 4851 | 1.6425 | 0.6818 | 0.6823 |
| 0.131 | 12.0 | 5292 | 1.8999 | 0.6836 | 0.6775 |
| 0.1253 | 13.0 | 5733 | 1.6959 | 0.6884 | 0.6877 |
| 0.1132 | 14.0 | 6174 | 1.9561 | 0.6776 | 0.6803 |
| 0.0951 | 15.0 | 6615 | 2.0356 | 0.6763 | 0.6754 |
| 0.1009 | 16.0 | 7056 | 1.7995 | 0.6842 | 0.6741 |
| 0.1009 | 17.0 | 7497 | 2.0638 | 0.6884 | 0.6811 |
| 0.0817 | 18.0 | 7938 | 2.1686 | 0.6884 | 0.6859 |
| 0.0691 | 19.0 | 8379 | 2.0874 | 0.6878 | 0.6889 |
| 0.0656 | 20.0 | 8820 | 2.1772 | 0.6854 | 0.6817 |
| 0.0652 | 21.0 | 9261 | 2.4018 | 0.6872 | 0.6896 |
| 0.0608 | 22.0 | 9702 | 2.2074 | 0.6770 | 0.6656 |
| 0.0677 | 23.0 | 10143 | 2.2101 | 0.6848 | 0.6793 |
| 0.0559 | 24.0 | 10584 | 2.2920 | 0.6848 | 0.6835 |
| 0.0524 | 25.0 | 11025 | 2.3503 | 0.6884 | 0.6875 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
claudelkros/bert-base-french | a9da8638058d8afcf9517b6da6e2f158de0c02a5 | 2020-09-15T00:05:37.000Z | [
"pytorch",
"transformers"
]
| null | false | claudelkros | null | claudelkros/bert-base-french | 5 | null | transformers | 16,451 | Entry not found |
claudio75/xlm-roberta-base-finetuned-marc | 0a1fe2722f555b39b5ea13833abd81d1f81d10ea | 2021-10-16T11:10:29.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | claudio75 | null | claudio75/xlm-roberta-base-finetuned-marc | 5 | null | transformers | 16,452 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9611
- Mae: 0.4749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0431 | 1.0 | 860 | 0.9819 | 0.4985 |
| 0.9079 | 2.0 | 1720 | 0.9611 | 0.4749 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
coldfir3/xlm-roberta-base-finetuned-panx-all | 09598465183bf83c88d8308127d2d04218de65dd | 2022-01-02T19:41:32.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | coldfir3 | null | coldfir3/xlm-roberta-base-finetuned-panx-all | 5 | null | transformers | 16,453 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1759
- F1: 0.8527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3038 | 1.0 | 835 | 0.1922 | 0.8065 |
| 0.1559 | 2.0 | 1670 | 0.1714 | 0.8422 |
| 0.1002 | 3.0 | 2505 | 0.1759 | 0.8527 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
comodoro/wav2vec2-xls-r-300m-cs | 93aff004b5a02878e9eacd29265a22646e3f1727 | 2022-03-23T18:32:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"cs",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | comodoro | null | comodoro/wav2vec2-xls-r-300m-cs | 5 | null | transformers | 16,454 | ---
language:
- cs
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- xlsr-fine-tuning-week
datasets:
- common_voice
model-index:
- name: Czech comodoro Wav2Vec2 XLSR 300M CV6.1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: cs
metrics:
- name: Test WER
type: wer
value: 22.2
- name: Test CER
type: cer
value: 5.1
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: cs
metrics:
- name: Test WER
type: wer
value: 66.78
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: cs
metrics:
- name: Test WER
type: wer
value: 57.52
---
# Wav2Vec2-Large-XLSR-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice 6.1
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\/\"\“\„\%\”\�\–\'\`\«\»\—\’\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 22.20 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
# TODO The script used for training can be found [here](...) |
congcongwang/t5-base-fine-tuned-wnut-2020-task3 | e29a4b6721c5ecb5bb0a84a9ce90d8d196d80cb7 | 2021-06-23T12:06:19.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | congcongwang | null | congcongwang/t5-base-fine-tuned-wnut-2020-task3 | 5 | null | transformers | 16,455 | Entry not found |
crabz/slovakbert-ner | 2e62fdd3f4b3a3be9139a357ac75073587a90c25 | 2021-12-02T12:51:13.000Z | [
"pytorch",
"roberta",
"token-classification",
"sk",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | crabz | null | crabz/slovakbert-ner | 5 | null | transformers | 16,456 | ---
license: mit
language:
- sk
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
inference: false
widget:
- text: "Zuzana Čaputová sa narodila 21. júna 1973 v Bratislave."
example_title: "Named Entity Recognition"
model-index:
- name: slovakbert-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: sk
metrics:
- name: Precision
type: precision
value: 0.9327115256495669
- name: Recall
type: recall
value: 0.9470124013528749
- name: F1
type: f1
value: 0.9398075632132469
- name: Accuracy
type: accuracy
value: 0.9785228256835333
---
# Named Entity Recognition based on SlovakBERT
This model is a fine-tuned version of [gerulata/slovakbert](https://huggingface.co/gerulata/slovakbert) on the Slovak wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1600
- Precision: 0.9327
- Recall: 0.9470
- F1: 0.9398
- Accuracy: 0.9785
## Intended uses & limitations
Supported classes: LOCATION, PERSON, ORGANIZATION
```
from transformers import pipeline
ner_pipeline = pipeline(task='ner', model='crabz/slovakbert-ner')
input_sentence = "Minister financií a líder mandátovo najsilnejšieho hnutia OĽaNO Igor Matovič upozorňuje, že následky tretej vlny budú na Slovensku veľmi veľké."
classifications = ner_pipeline(input_sentence)
```
with `displaCy`:
```
import spacy
from spacy import displacy
ner_map = {0: '0', 1: 'B-OSOBA', 2: 'I-OSOBA', 3: 'B-ORGANIZÁCIA', 4: 'I-ORGANIZÁCIA', 5: 'B-LOKALITA', 6: 'I-LOKALITA'}
entities = []
for i in range(len(classifications)):
if classifications[i]['entity'] != 0:
if ner_map[classifications[i]['entity']][0] == 'B':
j = i + 1
while j < len(classifications) and ner_map[classifications[j]['entity']][0] == 'I':
j += 1
entities.append((ner_map[classifications[i]['entity']].split('-')[1], classifications[i]['start'],
classifications[j - 1]['end']))
nlp = spacy.blank("en") # it should work with any language
doc = nlp(input_sentence)
ents = []
for ee in entities:
ents.append(doc.char_span(ee[1], ee[2], ee[0]))
doc.ents = ents
options = {"ents": ["OSOBA", "ORGANIZÁCIA", "LOKALITA"],
"colors": {"OSOBA": "lightblue", "ORGANIZÁCIA": "lightcoral", "LOKALITA": "lightgreen"}}
displacy_html = displacy.render(doc, style="ent", options=options)
```
<div class="entities" style="line-height: 2.5; direction: ltr">Minister financií a líder mandátovo najsilnejšieho hnutia
<mark class="entity" style="background: lightcoral; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
OĽaNO
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">ORGANIZÁCIA</span>
</mark>
<mark class="entity" style="background: lightblue; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Igor Matovič
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">OSOBA</span>
</mark>
upozorňuje, že následky tretej vlny budú na
<mark class="entity" style="background: lightgreen; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Slovensku
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">LOKALITA</span>
</mark>
veľmi veľké.</div>
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2342 | 1.0 | 625 | 0.1233 | 0.8891 | 0.9076 | 0.8982 | 0.9667 |
| 0.1114 | 2.0 | 1250 | 0.1079 | 0.9118 | 0.9269 | 0.9193 | 0.9725 |
| 0.0817 | 3.0 | 1875 | 0.1093 | 0.9173 | 0.9315 | 0.9243 | 0.9747 |
| 0.0438 | 4.0 | 2500 | 0.1076 | 0.9188 | 0.9353 | 0.9270 | 0.9743 |
| 0.028 | 5.0 | 3125 | 0.1230 | 0.9143 | 0.9387 | 0.9264 | 0.9744 |
| 0.0256 | 6.0 | 3750 | 0.1204 | 0.9246 | 0.9423 | 0.9334 | 0.9765 |
| 0.018 | 7.0 | 4375 | 0.1332 | 0.9292 | 0.9416 | 0.9353 | 0.9770 |
| 0.0107 | 8.0 | 5000 | 0.1339 | 0.9280 | 0.9427 | 0.9353 | 0.9769 |
| 0.0079 | 9.0 | 5625 | 0.1368 | 0.9326 | 0.9442 | 0.9383 | 0.9785 |
| 0.0065 | 10.0 | 6250 | 0.1490 | 0.9284 | 0.9445 | 0.9364 | 0.9772 |
| 0.0061 | 11.0 | 6875 | 0.1566 | 0.9328 | 0.9433 | 0.9380 | 0.9778 |
| 0.0031 | 12.0 | 7500 | 0.1555 | 0.9339 | 0.9473 | 0.9406 | 0.9787 |
| 0.0024 | 13.0 | 8125 | 0.1548 | 0.9349 | 0.9462 | 0.9405 | 0.9787 |
| 0.0015 | 14.0 | 8750 | 0.1562 | 0.9330 | 0.9469 | 0.9399 | 0.9788 |
| 0.0013 | 15.0 | 9375 | 0.1600 | 0.9327 | 0.9470 | 0.9398 | 0.9785 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.15.1
- Tokenizers 0.10.3
|
csalamea/roberta-base-bne-finetuned-amazon_reviews_multi | 276cfc215938f2e9a57e8a88e86cc3f34a3b0057 | 2021-09-16T01:30:02.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | csalamea | null | csalamea/roberta-base-bne-finetuned-amazon_reviews_multi | 5 | null | transformers | 16,457 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2303
- Accuracy: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1942 | 1.0 | 1250 | 0.1751 | 0.932 |
| 0.0935 | 2.0 | 2500 | 0.2303 | 0.9325 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
csikasote/wav2vec2-large-xlsr-bemba | b904cda30c91430ee9f720562bb888fd76cbe1fe | 2022-04-14T07:20:37.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"bem",
"dataset:BembaSpeech",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | csikasote | null | csikasote/wav2vec2-large-xlsr-bemba | 5 | null | transformers | 16,458 | ---
language: bem
datasets:
- BembaSpeech
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Bemba by Claytone Sikasote
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: BembaSpeech bem
type: bembaspeech
args: bem
metrics:
- name: Test WER
type: wer
value: 42.17
---
# Wav2Vec2-Large-XLSR-53-Bemba
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Bemba language of Zambia using the [BembaSpeech](https://csikasote.github.io/BembaSpeech) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\t")["test"] # Adapt the path to test.csv
processor = Wav2Vec2Processor.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model = Wav2Vec2ForCTC.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
# BembaSpeech is sampled at 16kHz, so you do not need to resample
#resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array.squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Bemba test data of BembaSpeech.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\t")["test"]
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model = Wav2Vec2ForCTC.from_pretrained("csikasote/wav2vec2-large-xlsr-bemba")
model.to("cuda")
chars_to_ignore_regex = '[\,\_\?\.\!\;\:\"\“]'
#resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array.squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.17 %
## Training
The BembaSpeech `train`, `dev` and `test` datasets were used for training, development and evaluation respectively. The script used for evaluating the model on the test dataset can be found [here](https://colab.research.google.com/drive/1aplFHfaXE68HGDwBYV2KqUWPasrk7bXv?usp=sharing).
|
cstorm125/marianmt-zh_cn-th | c267114fa797e61da53ec00e5f3dbc6d70660b0e | 2021-06-23T14:19:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"torch==1.8.0",
"autotrain_compatible"
]
| translation | false | cstorm125 | null | cstorm125/marianmt-zh_cn-th | 5 | null | transformers | 16,459 | ---
tags:
- translation
- torch==1.8.0
widget:
- text: "Inference Unavailable"
---
### marianmt-zh_cn-th
* source languages: zh_cn
* target languages: th
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set translations:
* test set scores:
## Training
Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-zh_cn-th](https://wandb.ai/cstorm125/marianmt-zh_cn-th).
```
export WANDB_PROJECT=marianmt-zh_cn-th
python train_model.py --input_fname ../data/v1/Train.csv \
    --output_dir ../models/marianmt-zh_cn-th \
    --source_lang zh --target_lang th \
    --metric_tokenize th_syllable --fp16
```
## Usage
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("cstorm125/marianmt-zh_cn-th")
model = AutoModelForSeq2SeqLM.from_pretrained("cstorm125/marianmt-zh_cn-th").cpu()
src_text = [
'我爱你',
'我想吃米饭',
]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
> ['ผมรักคุณนะ', 'ฉันอยากกินข้าว']
```
## Requirements
```
transformers==4.6.0
torch==1.8.0
``` |
cuongngm/layoutlm-bill | ef55d565f5c7771ccf4c878c9f63cc5b237a95f7 | 2022-02-17T09:45:03.000Z | [
"pytorch",
"layoutlmv2",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | cuongngm | null | cuongngm/layoutlm-bill | 5 | null | transformers | 16,460 | Fine-tuning the LayoutLMv2 model on a Vietnamese bill dataset
```python
from transformers import LayoutLMv2ForTokenClassification
model = LayoutLMv2ForTokenClassification.from_pretrained('cuongngm/layoutlm-bill', num_labels=len(labels))
```
```python
labels = [
    'price',
    'storename',
    'total_cost',
    'phone',
    'address',
    'unitprice',
    'item',
    'subitem',
    'other',
    'time',
    'unit',
    'total refunds',
    'total_qty',
    'seller',
    'total_received',
]
```
|
cuongtran/BARTTextSummarization | 4339654eea2152fefa61b5e6b10a17686d04fa43 | 2021-10-13T03:39:16.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | cuongtran | null | cuongtran/BARTTextSummarization | 5 | null | transformers | 16,461 | Entry not found |
damien-ir/kosentelectra-discriminator-v3 | bdeffcce06c00f22ef2648852c53f3b0f6714c91 | 2020-09-29T07:49:37.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| null | false | damien-ir | null | damien-ir/kosentelectra-discriminator-v3 | 5 | null | transformers | 16,462 | Entry not found |
damien-ir/kosentelectra-discriminator-v4 | 32b09a714c960216316ef2004ce3ea5afa781435 | 2020-09-29T07:53:29.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| null | false | damien-ir | null | damien-ir/kosentelectra-discriminator-v4 | 5 | null | transformers | 16,463 | Entry not found |
damien-ir/kosentelectra-generator-v1 | 31a60da3aee67a7388c4f9a610e4063958aff630 | 2020-09-29T07:42:45.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | damien-ir | null | damien-ir/kosentelectra-generator-v1 | 5 | null | transformers | 16,464 | Entry not found |
damien-ir/kosentelectra-generator-v2 | 210244856ec95dbc3382d4fd7adcb67cee24a80c | 2020-09-15T09:14:59.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | damien-ir | null | damien-ir/kosentelectra-generator-v2 | 5 | null | transformers | 16,465 | Entry not found |
damien-ir/kosentelectra-generator-v5 | bbe09b1ec8e5b5b0b2bac891d72d5c0579ddfd5d | 2020-09-29T07:57:32.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | damien-ir | null | damien-ir/kosentelectra-generator-v5 | 5 | null | transformers | 16,466 | Entry not found |
damlab/HIV_V3_bodysite | b61b611cfb2f6d6622792079e7e55d0531d04fd6 | 2022-02-24T19:18:26.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:damlab/HIV_V3_bodysite",
"transformers"
]
| text-classification | false | damlab | null | damlab/HIV_V3_bodysite | 5 | null | transformers | 16,467 | ---
license: mit
widget:
- text: "T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
example_title: "V3 Macrophage"
- text: 'C T R P N N N T R K S I H I G P G R A F Y T T G Q I I G D I R Q A Y C'
example_title: "V3 T-cell"
datasets:
- damlab/HIV_V3_bodysite
metrics:
- accuracy
---
# Model Card for [HIV_V3_bodysite]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT-Bodysite-Identification model was trained as a refinement of the HIV-BERT model (insert link) and serves to better predict the location that an HIV V3 loop sample was derived from. HIV-BERT is a model refined from the ProtBert-BFD model (https://huggingface.co/Rostlab/prot_bert_bfd) to better fulfill HIV-centric tasks. This model was then trained using HIV V3 sequences from the Los Alamos HIV Sequence Database (https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html), allowing even more precise prediction of body site location than the HIV-BERT model can provide.
## Model Description
The HIV-BERT-Bodysite-Identification model is intended to predict the location as to where an HIV sequence was most likely derived from. Because HIV infects immune cells, it uses these as a means of rapidly spreading throughout the body. Thus, body site identification can help determine where exactly these HIV particles ultimately end up. This would be helpful when attempting to study HIV treatment strategies. When provided with an HIV genomic sequence, the HIV-BERT-Bodysite-Identification model can predict which tissue it was derived from.
## Intended Uses & Limitations
This tool can be used as a predictor of which body site an HIV sample was derived from based on its genomic sequence. It should not be considered a clinical diagnostic tool.
This tool was trained using the Los Alamos HIV sequence dataset (https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D. Currently, there was no effort made to balance the performance across these classes. As such, one should consider refinement with additional sequences to perform well on non-B sequences.
## How to use
This model is able to predict the likely bodysite from a V3 sequence.
This may be used for surveillance of cells that are emerging from latent reservoirs.
Remember, a sequence can come from multiple sites; they are not mutually exclusive.
```python
from transformers import pipeline
predictor = pipeline("text-classification", model="damlab/HIV_V3_bodysite")
predictor(f"C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C")
[
[
{
"label": "periphery-tcell",
"score": 0.29097115993499756
},
{
"label": "periphery-monocyte",
"score": 0.014322502538561821
},
{
"label": "CNS",
"score": 0.06870711594820023
},
{
"label": "breast-milk",
"score": 0.002785981632769108
},
{
"label": "female-genitals",
"score": 0.024997007101774216
},
{
"label": "male-genitals",
"score": 0.01040483545511961
},
{
"label": "gastric",
"score": 0.06872137635946274
},
{
"label": "lung",
"score": 0.04432062804698944
},
{
"label": "organ",
"score": 0.47476938366889954
}
]
]
```
## Training Data
This model was trained on the damlab/HIV_V3_bodysite dataset using the 0th fold. The dataset consists of 5510 sequences (approximately 35 tokens each) extracted from the Los Alamos HIV Sequence database.
## Training Procedure
### Preprocessing
As with the rostlab/Prot-bert-bfd model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
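A small sketch of that preprocessing step (our own illustration of the described procedure, not the authors' script):

```python
import re

def preprocess_sequence(seq: str) -> str:
    """Map the rare amino acids U, Z, O and B to X and space-separate residues,
    as described above for the ProtBert-style tokenizer."""
    seq = re.sub(r"[UZOB]", "X", seq.upper())
    return " ".join(seq)

print(preprocess_sequence("CTRPNNNTRKSIRIQRGPGRAFVTIGKIGNMRQAHC"))
# -> "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
```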
### Training
The damlab/HIV-BERT model was used as the initial weights for an `AutoModelForSequenceClassification` head. The model was trained with a learning rate of 1E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule, and training continued until 3 consecutive epochs did not improve the loss on the held-out dataset. As this is a multi-label classification task (a protein can be found in multiple sites), the loss was calculated as the binary cross-entropy for each category. The BCE was weighted by the inverse of the class ratio to compensate for the class imbalance.
## Evaluation Results
*Need to add*
## BibTeX Entry and Citation Info
[More Information Needed]
|
danlou/aristo-roberta-finetuned-csqa | 9dd69c0ddef1db6cc62c228158d95aceeb6a815e | 2021-07-23T14:33:00.000Z | [
"pytorch",
"roberta",
"multiple-choice",
"dataset:commonsense_qa",
"transformers",
"generated_from_trainer",
"license:mit"
]
| multiple-choice | false | danlou | null | danlou/aristo-roberta-finetuned-csqa | 5 | 1 | transformers | 16,468 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- commonsense_qa
metrics:
- accuracy
model_index:
- name: aristo-roberta-finetuned-csqa
results:
- dataset:
name: commonsense_qa
type: commonsense_qa
args: default
metric:
name: Accuracy
type: accuracy
value: 0.7305487394332886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aristo-roberta-finetuned-csqa
This model is a fine-tuned version of [LIAMF-USP/aristo-roberta](https://huggingface.co/LIAMF-USP/aristo-roberta) on the commonsense_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2187
- Accuracy: 0.7305
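For inference, the checkpoint can be loaded with the multiple-choice head. This is a minimal sketch: the question and answer options are made up, and the exact question/context formatting used during fine-tuning on CommonsenseQA may differ, so treat it as an illustration rather than a faithful reproduction of the evaluation setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("danlou/aristo-roberta-finetuned-csqa")
model = AutoModelForMultipleChoice.from_pretrained("danlou/aristo-roberta-finetuned-csqa")

question = "Where would you most likely find a jellyfish?"
choices = ["ocean", "desert", "classroom", "kitchen", "garage"]

# Pair the question with every candidate answer, then add a batch dimension.
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shapes: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```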
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.131 | 1.0 | 609 | 0.7109 | 0.7232 |
| 0.6957 | 2.0 | 1218 | 0.6912 | 0.7346 |
| 0.459 | 3.0 | 1827 | 0.8364 | 0.7305 |
| 0.3063 | 4.0 | 2436 | 1.0595 | 0.7322 |
| 0.2283 | 5.0 | 3045 | 1.2187 | 0.7305 |
### Framework versions
- Transformers 4.9.0
- Pytorch 1.9.0
- Datasets 1.10.2
- Tokenizers 0.10.3
|
danlou/distilbert-base-uncased-finetuned-cola | e9c81d3c830ec6d51add46db7677f5206bded717 | 2021-12-30T23:39:46.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | danlou | null | danlou/distilbert-base-uncased-finetuned-cola | 5 | null | transformers | 16,469 | Entry not found |
danwilbury/xlm-roberta-base-finetuned-marc-en | f5686202cebdc17f4762bfacf08cf0d51169081a | 2021-10-22T13:04:48.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | danwilbury | null | danwilbury/xlm-roberta-base-finetuned-marc-en | 5 | null | transformers | 16,470 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9302
- Mae: 0.5
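For inference, a minimal sketch is shown below. The review text is made up; given the MAE metric above the model predicts the star rating as a class, and the mapping from class index to stars depends on the checkpoint's label config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "danwilbury/xlm-roberta-base-finetuned-marc-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Made-up English product review.
inputs = tokenizer("Great value for money, arrived quickly.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # one probability per star-rating class
```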
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1253 | 1.0 | 235 | 0.9756 | 0.5488 |
| 0.9465 | 2.0 | 470 | 0.9302 | 0.5 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
dbernsohn/roberta-php | ad623d45562372c2d8ced55dc07fe0376226dc73 | 2021-05-20T15:56:10.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"php",
"dataset:code_search_net",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | dbernsohn | null | dbernsohn/roberta-php | 5 | 1 | transformers | 16,471 | # roberta-php
---
language: php
datasets:
- code_search_net
---
This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) pre-trained version on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for **php** Mask Language Model mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-php")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-php")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in PHP code.
```python
code = """
$people = array(
array('name' => 'Kalle', 'salt' => 856412),
array('name' => 'Pierre', 'salt' => 215863)
);
for($i = 0; $i < count($<mask>); ++$i) {
$people[$i]['salt'] = mt_rand(000000, 999999);
}
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('people', 0.785636842250824),
# ('parts', 0.006270722020417452),
# ('id', 0.0035842324141412973),
# ('data', 0.0025512021966278553),
# ('config', 0.002258970635011792)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/) |
dbmdz/bert-medium-historic-multilingual-cased | 99f676211e6698bcbb4fff1613333b11d35a571e | 2021-12-06T14:35:44.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | dbmdz | null | dbmdz/bert-medium-historic-multilingual-cased | 5 | null | transformers | 16,472 | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
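These checkpoints can be loaded with the `transformers` library. A minimal usage sketch for the medium multilingual model described by this card (the example sentence is taken from the widget examples above; the snippet is illustrative rather than part of the original card):
```python
from transformers import pipeline

# Masked-token prediction with the historic multilingual model.
fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/bert-medium-historic-multilingual-cased",
)

# Historic English example sentence, taken from the widget examples above.
for prediction in fill_mask("and I cannot conceive the reafon why [MASK] hath"):
    print(prediction["token_str"], prediction["score"])
```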
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
# Corpora Stats
## German Europeana Corpus
We provide statistics for different OCR confidence thresholds, which we use to shrink the corpus size
and keep the less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
As with German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
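The card does not state which tool generated the subword vocabularies; as a rough sketch only, a cased 32k WordPiece vocab could be built with the Hugging Face `tokenizers` library along these lines (file names and settings such as `min_frequency` are assumptions, not the actual setup):
```python
from tokenizers import BertWordPieceTokenizer

# Cased WordPiece training over the (upsampled) per-language corpus shards.
# The file names below are placeholders, not the actual corpus paths.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(
    files=["de.txt", "fr.txt", "en.txt", "fi.txt", "sv.txt"],
    vocab_size=32_000,
    min_frequency=2,
)
tokenizer.save_model(".", "hmbert-32k")
```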
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
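Here, the subword fertility rate is the average number of subword tokens produced per whitespace-separated word, and the unknown portion is the share of tokens that map to `[UNK]`. A sketch of how such numbers can be computed (an assumption about the methodology, not the exact evaluation script used):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-medium-historic-multilingual-cased")

def fertility_and_unk(sentences):
    """Return (subwords per word, share of [UNK] among subword tokens)."""
    n_words = n_subwords = n_unk = 0
    for sentence in sentences:
        for word in sentence.split():
            pieces = tokenizer.tokenize(word)
            n_words += 1
            n_subwords += len(pieces)
            n_unk += sum(piece == tokenizer.unk_token for piece in pieces)
    return n_subwords / n_words, n_unk / n_subwords

# Usage: fertility_and_unk(sentences_from_a_ner_corpus)  # hypothetical input
```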
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (with different numbers of layers and hidden sizes) and report the number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
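For reference, the medium variant (8 layers, hidden size 512) could be expressed as a `BertConfig` roughly as follows; the attention-head count and intermediate size are assumptions following the compact-BERT recipe of the cited paper, not values stated in this card:
```python
from transformers import BertConfig

# hmBERT Medium: 8 layers, hidden size 512, 32k multilingual vocab.
# num_attention_heads and intermediate_size are assumed (hidden/64 heads, 4x FFN).
config = BertConfig(
    vocab_size=32_000,
    num_hidden_layers=8,
    hidden_size=512,
    num_attention_heads=8,
    intermediate_size=2048,
)
```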
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from the British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
ddobokki/vision-encoder-decoder-vit-gpt2-coco-ko | 41dc5c541480e2797e1646bcf568d654cbb107da | 2021-12-22T06:51:44.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
]
| null | false | ddobokki | null | ddobokki/vision-encoder-decoder-vit-gpt2-coco-ko | 5 | 3 | transformers | 16,473 | ## EXAMPLE
```python
import requests
import torch
from PIL import Image
from transformers import (
VisionEncoderDecoderModel,
ViTFeatureExtractor,
PreTrainedTokenizerFast,
)
# device setting
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# load feature extractor and tokenizer
encoder_model_name_or_path = "ddobokki/vision-encoder-decoder-vit-gpt2-coco-ko"
feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_model_name_or_path)
tokenizer = PreTrainedTokenizerFast.from_pretrained(encoder_model_name_or_path)
# load model
model = VisionEncoderDecoderModel.from_pretrained(encoder_model_name_or_path)
model.to(device)
# inference
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
with Image.open(requests.get(url, stream=True).raw) as img:
pixel_values = feature_extractor(images=img, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values.to(device),num_beams=5)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
>> ['고양이 두마리가 담요 위에 누워 있다.']  # "Two cats are lying on a blanket."
```
|
deepdml/output | de65a85446b3e686b8c3fd99e36c60e90d58b466 | 2022-01-21T11:50:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | deepdml | null | deepdml/output | 5 | null | transformers | 16,474 | ---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` equivalent is sketched after this list):
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
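As referenced above, a sketch of how these settings could map onto `transformers.TrainingArguments`; the output directory and anything not listed in the card are assumptions:
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; output_dir is assumed.
training_args = TrainingArguments(
    output_dir="output",
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10,
    fp16=True,  # "Native AMP" mixed precision
)
```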
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
deeq/dbert5 | 66e16cdb292c24d1809685fbdb511b58c79d66f8 | 2021-06-08T05:14:14.000Z | [
"pytorch",
"transformers"
]
| null | false | deeq | null | deeq/dbert5 | 5 | null | transformers | 16,475 | deeqBERT5
---
- model: bert-base
- vocab: deeqnlp 1.5, 50k
- version: latest/3.5
|
diegozs97/finetuned-chemprot-seed-0-1000k | e10c376c10b4794775ba099a3989c149f003fd15 | 2021-12-07T05:14:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-1000k | 5 | null | transformers | 16,476 | Entry not found |
diegozs97/finetuned-chemprot-seed-0-1500k | 5e5586d4c7a35a68d75039904e5e62a9d2f5571b | 2021-12-07T05:15:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-0-1500k | 5 | null | transformers | 16,477 | Entry not found |
diegozs97/finetuned-chemprot-seed-1-0k | 18ce42a4c06336912df2fee9ef73ba041552cc74 | 2021-12-07T05:17:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-1-0k | 5 | null | transformers | 16,478 | Entry not found |
diegozs97/finetuned-chemprot-seed-1-1000k | 5ff4d903b2b5bc510ccbd0429d2d9f10f94d2885 | 2021-12-07T05:24:08.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-1-1000k | 5 | null | transformers | 16,479 | Entry not found |
diegozs97/finetuned-chemprot-seed-1-100k | 36a8a7436dbb52f50282f7ac0a679b0820f6f422 | 2021-12-07T05:20:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-1-100k | 5 | null | transformers | 16,480 | Entry not found |
diegozs97/finetuned-chemprot-seed-3-0k | 2b3cc2ceae59ded05cdf564505443b25aea9e5e1 | 2021-12-09T18:00:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-3-0k | 5 | null | transformers | 16,481 | Entry not found |
diegozs97/finetuned-chemprot-seed-3-100k | 1792987b66b08242a45b0b7a2e40d9eff3f47e8e | 2021-12-09T18:03:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-3-100k | 5 | null | transformers | 16,482 | Entry not found |
diegozs97/finetuned-chemprot-seed-3-2000k | 9118474a4867f9b449304d39f9d4a2bb0b56dda0 | 2021-12-09T18:12:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-3-2000k | 5 | null | transformers | 16,483 | Entry not found |
diegozs97/finetuned-chemprot-seed-4-400k | ffa8eb0c80d8c926ebd6ae70f6967ae4ee3c2b52 | 2021-12-09T18:17:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-4-400k | 5 | null | transformers | 16,484 | Entry not found |
diegozs97/finetuned-sciie-seed-0-1000k | c03fbc484d9332e2f4ec8fc2dc167695a3cbcce6 | 2021-12-10T01:46:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-0-1000k | 5 | null | transformers | 16,485 | Entry not found |
diegozs97/finetuned-sciie-seed-0-100k | 3ac82f144a0432b8fa2b21a3e9940b835dc57e90 | 2021-12-10T01:42:29.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-0-100k | 5 | null | transformers | 16,486 | Entry not found |
diegozs97/finetuned-sciie-seed-0-2000k | 1b486bfd6880b893d1573fa6d82cd3bbe576576e | 2021-12-10T01:48:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-0-2000k | 5 | null | transformers | 16,487 | Entry not found |
diegozs97/finetuned-sciie-seed-0-200k | 36ef430b0a1ae46ea7ff8562b3de2b74a18774f0 | 2021-12-10T01:43:14.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-0-200k | 5 | null | transformers | 16,488 | Entry not found |
diegozs97/finetuned-sciie-seed-0-20k | 6b640ea75c45598a4c9db12bec4a03801c883c74 | 2021-12-10T01:40:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-0-20k | 5 | null | transformers | 16,489 | Entry not found |
diegozs97/finetuned-sciie-seed-0-400k | 95652147904db597ceca173916c6d11bd92d2f56 | 2021-12-10T01:44:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-0-400k | 5 | null | transformers | 16,490 | Entry not found |
diegozs97/finetuned-sciie-seed-1-1000k | ad7071d7aba5c9d71c366715cf76444e049f1121 | 2021-12-07T15:32:14.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-1-1000k | 5 | null | transformers | 16,491 | Entry not found |
diegozs97/finetuned-sciie-seed-1-1800k | c3972ff71276e94c552a817820a55dc6eb91bad0 | 2021-12-07T15:34:12.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-1-1800k | 5 | null | transformers | 16,492 | Entry not found |
diegozs97/finetuned-sciie-seed-1-400k | e120e157f6d599d87b261de3e43a6a657e5b54c6 | 2021-12-07T15:30:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-1-400k | 5 | null | transformers | 16,493 | Entry not found |
diegozs97/finetuned-sciie-seed-1-60k | c6f412a2b8138c75ffa145c2150a7c610d615802 | 2021-12-07T15:27:59.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-1-60k | 5 | null | transformers | 16,494 | Entry not found |
diegozs97/finetuned-sciie-seed-1-700k | 743c07b8d07771d8a8d2b8b121d4710d08c37091 | 2021-12-07T15:31:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-1-700k | 5 | null | transformers | 16,495 | Entry not found |
diegozs97/finetuned-sciie-seed-2-1000k | 60ed0ff4621e968172d9170ca6941c00f8ad437b | 2021-12-07T15:42:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-2-1000k | 5 | null | transformers | 16,496 | Entry not found |
diegozs97/finetuned-sciie-seed-2-2000k | 1a97c4501444b1ab37cf1d3327f243a2e4416f12 | 2021-12-07T15:44:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-2-2000k | 5 | null | transformers | 16,497 | Entry not found |
diegozs97/finetuned-sciie-seed-2-60k | c13e066f0bebd650d2ca0c0f47d13121e68a7984 | 2021-12-07T15:37:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-2-60k | 5 | null | transformers | 16,498 | Entry not found |
diegozs97/finetuned-sciie-seed-3-0k | db09bec7551a50e23d83d0d760c9607235168443 | 2021-12-08T04:30:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-sciie-seed-3-0k | 5 | null | transformers | 16,499 | Entry not found |