modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Rohan-Kurdekar/Arabic_Bert_Model | d274ea8aef70da81277d31187703518d23b7805c | 2021-05-20T12:21:38.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Rohan-Kurdekar | null | Rohan-Kurdekar/Arabic_Bert_Model | 11 | null | transformers | 11,000 | Entry not found |
SEBIS/code_trans_t5_large_api_generation_transfer_learning_finetune | 4792abf3e80fb276e50806249fbb97c6c3512dc4 | 2021-06-23T05:50:50.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_api_generation_transfer_learning_finetune | 11 | null | transformers | 11,001 | ---
tags:
- summarization
widget:
- text: "parse the uses licence node of this package , if any , and returns the license definition if theres"
---
# CodeTrans model for API recommendation generation
Pretrained model for API recommendation generation using the T5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was pre-trained with transfer learning on 7 unsupervised datasets in the software development domain, then fine-tuned on the API recommendation generation task for Java APIs.
## Intended uses & limitations
The model can be used to generate API usage recommendations for Java programming tasks.
### How to use
Here is how to use this model to generate API recommendations using the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_api_generation_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_api_generation_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/api%20generation/large_model.ipynb).
## Training data
The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used for pre-training is AdaFactor with an inverse square root learning rate schedule.
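As an illustration, this optimizer setup can be reproduced with the `Adafactor` implementation shipped in `transformers`; this is a sketch only, and the stand-in model and settings below are not the original training configuration:
```python
import torch
from transformers.optimization import Adafactor, AdafactorSchedule

model = torch.nn.Linear(8, 8)  # stand-in for the actual seq2seq model

# relative_step=True with warmup_init=True enables Adafactor's built-in
# inverse-square-root learning-rate schedule (lr must then be None).
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)
```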
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 256) and only the dataset containing API recommendation generation data.
## Evaluation results
For the API recommendation generation task, the different models achieve the following results on the Java test set (in BLEU score):
Test results:
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_commit_generation | e12a8bd060251516b9c12b9ff4c942f83e1a9e2f | 2021-06-23T10:14:01.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_commit_generation | 11 | null | transformers | 11,002 | ---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---
# CodeTrans model for git commit message generation
Pretrained model for git commit message generation using the T5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits and works best with tokenized git commits.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It was trained single-task on the Git Commit Message Generation dataset.
## Intended uses & limitations
The model can be used to generate commit messages for git changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes; however, performance should be better if the changes are tokenized.
### How to use
Here is how to use this model to generate a git commit message using the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_commit_generation"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_commit_generation", skip_special_tokens=True),
device=0
)
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/commit%20generation/small_model.ipynb).
## Training data
The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).
## Evaluation results
For the git commit message generation task, the different models achieve the following results on the test set (in BLEU score):
Test results:
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 39.61 |
| CodeTrans-ST-Base | 38.67 |
| CodeTrans-TF-Small | 44.22 |
| CodeTrans-TF-Base | 44.17 |
| CodeTrans-TF-Large | **44.41** |
| CodeTrans-MT-Small | 36.17 |
| CodeTrans-MT-Base | 39.25 |
| CodeTrans-MT-Large | 41.18 |
| CodeTrans-MT-TF-Small | 43.96 |
| CodeTrans-MT-TF-Base | 44.19 |
| CodeTrans-MT-TF-Large | 44.34 |
| State of the art | 32.81 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_multitask_en_it | 5c41bebe0405e0d3c866d8a6c4978715f5cb427e | 2021-06-23T11:00:22.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Italian",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Italian model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_en_it | 11 | null | transformers | 11,003 |
---
language: English Italian
tags:
- translation English Italian model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "WRITTEN QUESTION E-1184/07"
---
# legal_t5_small_multitask_en_it model
Model for translating legal text from English to Italian. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained jointly on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, together with an unsupervised task in which the model performs masked-language-model prediction.
## Model description
No separate pretraining is involved for the legal_t5_small_multitask_en_it model; instead, the unsupervised task is added to all the translation tasks to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Italian.
### How to use
Here is how to use this model to translate legal text from English to Italian in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_it"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_it", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "WRITTEN QUESTION E-1184/07"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_en_it model (covering both the supervised task, which involved only the corresponding language pair, and the unsupervised task, for which data from all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding) used with this model.
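As an illustration, a vocabulary of this kind could be built with the SentencePiece library; the sketch below uses an assumed input path and vocabulary size, since the card does not publish the exact settings:
```python
import sentencepiece as spm

# Train a unigram SentencePiece model on the parallel-corpus text.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # hypothetical path to the 88M-line corpus
    model_prefix="legal_t5_small",          # writes legal_t5_small.model / .vocab
    vocab_size=32000,                       # assumed; not stated in the card
    model_type="unigram",
)
```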
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results:
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_it | 47.070|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SergeiGKS/camembert-base-wikipedia-4gb-finetuned-job-ner | ae4a2506da4e4d146e97d9b1c480932b24fba41d | 2021-12-14T13:24:57.000Z | [
"pytorch",
"tensorboard",
"camembert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | SergeiGKS | null | SergeiGKS/camembert-base-wikipedia-4gb-finetuned-job-ner | 11 | null | transformers | 11,004 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: camembert-base-wikipedia-4gb-finetuned-job-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-wikipedia-4gb-finetuned-job-ner
This model is a fine-tuned version of [camembert/camembert-base-wikipedia-4gb](https://huggingface.co/camembert/camembert-base-wikipedia-4gb) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0435
- Precision: 0.9134
- Recall: 0.9197
- F1: 0.9165
- Accuracy: 0.9873
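A minimal usage sketch (the entity labels come from the checkpoint's saved config; the French example sentence is illustrative):
```python
from transformers import pipeline

# Token-classification pipeline for the fine-tuned job-NER checkpoint.
ner = pipeline(
    "token-classification",
    model="SergeiGKS/camembert-base-wikipedia-4gb-finetuned-job-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Nous recherchons un développeur Python confirmé à Paris."))
```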
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0463 | 1.0 | 7543 | 0.0435 | 0.9134 | 0.9197 | 0.9165 | 0.9873 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT | 71a293ac3a207ce25c3b1add8cbf32ecbc216082 | 2022-01-16T15:54:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Shushant | null | Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT | 11 | null | transformers | 11,005 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.9518 |
| No log | 2.0 | 44 | 3.2703 |
| No log | 3.0 | 66 | 2.9308 |
| No log | 4.0 | 88 | 2.7806 |
| No log | 5.0 | 110 | 2.6926 |
| No log | 6.0 | 132 | 2.7043 |
| No log | 7.0 | 154 | 2.7113 |
| No log | 8.0 | 176 | 2.7236 |
| No log | 9.0 | 198 | 2.7559 |
| No log | 10.0 | 220 | 2.7515 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Sindhu/muril-large-squad2 | 1d998e51bc18d19cb55d7b6c54535caa7ec98089 | 2021-11-20T09:43:56.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Sindhu | null | Sindhu/muril-large-squad2 | 11 | null | transformers | 11,006 | # MuRIL Large SQuAD2
This model is fine-tuned for the question-answering task on SQuAD2 from the [MuRIL Large checkpoint](https://huggingface.co/google/muril-large-cased).
## Hyperparameters
```
Batch Size: 4
Grad Accumulation Steps = 8
Total epochs = 3
MLM Checkpoint = google/muril-large-cased
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_ratio = 0.1
doc_stride = 128
```
## SQuAD2 Evaluation stats
Generated from [the official SQuAD2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)
```json
{
"exact": 82.0180240882675,
"f1": 85.10110304685352,
"total": 11873,
"HasAns_exact": 81.6970310391363,
"HasAns_f1": 87.87203044454981,
"HasAns_total": 5928,
"NoAns_exact": 82.3380992430614,
"NoAns_f1": 82.3380992430614,
"NoAns_total": 5945
}
```
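## Usage
A minimal inference sketch (the question/context pair below is illustrative); `handle_impossible_answer=True` lets the model return an empty answer for SQuAD2-style unanswerable questions:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/muril-large-squad2")
result = qa(
    question="Where is the Taj Mahal located?",
    context="The Taj Mahal is an ivory-white marble mausoleum in Agra, India.",
    handle_impossible_answer=True,  # allow a SQuAD2-style "no answer"
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```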
## Limitations
MuRIL is specifically trained to work on 18 Indic languages and English. This model is not expected to perform well in any other languages. See the MuRIL checkpoint for further details.
For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man) |
StivenLancheros/spanberta-base-cased-ner-conll02-finetuned-ner | a66639a2cd9307ffc8f046c038d62ec275025cc9 | 2021-11-07T11:32:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/spanberta-base-cased-ner-conll02-finetuned-ner | 11 | null | transformers | 11,007 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: spanberta-base-cased-ner-conll02-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.911773494695951
- name: Recall
type: recall
value: 0.9149861308943699
- name: F1
type: f1
value: 0.9133769878391019
- name: Accuracy
type: accuracy
value: 0.9803183888541573
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanberta-base-cased-ner-conll02-finetuned-ner
This model is a fine-tuned version of [skimai/spanberta-base-cased-ner-conll02](https://huggingface.co/skimai/spanberta-base-cased-ner-conll02) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- Precision: 0.9118
- Recall: 0.9150
- F1: 0.9134
- Accuracy: 0.9803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2641 | 1.0 | 878 | 0.0923 | 0.8818 | 0.8802 | 0.8810 | 0.9739 |
| 0.0648 | 2.0 | 1756 | 0.0817 | 0.9033 | 0.9044 | 0.9038 | 0.9785 |
| 0.0314 | 3.0 | 2634 | 0.0824 | 0.9118 | 0.9150 | 0.9134 | 0.9803 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
TehranNLP/bert-base-cased-mnli | 31df249aed67164640da2676afbdd73dc39f5d37 | 2021-06-03T09:18:49.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP | null | TehranNLP/bert-base-cased-mnli | 11 | null | transformers | 11,008 | Entry not found |
Vivek/GPT2_GSM8k | 204965a60b16ed3518b77a64216bd8a58713f613 | 2021-11-29T15:27:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Vivek | null | Vivek/GPT2_GSM8k | 11 | null | transformers | 11,009 | Entry not found |
Wikidepia/albert-bahasa-uncased-squad | 73eceff56a054d24b0e68cfe603e0cc95934a0b7 | 2021-01-11T01:39:05.000Z | [
"pytorch",
"albert",
"question-answering",
"id",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Wikidepia | null | Wikidepia/albert-bahasa-uncased-squad | 11 | null | transformers | 11,010 | ---
language: id
inference: false
---
# SQuAD IndoBERT-Lite Base Model
Fine-tuned IndoBERT-Lite model from IndoBenchmark, trained on translated SQuAD datasets.
## How to use
### Using pipeline
```python
from transformers import BertTokenizerFast, pipeline
tokenizer = BertTokenizerFast.from_pretrained(
'Wikidepia/albert-bahasa-uncased-squad'
)
nlp = pipeline('question-answering', model="Wikidepia/albert-bahasa-uncased-squad", tokenizer=tokenizer)
QA_input = {
'question': 'Kapan orang Normandia berada di Normandia?',
    'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) adalah orang-orang yang pada abad ke-10 dan ke-11 memberikan nama mereka ke Normandia, sebuah wilayah di Prancis. Mereka adalah keturunan dari Norse (\"Norman\" berasal dari \"Norseman\") perampok dan perompak dari Denmark, Islandia dan Norwegia yang, di bawah pemimpin mereka Rollo, setuju untuk bersumpah setia kepada Raja Charles III dari Francia Barat. Melalui generasi asimilasi dan pencampuran dengan penduduk asli Franka dan Romawi-Gaul, keturunan mereka secara bertahap akan bergabung dengan budaya Francia Barat yang berbasis di Karoling. Identitas budaya dan etnis orang Normandia yang berbeda awalnya muncul pada paruh pertama abad ke-10, dan terus berkembang selama abad-abad berikutnya.'
}
res = nlp(QA_input)
print(res)
```
|
aadelucia/GPT2_medium_narrative_finetuned_medium | 13f6bed7e075f31e10ec2c3105829bf486ebe588 | 2021-12-10T17:44:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | aadelucia | null | aadelucia/GPT2_medium_narrative_finetuned_medium | 11 | null | transformers | 11,011 | Please visit the repo for training details. https://github.com/AADeLucia/gpt2-narrative-decoding |
aditeyabaral/additionalpretrained-bert-hinglish-small | adf89296ea3c84bd2a3012d85bda8e5f20063a07 | 2021-10-20T18:26:17.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | aditeyabaral | null | aditeyabaral/additionalpretrained-bert-hinglish-small | 11 | null | transformers | 11,012 | Entry not found |
adityavithaldas/distilbert-base-uncased-finetuned-ner | 17e3fefebe62c7f30f8b8c4206985d3cc4814e8f | 2021-09-22T19:33:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | adityavithaldas | null | adityavithaldas/distilbert-base-uncased-finetuned-ner | 11 | 1 | transformers | 11,013 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
adresgezgini/Turkish-GPT-2-Finetuned_digital_ads | 3974533f30de8a228353e783f3b8959ca37a5a17 | 2021-05-21T11:52:06.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | adresgezgini | null | adresgezgini/Turkish-GPT-2-Finetuned_digital_ads | 11 | null | transformers | 11,014 | Entry not found |
airKlizz/distilbart-3-3-multi-combine-wiki-news | c364b739fc4e901eb85ee087c804f07fc4c073cb | 2020-08-21T12:24:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | airKlizz | null | airKlizz/distilbart-3-3-multi-combine-wiki-news | 11 | null | transformers | 11,015 | Entry not found |
airKlizz/mt5-base-germeval21-toxic-with-task-specific-pretraining-and-data-augmentation | 7a68d94637e5e7878a120f44bdfb6ff3b66698fe | 2021-07-12T16:04:43.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | airKlizz | null | airKlizz/mt5-base-germeval21-toxic-with-task-specific-pretraining-and-data-augmentation | 11 | null | transformers | 11,016 | Entry not found |
airKlizz/mt5-base-wikinewssum-portuguese | e3fc8ed890f93f8cf6ab2ed1dea305308ef101c9 | 2021-12-26T08:03:49.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | airKlizz | null | airKlizz/mt5-base-wikinewssum-portuguese | 11 | null | transformers | 11,017 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-portuguese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-portuguese
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0428
- Rouge1: 9.4966
- Rouge2: 4.2224
- Rougel: 7.9845
- Rougelsum: 8.8641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 334 | 2.2258 | 7.3686 | 2.9066 | 6.3167 | 6.8758 |
| No log | 2.0 | 668 | 2.1389 | 9.0551 | 3.8395 | 7.6578 | 8.4641 |
| No log | 3.0 | 1002 | 2.1030 | 9.2792 | 3.9352 | 7.8259 | 8.663 |
| No log | 4.0 | 1336 | 2.0841 | 9.337 | 4.0647 | 7.8662 | 8.693 |
| 3.2831 | 5.0 | 1670 | 2.0487 | 9.4244 | 4.0821 | 7.8633 | 8.7111 |
| 3.2831 | 6.0 | 2004 | 2.0580 | 9.4598 | 4.1598 | 7.9511 | 8.8299 |
| 3.2831 | 7.0 | 2338 | 2.0426 | 9.501 | 4.1885 | 7.9803 | 8.8612 |
| 3.2831 | 8.0 | 2672 | 2.0428 | 9.4966 | 4.2224 | 7.9845 | 8.8641 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
airKlizz/xlm-roberta-base-germeval21-toxic-with-task-specific-pretraining-and-data-augmentation | be93cd64989d3b0c9d4dd6516dd477f4156aef23 | 2021-07-12T15:01:58.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | airKlizz | null | airKlizz/xlm-roberta-base-germeval21-toxic-with-task-specific-pretraining-and-data-augmentation | 11 | null | transformers | 11,018 | Entry not found |
allenai/dsp_roberta_base_tapt_rct_500 | def5439c85ce5f29981f5868a21c452b31956c92 | 2021-05-20T13:32:43.000Z | [
"pytorch",
"jax",
"roberta",
"transformers"
]
| null | false | allenai | null | allenai/dsp_roberta_base_tapt_rct_500 | 11 | null | transformers | 11,019 | Entry not found |
allenai/longformer-large-4096-extra.pos.embd.only | 3d7ed69023d5e6e9df4454011950d1a199666ef0 | 2021-03-10T02:32:43.000Z | [
"pytorch",
"tf",
"longformer",
"transformers"
]
| null | false | allenai | null | allenai/longformer-large-4096-extra.pos.embd.only | 11 | null | transformers | 11,020 | Entry not found |
anirudh21/bert-base-uncased-finetuned-qnli | ebb68e464834a6340176695880c746b19d057ff7 | 2022-01-27T08:21:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | anirudh21 | null | anirudh21/bert-base-uncased-finetuned-qnli | 11 | null | transformers | 11,021 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.791689547867472
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-qnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6268
- Accuracy: 0.7917
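A minimal inference sketch, assuming the checkpoint keeps GLUE QNLI's two-way labels (the question/sentence pair below is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "anirudh21/bert-base-uncased-finetuned-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QNLI classifies (question, sentence) pairs as entailment / not_entailment.
inputs = tokenizer(
    "What is the capital of France?",
    "Paris is the capital and most populous city of France.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs, model.config.id2label)  # label meaning comes from the saved config
```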
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 0.5339 | 0.7620 |
| No log | 2.0 | 126 | 0.4728 | 0.7866 |
| No log | 3.0 | 189 | 0.5386 | 0.7847 |
| No log | 4.0 | 252 | 0.6096 | 0.7904 |
| No log | 5.0 | 315 | 0.6268 | 0.7917 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
annafavaro/bert-base-uncased-finetuned-addresso | 86e360532bd486d2f54a5ea7e577751e6580ad3a | 2021-12-03T23:48:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | annafavaro | null | annafavaro/bert-base-uncased-finetuned-addresso | 11 | null | transformers | 11,022 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-addresso
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-addresso
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
anthonymirand/haha_2019_adaptation_task | b2aeba7ee0ad1724511327d98f673c3b485a56ee | 2021-05-30T21:16:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | anthonymirand | null | anthonymirand/haha_2019_adaptation_task | 11 | null | transformers | 11,023 | Entry not found |
anton-l/sew-mid-100k-ft-keyword-spotting | 68e0aa1aa0e2b91f33be1472a8d9a641b927e49d | 2022-01-26T14:43:39.000Z | [
"pytorch",
"tensorboard",
"sew",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | anton-l | null | anton-l/sew-mid-100k-ft-keyword-spotting | 11 | null | transformers | 11,024 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: sew-mid-100k-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-mid-100k-ft-keyword-spotting
This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0975
- Accuracy: 0.9757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5999 | 1.0 | 399 | 0.2262 | 0.9635 |
| 0.4271 | 2.0 | 798 | 0.1230 | 0.9697 |
| 0.3778 | 3.0 | 1197 | 0.1052 | 0.9731 |
| 0.3227 | 4.0 | 1596 | 0.0975 | 0.9757 |
| 0.3081 | 5.0 | 1995 | 0.0962 | 0.9753 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
anton-l/wav2vec2-base-keyword-spotting | b1943623613298087ca1cc4a0fe68dd4ee5277ec | 2021-09-29T16:28:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | anton-l | null | anton-l/wav2vec2-base-keyword-spotting | 11 | null | transformers | 11,025 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-keyword-spotting
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0746
- Accuracy: 0.9843
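A minimal usage sketch (the audio path is a placeholder; the model expects 16 kHz mono clips of Speech Commands keywords):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="anton-l/wav2vec2-base-keyword-spotting",
)
preds = classifier("speech_command.wav", top_k=3)  # hypothetical file path
print(preds)  # [{'label': ..., 'score': ...}, ...]
```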
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8279 | 1.0 | 399 | 0.6792 | 0.8558 |
| 0.2961 | 2.0 | 798 | 0.1383 | 0.9798 |
| 0.2069 | 3.0 | 1197 | 0.0972 | 0.9809 |
| 0.1757 | 4.0 | 1596 | 0.0843 | 0.9825 |
| 0.1607 | 5.0 | 1995 | 0.0746 | 0.9843 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
aravind-812/roberta-train-json | a2d3b345e2a51cf7d089425ffde55ff24c9c3981 | 2021-05-20T14:12:53.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | aravind-812 | null | aravind-812/roberta-train-json | 11 | null | transformers | 11,026 | ---
datasets:
- squad
widget:
- text: "Which name is also used to describe the Amazon rainforest in English?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
- text: "How many square kilometers of rainforest is covered in the basin?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species." |
arnolfokam/bert-base-uncased-kin | 156043cdfa14ecf2bbdd99f68e2b43e04b805476 | 2021-11-24T11:07:08.000Z | [
"pytorch",
"bert",
"token-classification",
"kin",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | arnolfokam | null | arnolfokam/bert-base-uncased-kin | 11 | null | transformers | 11,027 | ---
language:
- kin
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n’u Rwanda, bushingiye nanone ku bufatanye hagati y’imigabane ya Afurika n’u Burayi."
---
# Model description
**bert-base-uncased-kin** is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
- Dates & times (DATE)
- Locations (LOC)
- Organizations (ORG)
- Persons (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com).
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness of these models may make them dangerous if deployed in a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-kin**| 75.00 |80.09|77.47
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
``` |
ayameRushia/indobert-base-uncased-finetuned-indonlu-smsa | 0b25a99d2d40c3fa900e0d5c487752c045ae1bf7 | 2021-12-22T16:23:37.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:indonlu",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | ayameRushia | null | ayameRushia/indobert-base-uncased-finetuned-indonlu-smsa | 11 | null | transformers | 11,028 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: indobert-base-uncased-finetuned-indonlu-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9301587301587302
- name: F1
type: f1
value: 0.9066105299178986
- name: Precision
type: precision
value: 0.8992078788375845
- name: Recall
type: recall
value: 0.9147307323234121
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-base-uncased-finetuned-indonlu-smsa
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2277
- Accuracy: 0.9302
- F1: 0.9066
- Precision: 0.8992
- Recall: 0.9147
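A minimal usage sketch (the Indonesian example sentence is illustrative; label names come from the checkpoint's saved config):
```python
from transformers import pipeline

# SmSA is Indonesian sentiment analysis (positive / neutral / negative).
classifier = pipeline(
    "text-classification",
    model="ayameRushia/indobert-base-uncased-finetuned-indonlu-smsa",
)
print(classifier("Pelayanan restoran ini sangat memuaskan!"))
```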
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 344 | 0.3831 | 0.8476 | 0.7715 | 0.7817 | 0.7627 |
| 0.4167 | 2.0 | 688 | 0.2809 | 0.8905 | 0.8406 | 0.8699 | 0.8185 |
| 0.2624 | 3.0 | 1032 | 0.2254 | 0.9230 | 0.8842 | 0.9004 | 0.8714 |
| 0.2624 | 4.0 | 1376 | 0.2378 | 0.9238 | 0.8797 | 0.9180 | 0.8594 |
| 0.1865 | 5.0 | 1720 | 0.2277 | 0.9302 | 0.9066 | 0.8992 | 0.9147 |
| 0.1217 | 6.0 | 2064 | 0.2444 | 0.9262 | 0.8981 | 0.9013 | 0.8957 |
| 0.1217 | 7.0 | 2408 | 0.2985 | 0.9286 | 0.8999 | 0.9035 | 0.8971 |
| 0.0847 | 8.0 | 2752 | 0.3397 | 0.9278 | 0.8969 | 0.9090 | 0.8871 |
| 0.0551 | 9.0 | 3096 | 0.3542 | 0.9270 | 0.8961 | 0.9010 | 0.8924 |
| 0.0551 | 10.0 | 3440 | 0.3862 | 0.9222 | 0.8895 | 0.8970 | 0.8846 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
baffo32/gpt2-ptmap | e1509c6a886f78039ee944060e2a04ef4b86e7f9 | 2021-12-24T13:45:44.000Z | [
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"gpt2",
"text-generation",
"en",
"transformers",
"exbert",
"license:mit"
]
| text-generation | false | baffo32 | null | baffo32/gpt2-ptmap | 11 | null | transformers | 11,029 | ---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
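For instance, the byte-level BPE tokenizer shipped with the checkpoint can be inspected directly (a quick sketch):
```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
print(tokenizer.vocab_size)  # 50257

# Byte-level BPE maps any unicode text to tokens, so no <unk> token is needed.
ids = tokenizer("Hello, world!")["input_ids"]
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))
```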
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
baykenney/bert-base-gpt2detector-topp96 | b1f7c7588100e58f2a68ce02d1da24e2724c4e2b | 2021-05-19T12:12:07.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | baykenney | null | baykenney/bert-base-gpt2detector-topp96 | 11 | null | transformers | 11,030 | Entry not found |
bhavikardeshna/xlm-roberta-base-hindi | 414a7949deb43cf5e3c828359b26c44b2dca3467 | 2021-12-21T11:40:15.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
]
| question-answering | false | bhavikardeshna | null | bhavikardeshna/xlm-roberta-base-hindi | 11 | null | transformers | 11,031 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhavikardeshna/xlm-roberta-base-spanish | 25912d55fd381e4b2d199dcdae3bd9422e898f88 | 2021-12-21T11:39:52.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
]
| question-answering | false | bhavikardeshna | null | bhavikardeshna/xlm-roberta-base-spanish | 11 | null | transformers | 11,032 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhavikardeshna/xlm-roberta-base-vietnamese | 98a34fe48206b64acaac2e419ef3273cbc7a3d3e | 2021-12-21T11:39:18.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"arxiv:2112.09866",
"transformers",
"autotrain_compatible"
]
| question-answering | false | bhavikardeshna | null | bhavikardeshna/xlm-roberta-base-vietnamese | 11 | null | transformers | 11,033 | # BibTeX entry and citation info
```
@misc{pandya2021cascading,
title={Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages},
author={Hariom A. Pandya and Bhavik Ardeshna and Dr. Brijesh S. Bhatt},
year={2021},
eprint={2112.09866},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bhuvaneswari/t5-small-text_summarization | 4fc3afa436f85dd36a0c557ca4fd92d0742852f7 | 2021-11-15T04:29:51.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | bhuvaneswari | null | bhuvaneswari/t5-small-text_summarization | 11 | null | transformers | 11,034 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-text_summarization
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.6917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-text_summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4591
- Rouge1: 28.6917
- Rouge2: 7.976
- Rougel: 22.6383
- Rougelsum: 22.6353
- Gen Len: 18.8185
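A minimal usage sketch (the article text is a placeholder; the summarization pipeline prepends the `summarize:` prefix for T5-family checkpoints when that prefix is present in the saved config):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="bhuvaneswari/t5-small-text_summarization",
)
article = "The full text of an English news article goes here."  # placeholder
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```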
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7006 | 1.0 | 8162 | 2.4591 | 28.6917 | 7.976 | 22.6383 | 22.6353 | 18.8185 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
boychaboy/MNLI_roberta-base | 75af53aa738071d3d96276741961022fe43b9078 | 2021-05-20T14:31:05.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | boychaboy | null | boychaboy/MNLI_roberta-base | 11 | null | transformers | 11,035 | Entry not found |
brunodorneles/biobertpt-all-finetuned-ner | 455b762a4b98b7250b04fd1ad9252f62b0474bcf | 2021-11-03T14:40:02.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | brunodorneles | null | brunodorneles/biobertpt-all-finetuned-ner | 11 | null | transformers | 11,036 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobertpt-all-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobertpt-all-finetuned-ner
This model is a fine-tuned version of [pucpr/biobertpt-all](https://huggingface.co/pucpr/biobertpt-all) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3721
- Precision: 0.0179
- Recall: 0.0149
- F1: 0.0163
- Accuracy: 0.6790
## Model description
More information needed
## Intended uses & limitations
More information needed
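In the meantime, a minimal inference sketch (the Portuguese clinical sentence is illustrative, and given the evaluation metrics below, predictions should be treated with caution):
```python
from transformers import pipeline
# Token-classification pipeline over the fine-tuned checkpoint;
# "simple" aggregation groups word pieces into whole-entity spans.
ner = pipeline(
    "token-classification",
    model="brunodorneles/biobertpt-all-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Paciente com diabetes mellitus tipo 2, em uso de metformina."))
```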
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 1 | 2.7864 | 0.0091 | 0.0448 | 0.0152 | 0.3339 |
| No log | 2.0 | 2 | 2.5096 | 0.0097 | 0.0149 | 0.0118 | 0.6292 |
| No log | 3.0 | 3 | 2.3721 | 0.0179 | 0.0149 | 0.0163 | 0.6790 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
cahya/bert2bert-indonesian-summarization | 2be9212d2b3fb688c461406853bebf54715d635f | 2021-01-29T11:39:42.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"id",
"dataset:id_liputan6",
"transformers",
"pipeline:summarization",
"summarization",
"bert2bert",
"license:apache-2.0",
"autotrain_compatible"
]
| summarization | false | cahya | null | cahya/bert2bert-indonesian-summarization | 11 | 1 | transformers | 11,037 | ---
language: id
tags:
- pipeline:summarization
- summarization
- bert2bert
datasets:
- id_liputan6
license: apache-2.0
---
# Indonesian BERT2BERT Summarization Model
Finetuned BERT-base summarization model for Indonesian.
## Finetuning Corpus
`bert2bert-indonesian-summarization` model is based on `cahya/bert-base-indonesian-1.5G` by [cahya](https://huggingface.co/cahya), finetuned using [id_liputan6](https://huggingface.co/datasets/id_liputan6) dataset.
## Load Finetuned Model
```python
from transformers import BertTokenizer, EncoderDecoderModel
tokenizer = BertTokenizer.from_pretrained("cahya/bert2bert-indonesian-summarization")
tokenizer.bos_token = tokenizer.cls_token
tokenizer.eos_token = tokenizer.sep_token
model = EncoderDecoderModel.from_pretrained("cahya/bert2bert-indonesian-summarization")
```
## Code Sample
```python
from transformers import BertTokenizer, EncoderDecoderModel
tokenizer = BertTokenizer.from_pretrained("cahya/bert2bert-indonesian-summarization")
tokenizer.bos_token = tokenizer.cls_token
tokenizer.eos_token = tokenizer.sep_token
model = EncoderDecoderModel.from_pretrained("cahya/bert2bert-indonesian-summarization")
# fill in the Indonesian article text to summarize below
ARTICLE_TO_SUMMARIZE = ""
# generate summary
input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt')
summary_ids = model.generate(input_ids,
min_length=20,
max_length=80,
num_beams=10,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
no_repeat_ngram_size=2,
use_cache=True,
do_sample = True,
temperature = 0.8,
top_k = 50,
top_p = 0.95)
summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```
Output:
```
```
|
cahya/gpt2-medium-indonesian-story | 21366a4240e1903ff85b8a6cd936b4c6288bfc7e | 2021-09-03T17:46:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | cahya | null | cahya/gpt2-medium-indonesian-story | 11 | 1 | transformers | 11,038 | Entry not found |
cahya/wav2vec2-large-xlsr-basque | 7cb9afa381bfe89d8e221126cbd59dfd17bcbf79 | 2021-07-05T23:41:21.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"eu",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-basque | 11 | null | transformers | 11,039 | ---
language: eu
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Basque by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eu
type: common_voice
args: eu
metrics:
- name: Test WER
type: wer
value: 12.44
---
# Wav2Vec2-Large-XLSR-Basque
This is the model for Wav2Vec2-Large-XLSR-Basque, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Basque Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-basque")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-basque")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Basque test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-basque")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-basque")
model.to("cuda")
chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 12.44 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
cahya/wav2vec2-large-xlsr-turkish | fca7ef60a379c49399cee2a18fa7f67f1e47f2ed | 2021-07-06T00:06:48.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-turkish | 11 | null | transformers | 11,040 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 21.13
---
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Large-XLSR-Turkish, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 21.13 %
## Training
The Common Voice `train`, `validation`, `other` and `invalidated` datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
caixin1998/chinese-poetry-gpt2-pretrain | 3893b84e24410e9ca26db68ca772a6b2de2398b4 | 2021-05-21T14:42:36.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | caixin1998 | null | caixin1998/chinese-poetry-gpt2-pretrain | 11 | null | transformers | 11,041 | Entry not found |
cartyparty/DialoGPT-small-nerdherd | 2702a93c623602354261223661a84abebdf081c1 | 2021-09-01T00:42:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | cartyparty | null | cartyparty/DialoGPT-small-nerdherd | 11 | null | transformers | 11,042 | ---
tags:
- conversational
---
# inspired by greentext |
ccdv/lsg-camembert-base-4096 | 9808534d305a0f46101928c6be6b72410a3e051c | 2022-07-27T04:49:56.000Z | [
"pytorch",
"camembert",
"fill-mask",
"fr",
"transformers",
"long context",
"autotrain_compatible"
]
| fill-mask | false | ccdv | null | ccdv/lsg-camembert-base-4096 | 11 | 1 | transformers | 11,043 | ---
language: fr
tags:
- long context
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model is adapted from [CamemBERT-base](https://huggingface.co/camembert-base) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \
It supports encoder-decoder models, but I didn't test them extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")
```
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-camembert-base-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* sparsity_type="block_stride", use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
## Tasks
Fill mask example:
```python
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-camembert-base-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")
SENTENCES = "Paris est la <mask> de la France."
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES)
> 'Paris est la capitale de la France.'
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-camembert-base-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-camembert-base-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-camembert-base-4096")
for name, param in model.named_parameters():
if "global_embeddings" not in name:
param.requires_grad = False
else:
        param.requires_grad = True
```
**CamemBERT**
```
@inproceedings{Martin_2020,
doi = {10.18653/v1/2020.acl-main.645},
url = {https://doi.org/10.18653%2Fv1%2F2020.acl-main.645},
year = 2020,
publisher = {Association for Computational Linguistics},
author = {Louis Martin and Benjamin Muller and Pedro Javier Ortiz Su{\'{a}}rez and Yoann Dupont and Laurent Romary and {\'{E}}ric de la Clergerie and Djam{\'{e}} Seddah and Beno{\^{\i}}t Sagot},
title = {{CamemBERT}: a Tasty French Language Model},
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}
}
``` |
ceshine/TinyBERT_L-4_H-312_v2-distill-AllNLI | 578833747fd2988f4704d157eb14e202aa0607e6 | 2021-05-19T14:01:36.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | ceshine | null | ceshine/TinyBERT_L-4_H-312_v2-distill-AllNLI | 11 | null | transformers | 11,044 | # TinyBERT_L-4_H-312_v2 English Sentence Encoder
This is distilled from the `bert-base-nli-stsb-mean-tokens` pre-trained model from [Sentence-Transformers](https://sbert.net/).
The embedding vector is obtained by mean/average pooling of the last layer's hidden states.
Update 20210325: Added the attention matrices imitation objective as in the TinyBERT paper, and the distill target has been changed from `distilbert-base-nli-stsb-mean-tokens` to `bert-base-nli-stsb-mean-tokens` (they have almost the same STSb performance).
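A minimal sketch of computing sentence embeddings with this checkpoint (the code below is an illustration of the mean pooling described above, not an official snippet):
```python
import torch
from transformers import AutoModel, AutoTokenizer
model_name = "ceshine/TinyBERT_L-4_H-312_v2-distill-AllNLI"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()
sentences = ["A man is eating food.", "A man is eating a piece of bread."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
# Mean pooling over real tokens only, using the attention mask.
mask = inputs.attention_mask.unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (2, 312) for this 312-dimensional model
```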
## Model Comparison
We compute cosine similarity scores of the embeddings of the sentence pair to get the spearman correlation on the STS benchmark (bigger is better):
| | Dev | Test |
| ------------------------------------ | ----- | ----- |
| bert-base-nli-stsb-mean-tokens | .8704 | .8505 |
| distilbert-base-nli-stsb-mean-tokens | .8667 | .8516 |
| TinyBERT_L-4_H-312_v2-distill-AllNLI | .8587 | .8283 |
| TinyBERT_L-4_H (20210325) | .8551 | .8341 |
|
chitra/finetuned-adversarial-paraphrasing-detector | fd268865a7cb28643d83dc767f6f40a1ee5ad7bc | 2022-01-18T12:55:23.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | chitra | null | chitra/finetuned-adversarial-paraphrasing-detector | 11 | null | transformers | 11,045 | Entry not found |
clulab/roberta-timex-semeval | 3b980ef837347209d299deb9c1ce8ca34957a487 | 2021-05-20T15:34:00.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | clulab | null | clulab/roberta-timex-semeval | 11 | null | transformers | 11,046 | Entry not found |
crang/wav2vec2-large-xlsr-53-tatar | d9030cc292a84ed19d7fa2db7fc47451d071aefa | 2021-07-06T00:58:16.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | crang | null | crang/wav2vec2-large-xlsr-53-tatar | 11 | null | transformers | 11,047 | ---
language: tt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Tatar XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tt
type: common_voice
args: tt
metrics:
- name: Test WER
type: wer
value: 30.93
---
# Wav2Vec2-Large-XLSR-53-Tatar
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tatar using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tatar test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("crang/wav2vec2-large-xlsr-53-tatar")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\u2013\u2014\;\:\"\\\\%]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 30.93 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
damlab/HIV_BERT | 38e5bb9ec8a7574d08b5bd9b402973f92fdbc093 | 2022-02-24T18:59:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:damlab/HIV_FLT",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | damlab | null | damlab/HIV_BERT | 11 | null | transformers | 11,048 | ---
license: mit
datasets:
- damlab/HIV_FLT
metrics:
- accuracy
widget:
- text: 'C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C'
example_title: 'V3'
- text: 'M E P V D P R L E P W K H P G S Q P K T A C T N C Y C K K C C F H C Q V C F I T K A L G I S Y G R K K R R Q R R R A H Q N S Q T H Q A S L S K Q P T S Q P R G D P T G P K E S K K K V E R E T E T D P F D'
example_title: 'Tat'
- text: 'P Q I T L W Q R P L V T I K I G G Q L K E A L L D T G A D D T V L E E M N L P G R W K P K M I G G I G G F I K V R Q Y D Q I L I E I C G H K A I G T V L V G P T P V N I I G R N L L T Q I G C T L N F'
example_title: 'PR'
---
# HIV_BERT model
## Table of Contents
- [Summary](#model-summary)
- [Model Description](#model-description)
- [Intended Uses & Limitations](#intended-uses-&-limitations)
- [How to Use](#how-to-use)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Training](#training)
- [Evaluation Results](#evaluation-results)
- [BibTeX Entry and Citation Info](#bibtex-entry-and-citation-info)
## Summary
The HIV-BERT model was trained as a refinement of the [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd) for HIV centric tasks. It was refined with whole viral genomes from the [Los Alamos HIV Sequence Database](https://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html). This pretraining is important for HIV related tasks as the original BFD database contains few viral proteins making it sub-optimal when used as the basis for transfer learning tasks. This model and other related HIV prediction tasks have been published (link).
## Model Description
Like the original [ProtBert-BFD model](https://huggingface.co/Rostlab/prot_bert_bfd), this model encodes each amino acid as an individual token. This model was trained using Masked Language Modeling: a process in which a random set of tokens are masked with the model trained on their prediction. This model was trained using the damlab/hiv-flt dataset with 256 amino acid chunks and a 15% mask rate.
## Intended Uses & Limitations
As a masked language model this tool can be used to predict expected mutations using a masking approach. This could be used to identify highly mutated sequences, sequencing artifacts, or other contexts. As a BERT model, this tool can also be used as the base for transfer learning. This pretrained model could be used as the base when developing HIV-specific classification tasks.
## How to use
As this is a BERT-style Masked Language learner, it can be used to determine the most likely amino acid at a masked position.
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="damlab/HIV_BERT")
unmasker(f"C T R P N [MASK] N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C")
[
{
"score": 0.9581968188285828,
"token": 17,
"token_str": "N",
"sequence": "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
},
{
"score": 0.022986575961112976,
"token": 12,
"token_str": "K",
"sequence": "C T R P N K N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
},
{
"score": 0.003997281193733215,
"token": 14,
"token_str": "D",
"sequence": "C T R P N D N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
},
{
"score": 0.003636382520198822,
"token": 15,
"token_str": "T",
"sequence": "C T R P N T N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
},
{
"score": 0.002701344434171915,
"token": 10,
"token_str": "S",
"sequence": "C T R P N S N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
}
]
```
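For transfer learning, the checkpoint can also be loaded as the base of a downstream classifier. A minimal sketch (the two-label head is an illustrative assumption and is randomly initialized, so it must be fine-tuned on labeled HIV sequences before use):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("damlab/HIV_BERT")
# A fresh classification head is added on top of the pretrained HIV-BERT encoder.
model = AutoModelForSequenceClassification.from_pretrained("damlab/HIV_BERT", num_labels=2)
seq = "C T R P N N N T R K S I R I Q R G P G R A F V T I G K I G N M R Q A H C"
inputs = tokenizer(seq, return_tensors="pt")
logits = model(**inputs).logits  # untrained head: fine-tune before interpreting
```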
## Training Data
The dataset [damlab/HIV_FLT](https://huggingface.co/datasets/damlab/HIV_FLT) was used to refine the original [rostlab/Prot-bert-bfd](https://huggingface.co/Rostlab/prot_bert_bfd). This dataset contains 1790 full HIV genomes from across the globe. When translated, these genomes contain approximately 3.9 million amino-acid tokens.
## Training Procedure
### Preprocessing
As with the [rostlab/Prot-bert-bfd](https://huggingface.co/Rostlab/prot_bert_bfd) model, the rare amino acids U, Z, O, and B were converted to X and spaces were added between each amino acid. All strings were concatenated and chunked into 256 token chunks for training. A random 20% of chunks were held for validation.
### Training
Training was performed with the HuggingFace training module using the MaskedLM data loader with a 15% masking rate. The learning rate was set at E-5, 50K warm-up steps, and a cosine_with_restarts learning rate schedule and continued until 3 consecutive epochs did not improve the loss on the held-out dataset.
## BibTeX Entry and Citation Info
[More Information Needed]
|
danasone/rubert-tiny-essay | 343329ca91cb7f519a8a3abf6b1719f297f42b61 | 2022-02-08T22:08:38.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | danasone | null | danasone/rubert-tiny-essay | 11 | null | transformers | 11,049 | Entry not found |
danielbubiola/bangla_asr | 55177dc9a92571793d3fa57ad9fd62338c65184c | 2022-01-26T07:42:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | danielbubiola | null | danielbubiola/bangla_asr | 11 | null | transformers | 11,050 | ---
tags:
- generated_from_trainer
model-index:
- name: bangla_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla_asr
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 157.8652
- Wer: 0.4507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2601.5363 | 7.46 | 500 | 259.6630 | 0.6863 |
| 417.7386 | 14.93 | 1000 | 156.6117 | 0.5275 |
| 262.9455 | 22.39 | 1500 | 155.0886 | 0.5006 |
| 178.7715 | 29.85 | 2000 | 155.1077 | 0.4840 |
| 132.448 | 37.31 | 2500 | 163.8623 | 0.4770 |
| 116.3943 | 44.78 | 3000 | 161.5531 | 0.4609 |
| 87.1653 | 52.24 | 3500 | 165.6857 | 0.4597 |
| 80.5606 | 59.7 | 4000 | 157.8652 | 0.4507 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
danurahul/wav2vec2-large-xlsr-pa-IN | 7e4299aaa0b440cb94ebe719781de820c6569f66 | 2021-07-06T01:28:14.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pa-IN",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | danurahul | null | danurahul/wav2vec2-large-xlsr-pa-IN | 11 | null | transformers | 11,051 | ---
language: pa-IN
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: danurahul/wav2vec2-large-xlsr-pa-IN
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pa-IN
type: common_voice
args: pa-IN
metrics:
- name: Test WER
type: wer
value: 54.86
---
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pa-IN", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pa-IN", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 100 %
## Training
The Common Voice `train` and `validation` datasets were used for training as well as validation and testing.
The script used for training can be found https://github.com/rahul-art/huggingface_wav2vec2_punjabi/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Punjabi_ASR_with_%F0%9F%A4%97_Transformers.ipynb |
dbernsohn/algebra_linear_1d | f46dfa8313633510eed5f631d0c7ef2e1afc69c1 | 2021-02-03T07:09:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:algebra_linear_1d",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | dbernsohn | null | dbernsohn/algebra_linear_1d | 11 | null | transformers | 11,052 | # algebra_linear_1d
---
language: en
datasets:
- algebra_linear_1d
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned version on the [math_dataset/algebra_linear_1d](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetalgebra_linear_1d_default_config) for solving **algebra 1d equations** mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/algebra_linear_1d")
```
You can then use this model to solve algebra 1d equations into numbers.
```python
query = "Solve 0 = 1026*x - 2474 + 46592 for x"
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> -41</s>
```
Another examples:
+ Solve 1112*r + 1418*r - 5220 = 587*r - 28536 for r.
+ Answer: -12 Pred: -12
----
+ Solve -119*k + 6*k - 117 - 352 = 322 for k.
+ Answer: -7 Pred: -7
----
+ Solve -547 = -62*t + 437 - 798 for t.
+ Answer: 3 Pred: 3
----
+ Solve 3*j - 3*j + 0*j - 4802 = 98*j for j.
+ Answer: -49 Pred: -49
----
+ Solve 3047*n - 6130*n - 1700 = -3049*n for n.
+ Answer: -50 Pred: -50
----
+ Solve 121*i + 1690 = 76*i - 128*i + 133 for i.
+ Answer: -9 Pred: -9
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
deepset/bert-base-german-cased-oldvocab | 7e2765ba36d00041e567517642ffafb4cb2d06fb | 2021-10-21T12:16:47.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | deepset | null | deepset/bert-base-german-cased-oldvocab | 11 | 3 | transformers | 11,053 | ---
language: de
license: mit
thumbnail: https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png
tags:
- exbert
---
<a href="https://huggingface.co/exbert/?model=bert-base-german-cased">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
# German BERT with old vocabulary
For details see the related [FARM issue](https://github.com/deepset-ai/FARM/issues/60).
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
dexhrestha/Nepali-DistilBERT | 976530f2fdf06166c52eab9258c1f97287d24bcc | 2021-10-30T08:31:53.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | dexhrestha | null | dexhrestha/Nepali-DistilBERT | 11 | null | transformers | 11,054 | DistilBERT model trained on OSCAR nepali corpus from huggingface datasets.
We trained the DistilBERT language model on the OSCAR Nepali corpus and then fine-tuned it for a downstream sentiment analysis task. The sentiment analysis dataset was extracted from Twitter by filtering for Devanagari text and labelled as positive, negative, and neutral. However, since neutral tweets outnumbered the positive and negative ones, we used only positive and negative tweets for ease of training.
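A minimal usage sketch applying the label mapping listed below (the example tweet is illustrative):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="dexhrestha/Nepali-DistilBERT")
label_map = {"LABEL_0": "positive", "LABEL_1": "negative"}
result = classifier("फिल्म एकदम राम्रो थियो")[0]  # roughly: "the film was very good"
print(label_map[result["label"]], round(result["score"], 3))
```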
LABEL_1 = negative
LABEL_0 = positive |
dhlpricing/MyGPT2TG-cased-v1 | 97219ebf67662dcb8ff456b5dd8964dbc9372df9 | 2021-11-19T16:43:44.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | dhlpricing | null | dhlpricing/MyGPT2TG-cased-v1 | 11 | null | transformers | 11,055 | Entry not found |
diegozs97/finetuned-chemprot-seed-1-2000k | e799ecc511e9a847f843e6b6d93f0e7fbb95307f | 2021-12-07T05:26:40.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | diegozs97 | null | diegozs97/finetuned-chemprot-seed-1-2000k | 11 | null | transformers | 11,056 | Entry not found |
dshvadskiy/bert-finetuned-ner-accelerate | 49384111a27f7ccc28f45a4ef1a7a383d8508ec3 | 2022-01-17T18:04:23.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | dshvadskiy | null | dshvadskiy/bert-finetuned-ner-accelerate | 11 | null | transformers | 11,057 | Entry not found |
dsksd/collector_multiwoz | 25afbfcb20c2ab3506ed1e924511560b78cf8aaf | 2021-07-16T07:13:03.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers"
]
| feature-extraction | false | dsksd | null | dsksd/collector_multiwoz | 11 | null | transformers | 11,058 | Entry not found |
elgeish/wav2vec2-base-timit-asr | 039d878c5ab8656771a6d0254c0c0621ca515f34 | 2021-07-06T01:37:40.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:timit_asr",
"transformers",
"audio",
"speech",
"license:apache-2.0"
]
| automatic-speech-recognition | false | elgeish | null | elgeish/wav2vec2-base-timit-asr | 11 | null | transformers | 11,059 | ---
language: en
datasets:
- timit_asr
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
---
# Wav2Vec2-Base-TIMIT
Fine-tuned [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base)
on the [timit_asr dataset](https://huggingface.co/datasets/timit_asr).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model_name = "elgeish/wav2vec2-base-timit-asr"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.eval()
dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
char_translations = str.maketrans({"-": " ", ",": "", ".": "", "?": ""})
def prepare_example(example):
example["speech"], _ = sf.read(example["file"])
example["text"] = example["text"].translate(char_translations)
example["text"] = " ".join(example["text"].split()) # clean up whitespaces
example["text"] = example["text"].lower()
return example
dataset = dataset.map(prepare_example, remove_columns=["file"])
inputs = processor(dataset["speech"], sampling_rate=16000, return_tensors="pt", padding="longest")
with torch.no_grad():
predicted_ids = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted_ids[predicted_ids == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
predicted_transcripts = processor.tokenizer.batch_decode(predicted_ids)
for reference, predicted in zip(dataset["text"], predicted_transcripts):
print("reference:", reference)
print("predicted:", predicted)
print("--")
```
Here's the output:
```
reference: she had your dark suit in greasy wash water all year
predicted: she had your dark suit in greasy wash water all year
--
reference: where were you while we were away
predicted: where were you while we were away
--
reference: cory and trish played tag with beach balls for hours
predicted: tcory and trish played tag with beach balls for hours
--
reference: tradition requires parental approval for under age marriage
predicted: tradition requires parrental proval for under age marrage
--
reference: objects made of pewter are beautiful
predicted: objects made of puder are bautiful
--
reference: don't ask me to carry an oily rag like that
predicted: don't o ask me to carry an oily rag like that
--
reference: cory and trish played tag with beach balls for hours
predicted: cory and trish played tag with beach balls for ours
--
reference: don't ask me to carry an oily rag like that
predicted: don't ask me to carry an oily rag like that
--
reference: don't do charlie's dirty dishes
predicted: don't do chawly's tirty dishes
--
reference: only those story tellers will remain who can imitate the style of the virtuous
predicted: only those story tillaers will remain who can imvitate the style the virtuous
```
## Fine-Tuning Script
You can find the script used to produce this model
[here](https://github.com/elgeish/transformers/blob/cfc0bd01f2ac2ea3a5acc578ef2e204bf4304de7/examples/research_projects/wav2vec2/finetune_base_timit_asr.sh).
|
eliza-dukim/bert-base-finetuned-sts | d77333cf8d6a14fbf5fd3801f51108518a918cb9 | 2021-09-22T11:01:03.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | eliza-dukim | null | eliza-dukim/bert-base-finetuned-sts | 11 | null | transformers | 11,060 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
- f1
model-index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.8756147003619346
- name: F1
type: f1
value: 0.8416666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4115
- Pearsonr: 0.8756
- F1: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
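In the meantime, a minimal sketch for scoring a Korean sentence pair (the sentences are illustrative; whether the head yields a single similarity score or class logits depends on this fine-tune, so treat the printout as raw logits):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "eliza-dukim/bert-base-finetuned-sts"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
inputs = tokenizer("무엇보다도 호스트분들이 너무 친절하셨습니다.",
                   "호스트분들이 정말 친절하셨어요.",
                   return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits)
```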
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7836 | 1.0 | 365 | 0.5507 | 0.8435 | 0.8121 |
| 0.1564 | 2.0 | 730 | 0.4396 | 0.8495 | 0.8136 |
| 0.0989 | 3.0 | 1095 | 0.4115 | 0.8756 | 0.8417 |
| 0.0682 | 4.0 | 1460 | 0.4466 | 0.8746 | 0.8449 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-gl-CV8 | 6fabd3e77199283645e47941427e53cdcf366c37 | 2022-03-23T18:34:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gl",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | emre | null | emre/wav2vec2-xls-r-300m-gl-CV8 | 11 | null | transformers | 11,061 | ---
license: apache-2.0
language: gl
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-gl-CV8
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice gl
type: common_voice
args: gl
metrics:
- name: Test WER
type: wer
value: 0.208
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gl
metrics:
- name: Test WER
type: wer
value: 22.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: gl
metrics:
- name: Test WER
type: wer
value: 47.82
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: gl
metrics:
- name: Test WER
type: wer
value: 50.8
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gl-CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- Wer: 0.2080
## Model description
More information needed
## Intended uses & limitations
More information needed
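In the meantime, a minimal inference sketch in the style of the other wav2vec2 cards (the audio path is illustrative, and the file is assumed to be mono):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec2-xls-r-300m-gl-CV8")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec2-xls-r-300m-gl-CV8")
speech, sr = torchaudio.load("exemplo_galego.wav")  # illustrative path
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze()
inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```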
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9427 | 4.9 | 500 | 2.8801 | 1.0 |
| 2.1594 | 9.8 | 1000 | 0.4092 | 0.4001 |
| 0.7332 | 14.71 | 1500 | 0.2151 | 0.2080 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
emrecan/distilbert-base-turkish-cased-allnli_tr | f4efcd8d8c47418033041fad95ff351ebb62ff01 | 2021-12-02T14:57:35.000Z | [
"pytorch",
"distilbert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
]
| zero-shot-classification | false | emrecan | null | emrecan/distilbert-base-turkish-cased-allnli_tr | 11 | null | transformers | 11,062 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-turkish-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the [nli_tr](https://huggingface.co/datasets/nli_tr) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6481
- Accuracy: 0.7381
## Model description
More information needed
## Intended uses & limitations
More information needed
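In the meantime, a minimal zero-shot sketch using the widget example above:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="emrecan/distilbert-base-turkish-cased-allnli_tr")
print(classifier("Dolar yükselmeye devam ediyor.",
                 candidate_labels=["ekonomi", "siyaset", "spor"]))
```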
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.94 | 0.03 | 1000 | 0.9074 | 0.5813 |
| 0.8102 | 0.07 | 2000 | 0.8802 | 0.5949 |
| 0.7737 | 0.1 | 3000 | 0.8491 | 0.6155 |
| 0.7576 | 0.14 | 4000 | 0.8283 | 0.6261 |
| 0.7286 | 0.17 | 5000 | 0.8150 | 0.6362 |
| 0.7162 | 0.2 | 6000 | 0.7998 | 0.6400 |
| 0.7092 | 0.24 | 7000 | 0.7830 | 0.6565 |
| 0.6962 | 0.27 | 8000 | 0.7653 | 0.6629 |
| 0.6876 | 0.31 | 9000 | 0.7630 | 0.6687 |
| 0.6778 | 0.34 | 10000 | 0.7475 | 0.6739 |
| 0.6737 | 0.37 | 11000 | 0.7495 | 0.6781 |
| 0.6712 | 0.41 | 12000 | 0.7350 | 0.6826 |
| 0.6559 | 0.44 | 13000 | 0.7274 | 0.6897 |
| 0.6493 | 0.48 | 14000 | 0.7248 | 0.6902 |
| 0.6483 | 0.51 | 15000 | 0.7263 | 0.6858 |
| 0.6445 | 0.54 | 16000 | 0.7070 | 0.6978 |
| 0.6467 | 0.58 | 17000 | 0.7083 | 0.6981 |
| 0.6332 | 0.61 | 18000 | 0.6996 | 0.7004 |
| 0.6288 | 0.65 | 19000 | 0.6979 | 0.6978 |
| 0.6308 | 0.68 | 20000 | 0.6912 | 0.7040 |
| 0.622 | 0.71 | 21000 | 0.6904 | 0.7092 |
| 0.615 | 0.75 | 22000 | 0.6872 | 0.7094 |
| 0.6186 | 0.78 | 23000 | 0.6877 | 0.7075 |
| 0.6183 | 0.82 | 24000 | 0.6818 | 0.7111 |
| 0.6115 | 0.85 | 25000 | 0.6856 | 0.7122 |
| 0.608 | 0.88 | 26000 | 0.6697 | 0.7179 |
| 0.6071 | 0.92 | 27000 | 0.6727 | 0.7181 |
| 0.601 | 0.95 | 28000 | 0.6798 | 0.7118 |
| 0.6018 | 0.99 | 29000 | 0.6854 | 0.7071 |
| 0.5762 | 1.02 | 30000 | 0.6697 | 0.7214 |
| 0.5507 | 1.05 | 31000 | 0.6710 | 0.7185 |
| 0.5575 | 1.09 | 32000 | 0.6709 | 0.7226 |
| 0.5493 | 1.12 | 33000 | 0.6659 | 0.7191 |
| 0.5464 | 1.15 | 34000 | 0.6709 | 0.7232 |
| 0.5595 | 1.19 | 35000 | 0.6642 | 0.7220 |
| 0.5446 | 1.22 | 36000 | 0.6709 | 0.7202 |
| 0.5524 | 1.26 | 37000 | 0.6751 | 0.7148 |
| 0.5473 | 1.29 | 38000 | 0.6642 | 0.7209 |
| 0.5477 | 1.32 | 39000 | 0.6662 | 0.7223 |
| 0.5522 | 1.36 | 40000 | 0.6586 | 0.7227 |
| 0.5406 | 1.39 | 41000 | 0.6602 | 0.7258 |
| 0.54 | 1.43 | 42000 | 0.6564 | 0.7273 |
| 0.5458 | 1.46 | 43000 | 0.6780 | 0.7213 |
| 0.5448 | 1.49 | 44000 | 0.6561 | 0.7235 |
| 0.5418 | 1.53 | 45000 | 0.6600 | 0.7253 |
| 0.5408 | 1.56 | 46000 | 0.6616 | 0.7274 |
| 0.5451 | 1.6 | 47000 | 0.6557 | 0.7283 |
| 0.5385 | 1.63 | 48000 | 0.6583 | 0.7295 |
| 0.5261 | 1.66 | 49000 | 0.6468 | 0.7325 |
| 0.5364 | 1.7 | 50000 | 0.6447 | 0.7329 |
| 0.5294 | 1.73 | 51000 | 0.6429 | 0.7320 |
| 0.5332 | 1.77 | 52000 | 0.6508 | 0.7272 |
| 0.5274 | 1.8 | 53000 | 0.6492 | 0.7326 |
| 0.5286 | 1.83 | 54000 | 0.6470 | 0.7318 |
| 0.5359 | 1.87 | 55000 | 0.6393 | 0.7354 |
| 0.5366 | 1.9 | 56000 | 0.6445 | 0.7367 |
| 0.5296 | 1.94 | 57000 | 0.6413 | 0.7313 |
| 0.5346 | 1.97 | 58000 | 0.6393 | 0.7315 |
| 0.5264 | 2.0 | 59000 | 0.6448 | 0.7357 |
| 0.4857 | 2.04 | 60000 | 0.6640 | 0.7335 |
| 0.4888 | 2.07 | 61000 | 0.6612 | 0.7318 |
| 0.4964 | 2.11 | 62000 | 0.6516 | 0.7337 |
| 0.493 | 2.14 | 63000 | 0.6503 | 0.7356 |
| 0.4961 | 2.17 | 64000 | 0.6519 | 0.7348 |
| 0.4847 | 2.21 | 65000 | 0.6517 | 0.7327 |
| 0.483 | 2.24 | 66000 | 0.6555 | 0.7310 |
| 0.4857 | 2.28 | 67000 | 0.6525 | 0.7312 |
| 0.484 | 2.31 | 68000 | 0.6444 | 0.7342 |
| 0.4792 | 2.34 | 69000 | 0.6508 | 0.7330 |
| 0.488 | 2.38 | 70000 | 0.6513 | 0.7344 |
| 0.472 | 2.41 | 71000 | 0.6547 | 0.7346 |
| 0.4872 | 2.45 | 72000 | 0.6500 | 0.7342 |
| 0.4782 | 2.48 | 73000 | 0.6585 | 0.7358 |
| 0.481 | 2.51 | 74000 | 0.6477 | 0.7356 |
| 0.4822 | 2.55 | 75000 | 0.6587 | 0.7346 |
| 0.4728 | 2.58 | 76000 | 0.6572 | 0.7340 |
| 0.4841 | 2.62 | 77000 | 0.6443 | 0.7374 |
| 0.4885 | 2.65 | 78000 | 0.6494 | 0.7362 |
| 0.4752 | 2.68 | 79000 | 0.6509 | 0.7382 |
| 0.4883 | 2.72 | 80000 | 0.6457 | 0.7371 |
| 0.4888 | 2.75 | 81000 | 0.6497 | 0.7364 |
| 0.4844 | 2.79 | 82000 | 0.6481 | 0.7376 |
| 0.4833 | 2.82 | 83000 | 0.6451 | 0.7389 |
| 0.48 | 2.85 | 84000 | 0.6423 | 0.7373 |
| 0.4832 | 2.89 | 85000 | 0.6477 | 0.7357 |
| 0.4805 | 2.92 | 86000 | 0.6464 | 0.7379 |
| 0.4775 | 2.96 | 87000 | 0.6477 | 0.7380 |
| 0.4843 | 2.99 | 88000 | 0.6481 | 0.7381 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
espejelomar/BETO_Clasificar_Tweets_Mexicano | 76b10cdd268c03b4882f4ce6a65b5b2bbb77c1c8 | 2022-02-15T17:42:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | espejelomar | null | espejelomar/BETO_Clasificar_Tweets_Mexicano | 11 | null | transformers | 11,063 | Entry not found |
ewriji/heil-A.412C-classification | da2c8b313ddb15219b1420eb80f5b591c2efa67d | 2021-12-17T01:11:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ewriji | null | ewriji/heil-A.412C-classification | 11 | null | transformers | 11,064 | Entry not found |
fabriceyhc/bert-base-uncased-dbpedia_14 | 1fbfc3deaa280fcf16372746ca21363313357376 | 2021-09-21T00:56:12.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:dbpedia_14",
"transformers",
"generated_from_trainer",
"sibyl",
"license:apache-2.0",
"model-index"
]
| text-classification | false | fabriceyhc | null | fabriceyhc/bert-base-uncased-dbpedia_14 | 11 | null | transformers | 11,065 | ---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- dbpedia_14
metrics:
- accuracy
model-index:
- name: bert-base-uncased-dbpedia_14
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: dbpedia_14
type: dbpedia_14
args: dbpedia_14
metrics:
- name: Accuracy
type: accuracy
value: 0.9902857142857143
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-dbpedia_14
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the dbpedia_14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0547
- Accuracy: 0.9903
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 34650
- training_steps: 346500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7757 | 0.03 | 2000 | 0.2732 | 0.9880 |
| 0.1002 | 0.06 | 4000 | 0.0620 | 0.9891 |
| 0.0547 | 0.09 | 6000 | 0.0723 | 0.9879 |
| 0.0558 | 0.12 | 8000 | 0.0678 | 0.9875 |
| 0.0534 | 0.14 | 10000 | 0.0554 | 0.9896 |
| 0.0632 | 0.17 | 12000 | 0.0670 | 0.9888 |
| 0.0612 | 0.2 | 14000 | 0.0733 | 0.9873 |
| 0.0667 | 0.23 | 16000 | 0.0623 | 0.9896 |
| 0.0636 | 0.26 | 18000 | 0.0836 | 0.9868 |
| 0.0705 | 0.29 | 20000 | 0.0776 | 0.9855 |
| 0.0726 | 0.32 | 22000 | 0.0805 | 0.9861 |
| 0.0778 | 0.35 | 24000 | 0.0713 | 0.9870 |
| 0.0713 | 0.38 | 26000 | 0.1277 | 0.9805 |
| 0.0965 | 0.4 | 28000 | 0.0810 | 0.9855 |
| 0.0881 | 0.43 | 30000 | 0.0910 | 0.9850 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
|
facebook/wav2vec2-base-es-voxpopuli | 2fdabe011433c833551e92bda35594df0a18a2ee | 2021-07-06T01:53:59.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"es",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
]
| automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-es-voxpopuli | 11 | null | transformers | 11,066 | ---
language: es
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the es unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
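As a quick sanity check before fine-tuning, the pretrained encoder can also be loaded for feature extraction. The snippet below is only a minimal sketch — it assumes the checkpoint ships a preprocessor config, and the zero-filled input is a stand-in for real speech sampled at 16kHz:
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-es-voxpopuli")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-es-voxpopuli")

# placeholder input: one second of silence; use real speech sampled at 16kHz
speech = torch.zeros(16_000).numpy()
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, 768)
print(hidden_states.shape)
```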
|
facebook/wav2vec2-base-it-voxpopuli | 593e75291702d2ca8d404d63b2b47e6f028f8f39 | 2021-07-06T01:54:46.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"it",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"license:cc-by-nc-4.0"
]
| automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-it-voxpopuli | 11 | null | transformers | 11,067 | ---
language: it
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the it unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
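As a quick sanity check before fine-tuning, the pretrained encoder can be loaded for feature extraction. This is a minimal sketch, assuming the checkpoint ships a preprocessor config; the zero-filled input stands in for real speech sampled at 16kHz:
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-it-voxpopuli")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-it-voxpopuli")

speech = torch.zeros(16_000).numpy()  # placeholder for real speech sampled at 16kHz
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, 768)
print(hidden_states.shape)
```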
|
flax-community/arabic-t5-small | 887b7a5f66bc2121495ed99d662449048b3c71f1 | 2021-07-29T23:37:03.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"ar",
"dataset:mc4",
"dataset:oscar",
"dataset:arabic_billion_words",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | flax-community | null | flax-community/arabic-t5-small | 11 | 1 | transformers | 11,068 | ---
language:
- ar
datasets:
- mc4
- oscar
- arabic_billion_words
---
# arabic-t5-small
This is a T5v1.1 (small) trained on the concatenation of the Arabic Billion Words corpus and the Arabic subsets of the mC4 and Oscar datasets.
The model could only be trained on about `10%` of the whole dataset due to time limitations. This is equivalent to `22'000` steps or about `4.3` billion tokens.
## Training parameters
| | |
| :-------------------: | :-----------: |
| Training batch size | `384` |
| Evaluation batch size | `768` |
| learning rate | `1e-2` |
| dtype | `jnp.float32` |
## Preprocessing and the tokenizer
We tried to keep the preprocessing to a bare minimum. We only replaced URLs, emails and social media user mentions with fixed tokens.
Contrary to other pretrained Arabic LMs, we decided to not strip the Arabic diacritics and to keep them part of the vocabulary.
The tokenizer was trained on `5%` of the training set, with a vocabulary size of `64'000`.
For more details about preprocessing, check the [tokenizer code](https://huggingface.co/flax-community/arabic-t5-small/blob/main/t5_tokenizer_model.py)
## Data
The model was trained on the concatenation of the Arabic Billion Words corpus and the Arabic subsets of the mC4 and Oscar datasets.
A random `0.1%` subset of the data was reserved for evaluation and the rest for training.
## Results
| | |
| :-----------------: | :-----------: |
| Evaluation accuracy | `56.84%` |
| Evaluation Loss | `2.423` |
| Training Loss | `2.392` |
| Training Time | `22h 23m 51s` |
## Note for finetuning
This model was pretrained with dropout turned off, so the default `dropout_rate` in the model config is `0`.
To fine-tune the model, dropout should be turned back on, like this:
```python
model = T5ForConditionalGeneration.from_pretrained("flax-community/arabic-t5-small", dropout_rate=0.1)
```
or,
```python
model = AutoModelForSeq2SeqLM.from_pretrained("flax-community/arabic-t5-small", dropout_rate=0.1)
```
|
gagan3012/k2t-tiny | 423c0d1dee6ef6b300cd6f1abac11a7d845dd4a8 | 2021-09-22T08:27:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WebNLG",
"dataset:Dart",
"transformers",
"keytotext",
"k2t-tiny",
"Keywords to Sentences",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | gagan3012 | null | gagan3012/k2t-tiny | 11 | null | transformers | 11,069 | ---
language: en
thumbnail: Keywords to Sentences
tags:
- keytotext
- k2t-tiny
- Keywords to Sentences
license: mit
datasets:
- WebNLG
- Dart
metrics:
- NLG
---
# keytotext

The idea is to build a model that takes keywords as input and generates sentences as output.
### Keytotext is powered by Huggingface 🤗
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```
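If you prefer calling the checkpoint through `transformers` directly instead of the `keytotext` helper, the sketch below illustrates the idea. Note that the exact keyword format the model was trained on (separator and ordering) is an assumption here, so prefer the `keytotext` pipeline for faithful results:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gagan3012/k2t-tiny")
model = AutoModelForSeq2SeqLM.from_pretrained("gagan3012/k2t-tiny")

# assumption: keywords are joined with spaces, as in the keytotext examples
keywords = ["India", "capital", "New Delhi"]
inputs = tokenizer(" ".join(keywords), return_tensors="pt")

outputs = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```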

## UI:
UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)

|
gagan3012/pickuplines | 3078b63c12a4c9f6cb5c262348b56873e2e3e83f | 2021-10-18T19:53:36.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | gagan3012 | null | gagan3012/pickuplines | 11 | null | transformers | 11,070 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: pickuplines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pickuplines
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7873
## Model description
More information needed
## Intended uses & limitations
More information needed
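No official usage example is provided, but since this is a fine-tuned GPT-2, a standard text-generation pipeline should work. A minimal sketch (the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="gagan3012/pickuplines")
print(generator("Are you a", max_length=30, num_return_sequences=3, do_sample=True))
```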
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
gbade786/distilbert-base-uncased-finetuned-emotion | 9f725e4da377629d30bea790a34946abe54d9ff9 | 2022-01-14T14:44:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | gbade786 | null | gbade786/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,071 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9233262687967644
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
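No usage example is provided, but the checkpoint should load with a standard text-classification pipeline. A minimal sketch — note that the emitted label names depend on the `id2label` mapping saved in the config:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gbade786/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled with how this turned out!"))
```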
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8217 | 1.0 | 250 | 0.3137 | 0.903 | 0.8999 |
| 0.2484 | 2.0 | 500 | 0.2180 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
gchhablani/wav2vec2-large-xlsr-pt | e32f8fda6733db09e876e3a8059fc7b441197cf1 | 2021-07-06T05:23:19.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | gchhablani | null | gchhablani/wav2vec2-large-xlsr-pt | 11 | null | transformers | 11,072 | ---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Portuguese by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 17.22
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-pt")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-pt")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-pt")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-pt")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\;\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluate the model batch-by-batch on the preprocessed dataset
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.22 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://github.com/jqueguiner/wav2vec2-sprint/blob/main/run_common_voice.py).
The parameters passed were:
```bash
#!/usr/bin/env bash
python run_common_voice.py \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="pt" \
--output_dir=/workspace/output_models/pt/wav2vec2-large-xlsr-pt \
--cache_dir=/workspace/data \
--overwrite_output_dir \
--num_train_epochs="30" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--evaluation_strategy="steps" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--fp16 \
--freeze_feature_extractor \
--save_steps="500" \
--eval_steps="500" \
--save_total_limit="1" \
--logging_steps="500" \
--group_by_length \
--feat_proj_dropout="0.0" \
--layerdrop="0.1" \
--gradient_checkpointing \
--do_train --do_eval \
```
Notebook containing the evaluation can be found [here](https://colab.research.google.com/drive/14e-zNK_5pm8EMY9EbeZerpHx7WsGycqG?usp=sharing). |
ghadeermobasher/bc4chemd-imbalanced-biobert-base-casesd-v1.1 | 5cbf6f489b82f0dd35cbc1433d2445c63cdd7930 | 2022-02-04T07:42:48.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/bc4chemd-imbalanced-biobert-base-casesd-v1.1 | 11 | null | transformers | 11,073 | Entry not found |
google/t5-xxl-ssm-wq | e9f808d09b78ef1bb19e1186a7884ef42234d65d | 2020-12-07T12:35:31.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:web_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-xxl-ssm-wq | 11 | 1 | transformers | 11,074 | ---
language: en
datasets:
- c4
- wikipedia
- web_questions
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Web Questions (WQ)](https://huggingface.co/datasets/web_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Web Questions (WQ)](https://huggingface.co/datasets/web_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Web Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-wq|44.7|
|**T5-xxl**|**https://huggingface.co/google/t5-xxl-ssm-wq**|**43.5**|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xxl-ssm-wq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-xxl-ssm-wq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
google/tapas-mini-finetuned-tabfact | 2ee10173c30cf3adb08636740d75def8a6737987 | 2021-11-29T13:06:50.000Z | [
"pytorch",
"tf",
"tapas",
"text-classification",
"en",
"dataset:tab_fact",
"arxiv:2010.00571",
"arxiv:2004.02349",
"transformers",
"sequence-classification",
"license:apache-2.0"
]
| text-classification | false | google | null | google/tapas-mini-finetuned-tabfact | 11 | null | transformers | 11,075 | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS mini model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_mini`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly train this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
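For illustration only, a minimal sketch of fact verification with this checkpoint is shown below. The label mapping is an assumption — inspect `model.config.id2label` on the actual checkpoint before relying on it:
```python
import pandas as pd
from transformers import TapasTokenizer, TapasForSequenceClassification

model_name = "google/tapas-mini-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)  # older releases additionally require torch-scatter

# TAPAS expects the table as a pandas DataFrame with string-valued cells
table = pd.DataFrame({"City": ["Paris", "Rome"], "Population": ["2140000", "2870000"]})
sentence = "Paris has a larger population than Rome."

inputs = tokenizer(table=table, queries=[sentence], padding="max_length", return_tensors="pt")
logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()  # assumed mapping: 1 = supported, 0 = refuted
print(model.config.id2label[predicted_class])
```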
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` |
hadxu/distilbert-base-uncased-finetuned-emotion | ab5baf280dd140d18a2cd43b5f1fc2d02bb93804 | 2022-02-10T11:20:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | hadxu | null | hadxu/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,076 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.9202797627524772
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2307
- Accuracy: 0.92
- F1: 0.9203
## Model description
More information needed
## Intended uses & limitations
More information needed
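No usage example is provided, but a standard text-classification pipeline should work. A minimal sketch using `return_all_scores=True` (supported on the Transformers version listed below) to get the full emotion distribution; label names depend on the saved config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hadxu/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,
)
print(classifier("I can't believe we finally made it!"))
```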
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8397 | 1.0 | 250 | 0.3345 | 0.9045 | 0.9007 |
| 0.2544 | 2.0 | 500 | 0.2307 | 0.92 | 0.9203 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
howey/electra-large-qqp | 1fda2eeefe4a766ebf81e9e6f62250f722a25a9b | 2021-07-26T02:47:52.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | howey | null | howey/electra-large-qqp | 11 | 1 | transformers | 11,077 | Entry not found |
huawei-noah/TernaryBERT_MNLI | 338819666c7bb32e6f5b39136e253fc918833143 | 2020-10-16T03:07:54.000Z | [
"pytorch",
"transformers"
]
| null | false | huawei-noah | null | huawei-noah/TernaryBERT_MNLI | 11 | null | transformers | 11,078 | Entry not found |
huggingartists/arctic-monkeys | ffd5168fb1837f5ad628e00c49c1158f71e4a676 | 2021-10-26T17:28:49.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/arctic-monkeys",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/arctic-monkeys | 11 | null | transformers | 11,079 | ---
language: en
datasets:
- huggingartists/arctic-monkeys
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/12c27f4fbb06ef32dc1c1e432098f447.570x570x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Arctic Monkeys</div>
<a href="https://genius.com/artists/arctic-monkeys">
<div style="text-align: center; font-size: 14px;">@arctic-monkeys</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Arctic Monkeys.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/arctic-monkeys).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/arctic-monkeys")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1x4ii6qz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Arctic Monkeys's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/bmnqvn53) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/bmnqvn53/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/arctic-monkeys')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/arctic-monkeys")
model = AutoModelWithLMHead.from_pretrained("huggingartists/arctic-monkeys")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/miyagi | 1caa528be4d25fbf1a5ff48e3c359b89d2f6a174 | 2022-07-04T16:58:30.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/miyagi",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/miyagi | 11 | null | transformers | 11,080 | ---
language: en
datasets:
- huggingartists/miyagi
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b6e783ce8d8c51516715e291dbc87535.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Miyagi</div>
<a href="https://genius.com/artists/miyagi">
<div style="text-align: center; font-size: 14px;">@miyagi</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Miyagi.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/miyagi).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/miyagi")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1c4sny4a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Miyagi's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1v51pw0u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1v51pw0u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/miyagi')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/miyagi")
model = AutoModelWithLMHead.from_pretrained("huggingartists/miyagi")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/aoc | f3b343ed21a095e9fbe1e4926129be99211a6ba7 | 2022-07-22T22:26:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/aoc | 11 | null | transformers | 11,081 | ---
language: en
thumbnail: http://www.huggingtweets.com/aoc/1658528812949/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/923274881197895680/AbHcStkl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alexandria Ocasio-Cortez</div>
<div style="text-align: center; font-size: 14px;">@aoc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alexandria Ocasio-Cortez.
| Data | Alexandria Ocasio-Cortez |
| --- | --- |
| Tweets downloaded | 3221 |
| Retweets | 1253 |
| Short tweets | 126 |
| Tweets kept | 1842 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3i05suuv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aoc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gjmi5b8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gjmi5b8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aoc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/bichebuni | 454afafb61458fed65cafb9e08f767c9653e9371 | 2021-05-21T20:37:06.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/bichebuni | 11 | null | transformers | 11,082 | ---
language: en
thumbnail: https://www.huggingtweets.com/bichebuni/1614096170963/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1356414477143519232/H2T46KhD_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Ellie 🐰 🤖 AI Bot </div>
<div style="font-size: 15px">@bichebuni bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@bichebuni's tweets](https://twitter.com/bichebuni).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1578 |
| Retweets | 559 |
| Short tweets | 216 |
| Tweets kept | 803 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jluupd2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bichebuni's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2a0ttba9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2a0ttba9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bichebuni')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sigittanew | 0a4b9c3f6c78a2a1a0e315070edb6d0db03ec845 | 2021-05-22T22:54:41.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/sigittanew | 11 | null | transformers | 11,083 | ---
language: en
thumbnail: https://www.huggingtweets.com/sigittanew/1617902420104/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1315307002999058432/Z4YtauZI_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">☃️Sigitta🎅 🤖 AI Bot </div>
<div style="font-size: 15px">@sigittanew bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@sigittanew's tweets](https://twitter.com/sigittanew).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3216 |
| Retweets | 1319 |
| Short tweets | 109 |
| Tweets kept | 1788 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ecj53ccd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sigittanew's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/jm7ev1c0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/jm7ev1c0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sigittanew')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hyunwoongko/brainbert-base-ko-kornli | 2bceaf8392c8c1427653f924da8a146bb315a993 | 2022-01-07T06:35:58.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | hyunwoongko | null | hyunwoongko/brainbert-base-ko-kornli | 11 | null | transformers | 11,084 | Entry not found |
iarfmoose/wav2vec2-large-xlsr-sorbian | ad782fe1576f3a1103e4b6d419ba8d6a9eb9f772 | 2021-07-06T06:01:40.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"hsb",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | iarfmoose | null | iarfmoose/wav2vec2-large-xlsr-sorbian | 11 | null | transformers | 11,085 | ---
language: hsb
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Sorbian by Adam Montgomerie
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hsb
type: common_voice
args: hsb
metrics:
- name: Test WER
type: wer
value: 41.74
---
# Wav2Vec2-Large-XLSR-53-Sorbian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Sorbian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hsb", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Sorbian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hsb", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model = Wav2Vec2ForCTC.from_pretrained("iarfmoose/wav2vec2-large-xlsr-sorbian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\—\¬\⅛]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 41.74 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Sorbian/XLSR_Sorbian.ipynb)
A notebook of the evaluation script can be found [here](https://github.com/AMontgomerie/wav2vec2-xlsr/blob/main/Sorbian/wav2vec2_hsb_eval.ipynb) |
ikevin98/bert-base-uncased-finetuned-sst2-sst2-membership | b3ca0fa795d032cadc71cbdf6ea978501549a0eb | 2021-09-04T20:10:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | ikevin98 | null | ikevin98/bert-base-uncased-finetuned-sst2-sst2-membership | 11 | null | transformers | 11,086 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model_index:
name: bert-base-uncased-finetuned-sst2-sst2-membership
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2-sst2-membership
This model is a fine-tuned version of [ikevin98/bert-base-uncased-finetuned-sst2](https://huggingface.co/ikevin98/bert-base-uncased-finetuned-sst2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3100
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
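No usage example is provided, but the checkpoint loads as an ordinary sequence classifier. A minimal sketch — the label semantics (e.g. member vs. non-member of the fine-tuning set) are undocumented, so inspect `model.config.id2label` before interpreting the output:
```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="ikevin98/bert-base-uncased-finetuned-sst2-sst2-membership",
)
print(detector("a stirring, funny and finally transporting re-imagining of beauty and the beast"))
```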
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5125 | 1.0 | 3813 | 1.3100 | 1.0 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
|
it5/mt5-small-informal-to-formal | 71f9ee387b8f0b1b863de70377866c4f265ffea6 | 2022-03-09T07:49:29.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"it",
"dataset:yahoo/xformal_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"style-transfer",
"formality-style-transfer",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/mt5-small-informal-to-formal | 11 | null | transformers | 11,087 | ---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "maronn qualcuno mi spieg' CHECCOSA SUCCEDE?!?!"
- text: "wellaaaaaaa, ma fraté sei proprio troppo simpatiko, grazieeee!!"
- text: "nn capisco xke tt i ragazzi lo fanno"
- text: "IT5 è SUPERMEGA BRAVISSIMO a capire tt il vernacolo italiano!!!"
metrics:
- rouge
- bertscore
model-index:
- name: mt5-small-informal-to-formal
results:
- task:
type: formality-style-transfer
name: "Informal-to-formal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.638
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.446
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.620
name: "Avg. Test RougeL"
- type: bertscore
value: 0.684
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "17g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
---
# mT5 Small for Informal-to-formal Style Transfer 🧐
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
i2f = pipeline("text2text-generation", model='it5/mt5-small-informal-to-formal')
i2f("nn capisco xke tt i ragazzi lo fanno")
>>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-informal-to-formal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
jcblaise/electra-tagalog-small-cased-discriminator | 79952fd321b77d1292385822eeaea2e7bd4342da | 2021-11-12T03:23:59.000Z | [
"pytorch",
"electra",
"pretraining",
"tl",
"transformers",
"tagalog",
"filipino",
"license:gpl-3.0"
]
| null | false | jcblaise | null | jcblaise/electra-tagalog-small-cased-discriminator | 11 | null | transformers | 11,088 | ---
language: tl
tags:
- electra
- tagalog
- filipino
license: gpl-3.0
inference: false
---
**Deprecation Notice**
This model is deprecated. New Filipino Transformer models trained on much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
---
# ELECTRA Tagalog Small Cased Discriminator
Tagalog ELECTRA model pretrained on a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This is the discriminator model, i.e., the main Transformer used when fine-tuning on downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models. A minimal loading sketch is shown below.
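The following is a minimal loading sketch, not an official recipe: it assumes the standard ELECTRA discriminator interface in `transformers`, and `num_labels=2` is a placeholder for whatever downstream task you attach.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the discriminator backbone with a freshly initialized classification head.
tokenizer = AutoTokenizer.from_pretrained("jcblaise/electra-tagalog-small-cased-discriminator")
model = AutoModelForSequenceClassification.from_pretrained(
    "jcblaise/electra-tagalog-small-cased-discriminator",
    num_labels=2,  # placeholder: set to the label count of your downstream task
)

inputs = tokenizer("Magandang umaga sa inyong lahat!", return_tensors="pt")
logits = model(**inputs).logits  # head is untrained; fine-tune before relying on these
```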
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@inproceedings{cruz2021exploiting,
title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets},
author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth},
booktitle={Pacific Rim International Conference on Artificial Intelligence},
pages={86--99},
year={2021},
organization={Springer}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
|
joonhan/roberta-roa | df85c06271eaa6bc4ea2a601d7b0301575b21109 | 2021-10-08T02:05:28.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | joonhan | null | joonhan/roberta-roa | 11 | null | transformers | 11,089 | * Fine-tunning "KLUE/roberta-large" model For CER(Company Entity Recognition) With Custom Dataset
* Custom Datasets are composed of news data
```python
# BIO tag set: PER = person, ORG = organization, COM = company,
# LOC = location, DAT = date, TIM = time, QNT = quantity
label_list = ['O',"B-PER","I-PER","B-ORG","I-ORG","B-COM","I-COM","B-LOC","I-LOC","B-DAT","I-DAT","B-TIM","I-TIM","B-QNT","I-QNT"]
# String ids aligned with label_list indices
refer_list = ['0','1','2','3','4','5','6','7','8','9','10','11','12','13','14']
```
- e.g., "B-PER" maps to 1 and "B-COM" maps to 5 in refer_list. An inference sketch follows.
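A hedged inference sketch: it assumes the checkpoint ships its tokenizer and fine-tuned token-classification head, and the sample sentence is a hypothetical news-style input, not taken from the training data.
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

label_list = ['O',"B-PER","I-PER","B-ORG","I-ORG","B-COM","I-COM",
              "B-LOC","I-LOC","B-DAT","I-DAT","B-TIM","I-TIM","B-QNT","I-QNT"]

tokenizer = AutoTokenizer.from_pretrained("joonhan/roberta-roa")
model = AutoModelForTokenClassification.from_pretrained("joonhan/roberta-roa")

text = "삼성전자가 3월 서울에서 신제품을 공개했다."  # hypothetical news sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-token to its predicted BIO label
for token, idx in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                      logits.argmax(dim=-1)[0].tolist()):
    print(token, label_list[idx])
``` |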
joaoalvarenga/model-sid-voxforge-cetuc-2 | 3131f140dd5427fc06cbb81e69ce5e8472e7c328 | 2021-07-06T08:45:23.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/model-sid-voxforge-cetuc-2 | 11 | null | transformers | 11,090 | Entry not found |
joaoalvarenga/wav2vec2-cetuc-sid-voxforge-mls-0-new | cba5734fc9160b720e910be953aa58cb93f776e1 | 2021-07-12T12:26:36.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | joaoalvarenga | null | joaoalvarenga/wav2vec2-cetuc-sid-voxforge-mls-0-new | 11 | null | transformers | 11,091 | Entry not found |
jsgao/bart-eli5c | 8a480cb69f41b15e80bd839208f10f6843e9ae27 | 2021-12-14T21:09:14.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:eli5_category",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | jsgao | null | jsgao/bart-eli5c | 11 | null | transformers | 11,092 | ---
language: en
license: mit
datasets:
- eli5_category
---
Answer-generation model trained on the [ELI5-Category Dataset](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/). A hedged usage sketch follows.
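This is a minimal generation sketch, assuming the checkpoint follows the standard BART seq2seq interface. The exact input format used during training (for instance, whether retrieved support documents are concatenated to the question) is not documented in this card, so the plain-question input below is an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jsgao/bart-eli5c")
model = AutoModelForSeq2SeqLM.from_pretrained("jsgao/bart-eli5c")

question = "Why is the sky blue?"  # hypothetical ELI5-style question
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
``` |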
juliusco/biobert-base-cased-v1.1-squad-finetuned-covdrobert | 4a76b5e23ecc89d08691bf22355009dc29190d57 | 2021-12-14T10:28:15.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | juliusco | null | juliusco/biobert-base-cased-v1.1-squad-finetuned-covdrobert | 11 | null | transformers | 11,093 | ---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: biobert-base-cased-v1.1-squad-finetuned-covdrobert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.1-squad-finetuned-covdrobert
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1-squad](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-squad) on the covid_qa_deepset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3959
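For a quick sanity check, the checkpoint can be queried through the standard `question-answering` pipeline; the question and context below are hypothetical examples rather than items from the evaluation set.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="juliusco/biobert-base-cased-v1.1-squad-finetuned-covdrobert",
)
result = qa(
    question="What virus causes COVID-19?",  # hypothetical query
    context="COVID-19 is an infectious disease caused by the SARS-CoV-2 coronavirus.",
)
print(result["answer"], result["score"])
```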
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 486 | 0.3787 |
| 0.161 | 2.0 | 972 | 0.3959 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jwa018/norwegian_parliament | fe3b549b8f5b35fec785ffb785f6d904971f9850 | 2021-10-24T21:58:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | jwa018 | null | jwa018/norwegian_parliament | 11 | null | transformers | 11,094 | Entry not found |
k-partha/decision_style_bert_bio | 49717e1acd15bd62878039b99d9e4709e860c102 | 2022-01-29T03:36:37.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"transformers"
]
| text-classification | false | k-partha | null | k-partha/decision_style_bert_bio | 11 | null | transformers | 11,095 | Rates Twitter biographies on decision-making preference: Judging (focused, goal-oriented decision strategy) or Prospecting (open-ended, explorative strategy). Roughly corresponds to [conscientiousness](https://en.wikipedia.org/wiki/Conscientiousness)
Go to your Twitter profile, copy your biography and paste in the inference widget, remove any URLs and press hit!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label.
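For programmatic use, a sketch along these lines should work, assuming the usual `text-classification` pipeline interface; note that the label strings the checkpoint returns (e.g., `Judging`/`Prospecting` versus generic `LABEL_0`/`LABEL_1`) are not documented here, and the sample biography is made up.
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="k-partha/decision_style_bert_bio",
    return_all_scores=True,  # surface the continuous score, not just the top label
)
print(clf("Engineer and planner. Every project gets a spreadsheet."))  # hypothetical bio
```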
Have fun!
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402). |
k-partha/extrabert_bio | 36d571d7fd0d01e2684f5272789c98d8521b99f1 | 2022-01-29T03:36:11.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2109.06402",
"transformers"
]
| text-classification | false | k-partha | null | k-partha/extrabert_bio | 11 | null | transformers | 11,096 | Classifies Twitter biographies as either introverts or extroverts.
Go to your Twitter profile, copy your biography, paste it into the inference widget, remove any URLs, and hit Compute!
Trained on self-described personality labels. Interpret as a continuous score, not as a discrete label. Have fun!
Barack Obama: Extrovert; Ellen DeGeneres: Extrovert; Naomi Osaka: Introvert
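The same pipeline interface should apply here; the sketch below batches two made-up biographies in the style of the examples above, and the returned label strings are again an assumption.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="k-partha/extrabert_bio")
bios = [
    "Talk-show host. I never met a stranger.",         # hypothetical extrovert-style bio
    "Tennis player. I let my racket do the talking.",  # hypothetical introvert-style bio
]
print(clf(bios))
```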
Note: Performance on inputs other than Twitter biographies [the training data source] is not verified.
For further details and expected performance, read the [paper](https://arxiv.org/abs/2109.06402). |
kbhugging/autonlp-text2sql-18413376 | 5650cb482cdec04eadaf364997f22b5d9dad2dea | 2021-10-15T02:36:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:kbhugging/autonlp-data-text2sql",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | kbhugging | null | kbhugging/autonlp-text2sql-18413376 | 11 | null | transformers | 11,097 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kbhugging/autonlp-data-text2sql
co2_eq_emissions: 1.4091714704861447
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 18413376
- CO2 Emissions (in grams): 1.4091714704861447
## Validation Metrics
- Loss: 0.26672711968421936
- Rouge1: 61.765
- Rouge2: 52.5778
- RougeL: 61.3222
- RougeLsum: 61.1905
- Gen Len: 18.7805
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/kbhugging/autonlp-text2sql-18413376
```
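The same request from Python, mirroring the cURL call above (the endpoint path and payload are copied verbatim from the generated card; replace the placeholder API key with your own):
```python
import requests

API_URL = "https://api-inference.huggingface.co/kbhugging/autonlp-text2sql-18413376"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}  # placeholder token

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())
``` |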
kingabzpro/wav2vec2-60-urdu | 128cf5bef2519dfb55b1316bc91afe6ec8ab842a | 2022-03-23T18:27:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-60-urdu | 11 | 1 | transformers | 11,098 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-60-urdu
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_7_0
name: Common Voice ur
args: ur
metrics:
- type: wer
value: 59.1
name: Test WER
args:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
- type: cer
value: 33.1
name: Test CER
args:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
---
# wav2vec2-60-urdu
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-urdu-urm-60](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-urdu-urm-60) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Wer: 0.5913
- Cer: 0.3310
## Model description
The training and validation data amount to only 0.58 hours of audio. It was hard to train any model on such a small amount of data, so I decided to take the vakyansh-wav2vec2-urdu-urm-60 checkpoint and fine-tune the wav2vec2 model. A transcription sketch follows.
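This is a minimal transcription sketch, assuming the repository ships a matching `Wav2Vec2Processor` and that input audio is resampled to 16 kHz; `sample_ur.wav` is a hypothetical file name.
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("kingabzpro/wav2vec2-60-urdu")
model = Wav2Vec2ForCTC.from_pretrained("kingabzpro/wav2vec2-60-urdu")

# Load and resample a clip to the 16 kHz rate the model expects.
speech, _ = librosa.load("sample_ur.wav", sr=16_000)  # hypothetical Urdu clip
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```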
## Training procedure
Fine-tuned from Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 due to the small number of training samples.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 12.6045 | 8.33 | 100 | 8.4997 | 0.6978 | 0.3923 |
| 1.3367 | 16.67 | 200 | 5.0015 | 0.6515 | 0.3556 |
| 0.5344 | 25.0 | 300 | 9.3687 | 0.6393 | 0.3625 |
| 0.2922 | 33.33 | 400 | 9.2381 | 0.6236 | 0.3432 |
| 0.1867 | 41.67 | 500 | 6.2150 | 0.6035 | 0.3448 |
| 0.1166 | 50.0 | 600 | 6.4496 | 0.5913 | 0.3310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
kingabzpro/wav2vec2-large-xls-r-1b-Indonesian | 8536e2edf49b1347b4503d12b63b506770117f87 | 2022-03-23T18:29:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | kingabzpro | null | kingabzpro/wav2vec2-large-xls-r-1b-Indonesian | 11 | 1 | transformers | 11,099 | ---
language:
- id
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
model-index:
- name: wav2vec2-large-xls-r-1b-Indonesian
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice id
args: id
metrics:
- type: wer
value: 45.51
name: Test WER
- type: cer
value: 16.43
name: Test CER
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: id
metrics:
- name: Test WER
type: wer
value: 72.73
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: id
metrics:
- name: Test WER
type: wer
value: 79.29
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-Indonesian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9550
- Wer: 0.4551
- Cer: 0.1643
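For quick experimentation, the checkpoint can also be wrapped in the ASR pipeline; this sketch assumes 16 kHz mono input and uses a hypothetical local file.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kingabzpro/wav2vec2-large-xls-r-1b-Indonesian",
)
print(asr("sample_id.wav"))  # hypothetical Indonesian audio clip, 16 kHz mono
```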
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.663 | 7.69 | 200 | 0.7898 | 0.6039 | 0.1848 |
| 0.7424 | 15.38 | 400 | 1.0215 | 0.5615 | 0.1924 |
| 0.4494 | 23.08 | 600 | 1.0901 | 0.5249 | 0.1932 |
| 0.5075 | 30.77 | 800 | 1.1013 | 0.5079 | 0.1935 |
| 0.4671 | 38.46 | 1000 | 1.1034 | 0.4916 | 0.1827 |
| 0.1928 | 46.15 | 1200 | 0.9550 | 0.4551 | 0.1643 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|