| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
transformersbook/pegasus-samsum | transformersbook | 2022-02-05T17:05:28Z | 75,124 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum-test
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. The model is trained in Chapter 6: Summarization in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/06_summarization.ipynb).
It achieves the following results on the evaluation set:
- Loss: 1.4875
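The card does not include a usage example; below is a minimal sketch with the generic `transformers` summarization pipeline (the dialogue is an illustrative sample, not taken from the samsum dataset):
```python
from transformers import pipeline

# Load the checkpoint named in this card with the standard summarization pipeline.
summarizer = pipeline("summarization", model="transformersbook/pegasus-samsum")

dialogue = (
    "Hannah: Hey, do you have Betty's number?\n"
    "Amanda: Let me check... sorry, can't find it. Ask Larry, he called her recently.\n"
    "Hannah: OK, I'll text him. Thanks!"
)
print(summarizer(dialogue)[0]["summary_text"])
```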
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7012 | 0.54 | 500 | 1.4875 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
transformersbook/bert-base-uncased-issues-128 | transformersbook | 2022-02-05T16:57:43Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GitHub issues dataset. The model is used in Chapter 9: Dealing with Few to No Labels in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/09_few-to-no-labels.ipynb).
It achieves the following results on the evaluation set:
- Loss: 1.2520
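Since this is a masked language model, a minimal way to probe it is the fill-mask pipeline (a sketch; the issue-style sentence below is invented for illustration):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="transformersbook/bert-base-uncased-issues-128")

# [MASK] is BERT's mask token; the sentence mimics the GitHub-issues domain.
for pred in fill_mask("The training loop raises a [MASK] error when the batch size is too large."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```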
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0949 | 1.0 | 291 | 1.7072 |
| 1.649 | 2.0 | 582 | 1.4409 |
| 1.4835 | 3.0 | 873 | 1.4099 |
| 1.3938 | 4.0 | 1164 | 1.3858 |
| 1.3326 | 5.0 | 1455 | 1.2004 |
| 1.2949 | 6.0 | 1746 | 1.2955 |
| 1.2451 | 7.0 | 2037 | 1.2682 |
| 1.1992 | 8.0 | 2328 | 1.1938 |
| 1.1784 | 9.0 | 2619 | 1.1686 |
| 1.1397 | 10.0 | 2910 | 1.2050 |
| 1.1293 | 11.0 | 3201 | 1.2058 |
| 1.1006 | 12.0 | 3492 | 1.1680 |
| 1.0835 | 13.0 | 3783 | 1.2414 |
| 1.0757 | 14.0 | 4074 | 1.1522 |
| 1.062 | 15.0 | 4365 | 1.1176 |
| 1.0535 | 16.0 | 4656 | 1.2520 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
transformersbook/distilbert-base-uncased-distilled-clinc | transformersbook | 2022-02-05T16:47:39Z | 199 | 3 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9393548387096774
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) fine-tuned with knowledge distillation on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1005
- Accuracy: 0.9394
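A minimal inference sketch with the text-classification pipeline (the query below is illustrative and not taken from clinc_oos):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="transformersbook/distilbert-base-uncased-distilled-clinc",
)

# Returns the predicted CLINC150 intent label and its score.
print(classifier("Transfer $100 from my checking account to my savings account."))
```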
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9031 | 1.0 | 318 | 0.5745 | 0.7365 |
| 0.4481 | 2.0 | 636 | 0.2856 | 0.8748 |
| 0.2528 | 3.0 | 954 | 0.1798 | 0.9187 |
| 0.176 | 4.0 | 1272 | 0.1398 | 0.9294 |
| 0.1416 | 5.0 | 1590 | 0.1211 | 0.9348 |
| 0.1243 | 6.0 | 1908 | 0.1116 | 0.9348 |
| 0.1133 | 7.0 | 2226 | 0.1062 | 0.9377 |
| 0.1075 | 8.0 | 2544 | 0.1035 | 0.9387 |
| 0.1039 | 9.0 | 2862 | 0.1014 | 0.9381 |
| 0.1018 | 10.0 | 3180 | 0.1005 | 0.9394 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
transformersbook/codeparrot-small | transformersbook | 2022-02-05T16:28:36Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # CodeParrot
CodeParrot (small) is a 110M parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
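A minimal generation sketch, assuming the standard text-generation pipeline (the Python stub below is an arbitrary illustrative prompt):
```python
from transformers import pipeline

# CodeParrot (small) is a GPT-2 checkpoint, so the generic text-generation pipeline applies.
generator = pipeline("text-generation", model="transformersbook/codeparrot-small")

prompt = 'def mean(numbers):\n    """Return the arithmetic mean of a list of numbers."""\n'
print(generator(prompt, max_length=64)[0]["generated_text"])
```
|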
transformersbook/codeparrot | transformersbook | 2022-02-05T16:27:42Z | 18 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # CodeParrot
CodeParrot (large) is a 1.5B parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb). |
groar/distilgpt2-finetuned-escape | groar | 2022-02-05T14:44:47Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-escape
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-escape
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Ayham/distilbert_distilgpt2_summarization_cnn_dailymail | Ayham | 2022-02-05T11:39:58Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: distilbert_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
omoekan/opus-tatoeba-eng-yor | omoekan | 2022-02-05T10:15:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ## OPUS Tatoeba English-Yoruba
This model was obtained by running the script convert_marian_to_pytorch.py with the flag -m eng-yor. The original models were trained by Jörg Tiedemann using the MarianNMT library. See all available MarianMTModel models on the profile of the Helsinki NLP group.
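A minimal translation sketch using the Marian classes from `transformers` (the English sentence is an illustrative example):
```python
from transformers import MarianMTModel, MarianTokenizer

# Checkpoints converted with convert_marian_to_pytorch.py load through the Marian classes.
model_name = "omoekan/opus-tatoeba-eng-yor"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```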
---
- tags: translation
- source language: English
- target language: Yoruba
- dataset: opus+bt
- model: transformer-align
- pre-processing: normalization + SentencePiece (spm12k,spm12k)
- download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.zip)
- test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.test.txt)
- test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.eval.txt)
- Benchmarks:
|test set|BLEU|chr-F|
|:---|:---|:---|
|Tatoeba-test.eng-yor|13.0|0.333|
--- |
ajitrajasekharan/biomedical | ajitrajasekharan | 2022-02-05T08:44:05Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- en
license: mit
widget:
- text: "Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]"
example_title: "Test for entity type: Disease"
- text: "Overexpression of [MASK] occurs across a wide range of cancers"
example_title: "Test for entity type: Gene"
- text: "Patients treated with [MASK] are vulnerable to infectious diseases"
example_title: "Test for entity type: Drug"
- text: "A eGFR level below [MASK] indicates chronic kidney disease"
example_title: "Test for entity type: Measure "
- text: "In the [MASK], increased daily imatinib dose induced MMR"
example_title: "Test for entity type: STUDY/TRIAL"
- text: "Paul Erdos died at [MASK]"
example_title: "Test for entity type: TIME"
inference:
parameters:
top_k: 10
tags:
- fill-mask
- exbert
---
This **cased model** was pretrained from scratch using a custom vocabulary on the following corpora
- Pubmed
- Clinical trials corpus
- and a small subset of Bookcorpus
The pretrained model was used to do NER **as is, with no fine-tuning**. The approach is described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20)
[App in Spaces](https://huggingface.co/spaces/ajitrajasekharan/self-supervised-ner-biomedical) demonstrates this approach.
[Github link](https://github.com/ajitrajasekharan/unsupervised_NER) to perform NER using this model in an ensemble with bert-base cased.
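The widget examples above can be reproduced locally with the fill-mask pipeline; a minimal sketch (the sentence is taken from the widget list above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ajitrajasekharan/biomedical", top_k=10)

for pred in fill_mask("Overexpression of [MASK] occurs across a wide range of cancers"):
    print(f"{pred['token_str']:>15}  {pred['score']:.3f}")
```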
The ensemble detects 69 entity subtypes (17 broad entity groups).
<img src="https://ajitrajasekharan.github.io/images/1.png" width="600">
### Ensemble model performance
<img src="https://ajitrajasekharan.github.io/images/6.png" width="600">
### Additional notes
- The model predictions shown on the right do not include [CLS] predictions, since the hosted inference API only returns predictions for the masked position. In practice, the [CLS] predictions are just as useful as the masked-position predictions _(if the next sentence prediction loss was low during pretraining)_ and are used for NER.
- Some of the top model predictions, such as "a", "the", and punctuation marks, are valid predictions but carry no entity information. These are filtered out when harvesting descriptors for NER. The examples on the right are unfiltered results.
- [Use this link](https://huggingface.co/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation) to examine both fill-mask prediction and [CLS] predictions
### License
MIT license
<a href="https://huggingface.co/exbert/?model=ajitrajasekharan/biomedical&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
HenryHXR/t5-base-finetuned-scitldr | HenryHXR | 2022-02-05T05:48:10Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-scitldr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-scitldr
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0232
- Rouge1: 35.2134
- Rouge2: 16.8919
- Rougel: 30.8442
- Rougelsum: 30.9316
- Gen Len: 18.7981
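A minimal inference sketch with the summarization pipeline; whether a `summarize:` prefix is required depends on how the checkpoint was fine-tuned, which this card does not state (the abstract below is invented for illustration):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="HenryHXR/t5-base-finetuned-scitldr")

abstract = (
    "We study extreme summarization of scientific papers and present a simple "
    "baseline that compresses an abstract into a single TLDR-style sentence."
)
print(summarizer(abstract, max_length=30, min_length=5)[0]["summary_text"])
```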
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0533 | 1.0 | 996 | 2.0285 | 34.9774 | 16.6163 | 30.6177 | 30.7038 | 18.7981 |
| 2.0994 | 2.0 | 1992 | 2.0232 | 35.2134 | 16.8919 | 30.8442 | 30.9316 | 18.7981 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
MaggieXM/distilbert-base-uncased-finetuned-squad | MaggieXM | 2022-02-05T04:50:41Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.01 | 56 | 4.8054 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
aogara/slai_transformer | aogara | 2022-02-05T00:26:24Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | # Building a HuggingFace Transformer NLP Model
## Running this Repo
|
BigSalmon/InformalToFormalLincoln20 | BigSalmon | 2022-02-04T20:56:17Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | Informal to Formal:
Wordy to Concise:
Fill Missing Phrase:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln20")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln20")
```
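A minimal generation sketch using one of the prompt formats listed below (the sampling settings are arbitrary, and `AutoModelWithLMHead` mirrors the snippet above):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln20")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln20")

prompt = ("informal english: i am very ready to do that just that.\n"
          "Translated into the Style of Abraham Lincoln:")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a short continuation in the "informal -> Lincoln" prompt format shown below.
output = model.generate(input_ids, max_length=input_ids.shape[1] + 40,
                        do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```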
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
```
***
wordy: chancing upon a linux user is a rare occurrence in the present day.
Translate into Concise Text: present-day linux users are rare.
***
wordy: an interest in classical music is becoming more and more less popular.
Translate into Concise Text: classical music appreciation is dwindling.
Translate into Concise Text: waning interest in classic music persists.
Translate into Concise Text: interest in classic music is fading.
***
wordy: the ice cream was only one dollar, but it was not a good value for the size.
Translate into Concise Text: the one dollar ice cream was overpriced for its size.
Translate into Concise Text: overpriced, the one dollar ice cream was small.
***
wordy:
``` |
MarioPenguin/bert-model-english | MarioPenguin | 2022-02-04T20:12:58Z | 6 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-model-english
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-model-english
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1408
- Train Sparse Categorical Accuracy: 0.9512
- Validation Loss: nan
- Validation Sparse Categorical Accuracy: 0.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.2775 | 0.8887 | nan | 0.0 | 0 |
| 0.1702 | 0.9390 | nan | 0.0 | 1 |
| 0.1300 | 0.9555 | nan | 0.0 | 2 |
| 0.1346 | 0.9544 | nan | 0.0 | 3 |
| 0.1408 | 0.9512 | nan | 0.0 | 4 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
tesemnikov-av/NER-RUBERT-Per-Loc-Org | tesemnikov-av | 2022-02-04T19:40:56Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
widget:
- text: "В город Сергиев Посад приехал Курт Кобейн."
---
Fine-tuned [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model on sentences from Wikipedia, automatically annotated with PER, LOC and ORG tags ([corus/WiNER](https://pypi.org/project/corus/#reference)).
language: RU
NER Class:
- PER
- LOC
- ORG
license: mit
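A minimal sketch of running the widget example above through the token-classification pipeline with entity grouping:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tesemnikov-av/NER-RUBERT-Per-Loc-Org",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# Widget sentence from this card ("Kurt Cobain arrived in the town of Sergiyev Posad.")
print(ner("В город Сергиев Посад приехал Курт Кобейн."))
```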
|
LenaSchmidt/distilbert-base-uncased-finetuned-squad | LenaSchmidt | 2022-02-04T19:20:11Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7713
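A minimal extractive question-answering sketch (the context/question pair is invented for illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="LenaSchmidt/distilbert-base-uncased-finetuned-squad")

context = ("The Amazon rainforest covers much of the Amazon basin of South America "
           "and spans nine countries.")
result = qa(question="How many countries does the Amazon rainforest span?", context=context)
print(result["answer"], round(result["score"], 3))
```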
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0325 | 1.0 | 585 | 1.7520 |
| 1.609 | 2.0 | 1170 | 1.7713 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mrm8488/roberta-base-bne-finetuned-sqac-retriever | mrm8488 | 2022-02-04T17:59:07Z | 4 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 939 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 93,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
huggingtweets/loverachelle2 | huggingtweets | 2022-02-04T17:51:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/loverachelle2/1643997109994/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1371211513323749377/ABF4NRhC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LoveRachelle2</div>
<div style="text-align: center; font-size: 14px;">@loverachelle2</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LoveRachelle2.
| Data | LoveRachelle2 |
| --- | --- |
| Tweets downloaded | 1440 |
| Retweets | 102 |
| Short tweets | 92 |
| Tweets kept | 1246 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1liqzipo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @loverachelle2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/284b8u8q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/284b8u8q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/loverachelle2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
samx18/demo | samx18 | 2022-02-04T17:23:34Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | # Dummy
This is a dummy model for testing - do not use |
dkurt/wav2vec2-base-ft-keyword-spotting-int8 | dkurt | 2022-02-04T16:40:37Z | 7 | 2 | transformers | [
"transformers",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-03-02T23:29:05Z | [anton-l/wav2vec2-base-ft-keyword-spotting](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) model quantized with [Optimum OpenVINO](https://github.com/dkurt/optimum-openvino/).
| Accuracy on eval (baseline) | Accuracy on eval (quantized) |
|-----------------------------|----------------------------------------|
| 0.9828 | 0.9553 (-0.0274) |
|
Rolv-Arild/xls-r-300m-npsc-4 | Rolv-Arild | 2022-02-04T16:36:33Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1957
- Wer: 0.1697
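A minimal transcription sketch with the automatic-speech-recognition pipeline (the file name is a placeholder; decoding a local file requires ffmpeg, and the model expects 16 kHz audio):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Rolv-Arild/xls-r-300m-npsc-4")

# "speech_sample.wav" is a placeholder path to a 16 kHz Norwegian speech recording.
print(asr("speech_sample.wav")["text"])
```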
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4527 | 0.28 | 250 | 4.0144 | 1.0 |
| 3.1828 | 0.56 | 500 | 3.1369 | 1.0 |
| 2.9927 | 0.85 | 750 | 3.0183 | 1.0 |
| 2.9591 | 1.13 | 1000 | 2.9991 | 1.0 |
| 2.8989 | 1.41 | 1250 | 2.9000 | 1.0000 |
| 2.4286 | 1.69 | 1500 | 1.7688 | 0.9550 |
| 1.6765 | 1.98 | 1750 | 0.6842 | 0.4855 |
| 1.4521 | 2.26 | 2000 | 0.5096 | 0.3736 |
| 1.3589 | 2.54 | 2250 | 0.4479 | 0.3335 |
| 1.3136 | 2.82 | 2500 | 0.4056 | 0.3123 |
| 1.2856 | 3.11 | 2750 | 0.3870 | 0.2987 |
| 1.2283 | 3.39 | 3000 | 0.3646 | 0.2828 |
| 1.2053 | 3.67 | 3250 | 0.3499 | 0.2748 |
| 1.2087 | 3.95 | 3500 | 0.3345 | 0.2603 |
| 1.2002 | 4.24 | 3750 | 0.3320 | 0.2523 |
| 1.1383 | 4.52 | 4000 | 0.3117 | 0.2439 |
| 1.1364 | 4.8 | 4250 | 0.3198 | 0.2383 |
| 1.158 | 5.08 | 4500 | 0.3071 | 0.2342 |
| 1.108 | 5.37 | 4750 | 0.3011 | 0.2314 |
| 1.1025 | 5.65 | 5000 | 0.2875 | 0.2289 |
| 1.0697 | 5.93 | 5250 | 0.2926 | 0.2256 |
| 1.0904 | 6.21 | 5500 | 0.2695 | 0.2245 |
| 1.0802 | 6.5 | 5750 | 0.2602 | 0.2189 |
| 1.0882 | 6.78 | 6000 | 0.2603 | 0.2168 |
| 1.0881 | 7.06 | 6250 | 0.2540 | 0.2293 |
| 1.0378 | 7.34 | 6500 | 0.2614 | 0.2193 |
| 1.0397 | 7.63 | 6750 | 0.2707 | 0.2104 |
| 1.0296 | 7.91 | 7000 | 0.2483 | 0.2119 |
| 1.0249 | 8.19 | 7250 | 0.2483 | 0.2047 |
| 1.013 | 8.47 | 7500 | 0.2487 | 0.2042 |
| 1.0064 | 8.76 | 7750 | 0.2456 | 0.2016 |
| 1.0668 | 9.04 | 8000 | 0.2397 | 0.1995 |
| 1.0129 | 9.32 | 8250 | 0.2374 | 0.1994 |
| 1.0164 | 9.6 | 8500 | 0.2206 | 0.1992 |
| 0.975 | 9.89 | 8750 | 0.2247 | 0.1973 |
| 0.9849 | 10.17 | 9000 | 0.2325 | 0.1953 |
| 0.9826 | 10.45 | 9250 | 0.2301 | 0.1934 |
| 0.9835 | 10.73 | 9500 | 0.2192 | 0.1942 |
| 0.9676 | 11.02 | 9750 | 0.2266 | 0.1913 |
| 0.9627 | 11.3 | 10000 | 0.2193 | 0.1921 |
| 0.976 | 11.58 | 10250 | 0.2309 | 0.1882 |
| 0.969 | 11.86 | 10500 | 0.2268 | 0.1886 |
| 0.9611 | 12.15 | 10750 | 0.2322 | 0.1863 |
| 0.9397 | 12.43 | 11000 | 0.2197 | 0.1844 |
| 0.9601 | 12.71 | 11250 | 0.2211 | 0.1871 |
| 0.9718 | 12.99 | 11500 | 0.2079 | 0.1898 |
| 0.9347 | 13.28 | 11750 | 0.2054 | 0.1843 |
| 0.9377 | 13.56 | 12000 | 0.2031 | 0.1842 |
| 0.934 | 13.84 | 12250 | 0.2059 | 0.1806 |
| 0.9295 | 14.12 | 12500 | 0.2122 | 0.1861 |
| 0.935 | 14.41 | 12750 | 0.2072 | 0.1787 |
| 0.9021 | 14.69 | 13000 | 0.2105 | 0.1781 |
| 0.9193 | 14.97 | 13250 | 0.2035 | 0.1786 |
| 0.9214 | 15.25 | 13500 | 0.2035 | 0.1766 |
| 0.9048 | 15.54 | 13750 | 0.1964 | 0.1758 |
| 0.9006 | 15.82 | 14000 | 0.1984 | 0.1757 |
| 0.9027 | 16.1 | 14250 | 0.2022 | 0.1743 |
| 0.9083 | 16.38 | 14500 | 0.1969 | 0.1744 |
| 0.9761 | 16.67 | 14750 | 0.1963 | 0.1728 |
| 0.9311 | 16.95 | 15000 | 0.1960 | 0.1737 |
| 0.886 | 17.23 | 15250 | 0.1929 | 0.1726 |
| 0.8969 | 17.51 | 15500 | 0.1928 | 0.1734 |
| 0.9084 | 17.8 | 15750 | 0.1937 | 0.1713 |
| 0.8795 | 18.08 | 16000 | 0.1978 | 0.1709 |
| 0.8883 | 18.36 | 16250 | 0.1956 | 0.1703 |
| 0.8901 | 18.64 | 16500 | 0.1933 | 0.1705 |
| 0.8922 | 18.93 | 16750 | 0.1962 | 0.1711 |
| 0.8765 | 19.21 | 17000 | 0.1962 | 0.1711 |
| 0.8992 | 19.49 | 17250 | 0.1965 | 0.1703 |
| 0.8778 | 19.77 | 17500 | 0.1957 | 0.1699 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1
- Tokenizers 0.11.0
|
groar/distilgpt2-finetuned-wikitext2 | groar | 2022-02-04T16:27:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7852 | 1.0 | 2334 | 3.6895 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cahya/wav2vec2-base-turkish-cv8 | cahya | 2022-02-04T14:30:19Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- tr
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-1000](https://huggingface.co/./checkpoint-1000) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3282
- Wer: 0.2836
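A minimal inference sketch that loads the model explicitly and decodes the CTC output (assumes `librosa` for audio loading; the file name is a placeholder):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "cahya/wav2vec2-base-turkish-cv8"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder file; the model expects 16 kHz mono audio.
speech, _ = librosa.load("turkish_sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```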
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0671 | 2.04 | 200 | 0.3079 | 0.2752 |
| 0.6433 | 4.08 | 400 | 0.2728 | 0.2848 |
| 0.5687 | 6.12 | 600 | 0.2882 | 0.3036 |
| 0.5355 | 8.16 | 800 | 0.2778 | 0.2920 |
| 0.5116 | 10.2 | 1000 | 0.2906 | 0.3014 |
| 0.5313 | 9.16 | 1200 | 0.2984 | 0.3273 |
| 0.4996 | 10.69 | 1400 | 0.3170 | 0.3344 |
| 0.4845 | 12.21 | 1600 | 0.3202 | 0.3634 |
| 0.5092 | 13.74 | 1800 | 0.3167 | 0.3373 |
| 0.4777 | 15.27 | 2000 | 0.3292 | 0.3386 |
| 0.4651 | 16.79 | 2200 | 0.3070 | 0.3427 |
| 0.461 | 18.32 | 2400 | 0.3149 | 0.3561 |
| 0.4481 | 19.85 | 2600 | 0.3292 | 0.3441 |
| 0.4479 | 21.37 | 2800 | 0.3142 | 0.3209 |
| 0.4305 | 22.9 | 3000 | 0.3525 | 0.3547 |
| 0.4254 | 24.43 | 3200 | 0.3414 | 0.3400 |
| 0.4066 | 25.95 | 3400 | 0.3118 | 0.3207 |
| 0.4043 | 27.48 | 3600 | 0.3418 | 0.3483 |
| 0.3985 | 29.01 | 3800 | 0.3254 | 0.3166 |
| 0.3982 | 30.53 | 4000 | 0.3306 | 0.3453 |
| 0.3929 | 32.06 | 4200 | 0.3262 | 0.3229 |
| 0.378 | 33.59 | 4400 | 0.3546 | 0.3336 |
| 0.4062 | 35.11 | 4600 | 0.3174 | 0.3457 |
| 0.3648 | 36.64 | 4800 | 0.3377 | 0.3357 |
| 0.3609 | 38.17 | 5000 | 0.3346 | 0.3520 |
| 0.3483 | 39.69 | 5200 | 0.3350 | 0.3526 |
| 0.3548 | 41.22 | 5400 | 0.3330 | 0.3406 |
| 0.3446 | 42.75 | 5600 | 0.3398 | 0.3372 |
| 0.3346 | 44.27 | 5800 | 0.3449 | 0.3288 |
| 0.3309 | 45.8 | 6000 | 0.3320 | 0.3144 |
| 0.326 | 47.33 | 6200 | 0.3400 | 0.3279 |
| 0.3189 | 48.85 | 6400 | 0.3400 | 0.3150 |
| 0.3165 | 50.38 | 6600 | 0.3359 | 0.2995 |
| 0.3132 | 51.91 | 6800 | 0.3343 | 0.3096 |
| 0.3092 | 53.44 | 7000 | 0.3224 | 0.3029 |
| 0.2995 | 54.96 | 7200 | 0.3205 | 0.2985 |
| 0.304 | 56.49 | 7400 | 0.3523 | 0.3034 |
| 0.2952 | 58.02 | 7600 | 0.3289 | 0.2934 |
| 0.2875 | 59.54 | 7800 | 0.3350 | 0.3008 |
| 0.2868 | 61.07 | 8000 | 0.3537 | 0.3227 |
| 0.2875 | 62.6 | 8200 | 0.3389 | 0.2970 |
| 0.2778 | 64.12 | 8400 | 0.3370 | 0.2960 |
| 0.2706 | 65.65 | 8600 | 0.3250 | 0.2802 |
| 0.2669 | 67.18 | 8800 | 0.3351 | 0.2903 |
| 0.2615 | 68.7 | 9000 | 0.3382 | 0.2989 |
| 0.2563 | 70.23 | 9200 | 0.3312 | 0.2975 |
| 0.2546 | 71.76 | 9400 | 0.3212 | 0.3003 |
| 0.2482 | 73.28 | 9600 | 0.3337 | 0.3091 |
| 0.2504 | 74.81 | 9800 | 0.3308 | 0.3110 |
| 0.2456 | 76.34 | 10000 | 0.3157 | 0.3118 |
| 0.2363 | 77.86 | 10200 | 0.3251 | 0.3144 |
| 0.2319 | 79.39 | 10400 | 0.3253 | 0.3038 |
| 0.2266 | 80.92 | 10600 | 0.3374 | 0.3038 |
| 0.2279 | 82.44 | 10800 | 0.3268 | 0.2964 |
| 0.2231 | 83.97 | 11000 | 0.3278 | 0.2950 |
| 0.2185 | 85.5 | 11200 | 0.3462 | 0.2981 |
| 0.2245 | 87.02 | 11400 | 0.3311 | 0.2895 |
| 0.223 | 88.55 | 11600 | 0.3325 | 0.2877 |
| 0.2121 | 90.08 | 11800 | 0.3337 | 0.2828 |
| 0.2126 | 91.6 | 12000 | 0.3325 | 0.2808 |
| 0.2027 | 93.13 | 12200 | 0.3277 | 0.2820 |
| 0.2058 | 94.66 | 12400 | 0.3308 | 0.2827 |
| 0.1991 | 96.18 | 12600 | 0.3279 | 0.2820 |
| 0.1991 | 97.71 | 12800 | 0.3300 | 0.2822 |
| 0.1986 | 99.24 | 13000 | 0.3285 | 0.2835 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
abhishek/autonlp-imdb-roberta-base-3662644 | abhishek | 2022-02-04T14:25:35Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:abhishek/autonlp-data-imdb-roberta-base",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-imdb-roberta-base
co2_eq_emissions: 25.894117734124272
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 3662644
- CO2 Emissions (in grams): 25.894117734124272
## Validation Metrics
- Loss: 0.20277436077594757
- Accuracy: 0.92604
- Precision: 0.9560674830864092
- Recall: 0.89312
- AUC: 0.9814625504000001
- F1: 0.9235223559581421
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb-roberta-base-3662644
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Language-Media-Lab/mt5-small-jpn-ain-mt | Language-Media-Lab | 2022-02-04T14:23:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- jpn
- ain
tags:
- translation
---
mt5-small-jpn-ain-mt is a machine translation model initialized from the pretrained [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Japanese into the Ainu language.
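A minimal translation sketch (the Japanese sentence is illustrative; whether a task prefix is required depends on fine-tuning details the card does not state):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Language-Media-Lab/mt5-small-jpn-ain-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Japanese input ("Good morning.").
inputs = tokenizer("おはようございます。", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```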
|
Language-Media-Lab/mt5-small-ain-jpn-mt | Language-Media-Lab | 2022-02-04T13:20:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- jpn
- ain
tags:
- translation
---
mt5-small-ain-jpn-mt is a machine translation model initialized from the pretrained [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates the Ainu language into Japanese.
|
Language-Media-Lab/byt5-small-ain-jpn-mt | Language-Media-Lab | 2022-02-04T13:03:14Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"ain",
"ja",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- ain
- ja
tags:
- translation
---
Byt5-small-ain-jpn-mt is a machine translation model initialized from the pretrained [Google's ByT5-small](https://huggingface.co/google/byt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates the Ainu language into Japanese.
|
Language-Media-Lab/byt5-small-jpn-ain-mt | Language-Media-Lab | 2022-02-04T13:02:58Z | 14 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2022-03-02T23:29:05Z | ---
language:
- jpn
- ain
tags:
- translation
---
Byt5-small-jpn-ain-mt is a machine translation model initialized from the pretrained [Google's ByT5-small](https://huggingface.co/google/byt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Japanese into the Ainu language.
|
huggingtweets/ir_rkp | huggingtweets | 2022-02-04T12:03:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ir_rkp/1643976228944/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1432037158072856578/a_Fty68E_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Riikka Purra</div>
<div style="text-align: center; font-size: 14px;">@ir_rkp</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Riikka Purra.
| Data | Riikka Purra |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 141 |
| Short tweets | 78 |
| Tweets kept | 3031 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w0bzvgu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ir_rkp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nj4v31w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nj4v31w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ir_rkp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Plim/xls-r-1b-fr | Plim | 2022-02-04T11:45:21Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"fr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2464
- Wer: 0.2220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0326 | 0.32 | 1000 | 0.3092 | 0.2718 |
| 1.0828 | 0.65 | 2000 | 0.2843 | 0.2606 |
| 1.0771 | 0.97 | 3000 | 0.2774 | 0.2488 |
| 1.0306 | 1.3 | 4000 | 0.2588 | 0.2351 |
| 1.0052 | 1.62 | 5000 | 0.2483 | 0.2284 |
| 0.9865 | 1.94 | 6000 | 0.2464 | 0.2220 |
| 0.978 | 2.27 | 7000 | 0.2514 | 0.2172 |
| 1.7438 | 2.59 | 8000 | 0.7983 | 0.5072 |
| 2.3309 | 2.92 | 9000 | 1.8917 | 0.9416 |
| 2.1834 | 3.24 | 10000 | 1.7496 | 0.9030 |
| 2.3047 | 3.56 | 11000 | 1.5377 | 0.8747 |
| 2.1378 | 3.89 | 12000 | 1.3501 | 0.7923 |
| 1.9812 | 4.21 | 13000 | 1.2662 | 0.7697 |
| 2.6855 | 4.54 | 14000 | 2.4120 | 0.9902 |
| 2.7482 | 4.86 | 15000 | 2.5341 | 0.9874 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
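As a quick check of the exported checkpoint, it can be loaded through the standard `automatic-speech-recognition` pipeline. This is a minimal sketch only: the audio path is a placeholder for any local 16 kHz French recording, and `ffmpeg` is assumed to be available for decoding.
```python
from transformers import pipeline

# Minimal smoke test; "sample_fr.wav" is a placeholder for a local 16 kHz French clip.
asr = pipeline("automatic-speech-recognition", model="Plim/xls-r-1b-fr")
print(asr("sample_fr.wav")["text"])
```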
|
Yanzhu/bertweetfr_offensiveness | Yanzhu | 2022-02-04T11:42:54Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-02T23:29:05Z | French roBERTa-base model fine-tuned for Offensive Language Identification on COVID-19 tweets. |
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1 | Subhashini17 | 2022-02-04T11:14:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ta-colab-new1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab-new1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6642
- eval_wer: 0.7611
- eval_runtime: 152.4412
- eval_samples_per_second: 11.683
- eval_steps_per_second: 1.463
- epoch: 10.11
- step: 960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yohida/yoshida_gpt | yohida | 2022-02-04T10:13:45Z | 4 | 0 | transformers | [
"transformers",
"gpt2",
"text-generation",
"ja",
"japanese",
"gpt",
"lm",
"nlp",
"dataset:cc100",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- japanese
- gpt
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
widget:
- text: "西田幾多郎は、"
---
# japanese-gpt-1b

This repository provides a 1.3B-parameter Japanese GPT model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/)
# How to use the model
*NOTE:* Use `T5Tokenizer` to initialize the tokenizer.
~~~~
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b")
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b")
if torch.cuda.is_available():
model = model.to("cuda")
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_length=100,
min_length=100,
do_sample=True,
top_k=500,
top_p=0.95,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
        bad_words_ids=[[tokenizer.unk_token_id]]
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
# sample output: 西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの
~~~~
# Model architecture
A 24-layer, 2048-hidden-size transformer-based language model.
# Training
The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols.
# License
[The MIT license](https://opensource.org/licenses/MIT) |
huggingtweets/dril-drilbot_neo-jril_bot | huggingtweets | 2022-02-04T09:52:05Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/dril-drilbot_neo-jril_bot/1643968320729/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468502340634296326/gbl8-ltv_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374924360780242944/-Q8NfgEr_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Jril & wintbot_neo</div>
<div style="text-align: center; font-size: 14px;">@dril-drilbot_neo-jril_bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Jril & wintbot_neo.
| Data | wint | Jril | wintbot_neo |
| --- | --- | --- | --- |
| Tweets downloaded | 3228 | 113 | 3241 |
| Retweets | 475 | 0 | 315 |
| Short tweets | 305 | 0 | 453 |
| Tweets kept | 2448 | 113 | 2473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27nmrlyy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-drilbot_neo-jril_bot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i64hq9wb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i64hq9wb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-drilbot_neo-jril_bot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
LegolasTheElf/Wav2Vec2_xls_r_openslr_Hi_V2 | LegolasTheElf | 2022-02-04T07:53:30Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Harveenchadha/indic-voice",
"generated_from_trainer",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
language:
- hi
tags:
- automatic-speech-recognition
- Harveenchadha/indic-voice
- generated_from_trainer
model-index:
- name: Wav2Vec2_xls_r_openslr_Hi_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_openslr_Hi_V2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Harveenchadha/indic-voice](https://huggingface.co/datasets/Harveenchadha/indic-voice) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3184
- Wer: 0.3104
- Cer: 0.0958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:------:|:---------------:|:------:|
| 7.1097 | 0.48 | 300 | 0.9965 | 3.3989 | 1.0 |
| 3.0235 | 0.96 | 600 | 0.3163 | 1.3183 | 0.7977 |
| 1.1419 | 1.44 | 900 | 0.1913 | 0.6416 | 0.5543 |
| 0.8242 | 1.92 | 1200 | 0.1608 | 0.5063 | 0.4804 |
| 0.6876 | 2.56 | 1600 | 0.1387 | 0.4401 | 0.4280 |
| 0.5868 | 3.21 | 2000 | 0.1249 | 0.3940 | 0.3907 |
| 0.5285 | 3.85 | 2400 | 0.1200 | 0.3661 | 0.3763 |
| 0.5 | 4.49 | 2800 | 0.1136 | 0.3528 | 0.3610 |
| 0.4538 | 5.13 | 3200 | 0.1086 | 0.3403 | 0.3485 |
| 0.4165 | 5.77 | 3600 | 0.1062 | 0.3335 | 0.3439 |
| 0.3989 | 6.41 | 4000 | 0.1036 | 0.3264 | 0.3340 |
| 0.3679 | 7.05 | 4400 | 0.1013 | 0.3256 | 0.3287 |
| 0.3517 | 7.69 | 4800 | 0.1002 | 0.3212 | 0.3223 |
| 0.3357 | 8.33 | 5200 | 0.0986 | 0.3173 | 0.3196 |
| 0.3225 | 8.97 | 5600 | 0.0985 | 0.3142 | 0.3177 |
| 0.3057 | 9.62 | 6000 | 0.0975 | 0.3199 | 0.3156 |
| 0.2972 | 10.26 | 6400 | 0.0967 | 0.3139 | 0.3128 |
| 0.2881 | 10.9 | 6800 | 0.0957 | 0.3184 | 0.3107 |
| 0.2791 | 11.54 | 7200 | 0.0958 | 0.3184 | 0.3104 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
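A minimal inference sketch along the lines of the other wav2vec2 cards in this collection, assuming a local Hindi recording (`sample_hi.wav` is a placeholder) resampled to the 16 kHz rate used during fine-tuning:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("LegolasTheElf/Wav2Vec2_xls_r_openslr_Hi_V2")
model = Wav2Vec2ForCTC.from_pretrained("LegolasTheElf/Wav2Vec2_xls_r_openslr_Hi_V2")

# Load a local recording (placeholder path) and resample it to 16 kHz.
speech_array, sampling_rate = torchaudio.load("sample_hi.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```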
|
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail | Ayham | 2022-02-04T06:33:59Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: xlnet_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jonc/distilbert-base-uncased-finetuned-emotion | jonc | 2022-02-04T06:15:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230733583303665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2159
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8494 | 1.0 | 250 | 0.3134 | 0.907 | 0.9051 |
| 0.2504 | 2.0 | 500 | 0.2159 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
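A minimal usage sketch, loading the checkpoint through the `text-classification` pipeline; the input sentence is only an illustrative example.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jonc/distilbert-base-uncased-finetuned-emotion")
# Returns the predicted emotion label and its score for the example sentence.
print(classifier("I am so happy with how this turned out!"))
```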
|
Mapcar/pegasus-samsum | Mapcar | 2022-02-04T03:27:33Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6936 | 0.54 | 500 | 1.4844 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
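Since SAMSum is a dialogue-summarization dataset, a minimal sketch runs the checkpoint through the `summarization` pipeline; the short chat below is an illustrative placeholder.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Mapcar/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
# Prints a short summary of the dialogue.
print(summarizer(dialogue)[0]["summary_text"])
```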
|
ghofrani/common7 | ghofrani | 2022-02-04T01:32:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"fa",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- fa
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: common7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# common7
This model is a fine-tuned version of [common7/checkpoint-18500](https://huggingface.co/common7/checkpoint-18500) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3448
- Wer: 0.3478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 150.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 2.957 | 3.29 | 500 | 2.9503 | 1.0 |
| 1.7225 | 6.58 | 1000 | 0.8860 | 0.7703 |
| 1.4907 | 9.86 | 1500 | 0.6555 | 0.6673 |
| 1.4177 | 13.16 | 2000 | 0.5784 | 0.6076 |
| 1.3425 | 16.45 | 2500 | 0.5379 | 0.5718 |
| 1.33 | 19.73 | 3000 | 0.4962 | 0.5245 |
| 1.4378 | 23.03 | 3500 | 0.4699 | 0.5098 |
| 1.1894 | 26.31 | 4000 | 0.4527 | 0.4848 |
| 1.1844 | 29.6 | 4500 | 0.4309 | 0.4651 |
| 1.1795 | 32.89 | 5000 | 0.4131 | 0.4524 |
| 1.1471 | 36.18 | 5500 | 0.4052 | 0.4435 |
| 1.1337 | 39.47 | 6000 | 0.3927 | 0.4363 |
| 1.1896 | 42.76 | 6500 | 0.3811 | 0.4254 |
| 1.1847 | 46.05 | 7000 | 0.3855 | 0.4129 |
| 0.9954 | 49.34 | 7500 | 0.3729 | 0.3981 |
| 1.0293 | 52.63 | 8000 | 0.3637 | 0.4014 |
| 1.0224 | 55.92 | 8500 | 0.3578 | 0.3885 |
| 1.012 | 59.21 | 9000 | 0.3629 | 0.3930 |
| 1.0772 | 62.5 | 9500 | 0.3635 | 0.3906 |
| 1.0344 | 65.79 | 10000 | 0.3469 | 0.3771 |
| 0.9457 | 69.08 | 10500 | 0.3435 | 0.3735 |
| 0.9307 | 72.37 | 11000 | 0.3519 | 0.3762 |
| 0.9523 | 75.65 | 11500 | 0.3443 | 0.3666 |
| 0.9523 | 78.94 | 12000 | 0.3502 | 0.3757 |
| 0.9475 | 82.24 | 12500 | 0.3509 | 0.3643 |
| 0.9971 | 85.52 | 13000 | 0.3502 | 0.3626 |
| 0.9058 | 88.81 | 13500 | 0.3472 | 0.3605 |
| 0.8922 | 92.1 | 14000 | 0.3530 | 0.3618 |
| 0.9 | 95.39 | 14500 | 0.3500 | 0.3574 |
| 0.9051 | 98.68 | 15000 | 0.3456 | 0.3535 |
| 0.9304 | 101.97 | 15500 | 0.3438 | 0.3578 |
| 0.9433 | 105.26 | 16000 | 0.3396 | 0.3530 |
| 0.8988 | 108.55 | 16500 | 0.3436 | 0.3539 |
| 0.8789 | 111.84 | 17000 | 0.3426 | 0.3516 |
| 0.8667 | 115.13 | 17500 | 0.3438 | 0.3506 |
| 0.8895 | 118.42 | 18000 | 0.3434 | 0.3503 |
| 0.8888 | 121.71 | 18500 | 0.3425 | 0.3494 |
| 0.9453 | 125.0 | 19000 | 0.3415 | 0.3480 |
| 0.9267 | 128.29 | 19500 | 0.3477 | 0.3503 |
| 0.8315 | 131.58 | 20000 | 0.3476 | 0.3505 |
| 0.8542 | 134.86 | 20500 | 0.3475 | 0.3506 |
| 0.8478 | 138.16 | 21000 | 0.3430 | 0.3481 |
| 0.8643 | 141.45 | 21500 | 0.3451 | 0.3485 |
| 0.8705 | 144.73 | 22000 | 0.3444 | 0.3474 |
| 0.9869 | 148.03 | 22500 | 0.3441 | 0.3493 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3
|
edugp/data2vec-nlp-base | edugp | 2022-02-03T23:23:15Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"data2vec",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
model-index:
- name: data2vec-nlp-base
results: []
---
# Data2Vec NLP Base
This model was converted from `fairseq`.
The original weights can be found in https://dl.fbaipublicfiles.com/fairseq/data2vec/nlp_base.pt
Example usage:
```python
from transformers import RobertaTokenizer, Data2VecForSequenceClassification, Data2VecConfig
import torch
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
config = Data2VecConfig.from_pretrained("edugp/data2vec-nlp-base")
model = Data2VecForSequenceClassification.from_pretrained("edugp/data2vec-nlp-base", config=config)
# Fine-tune this model
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
|
ArBert/albert-base-v2-finetuned-ner | ArBert | 2022-02-03T14:26:33Z | 22 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: albert-base-v2-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9301181102362205
- name: Recall
type: recall
value: 0.9376033513394334
- name: F1
type: f1
value: 0.9338457315399397
- name: Accuracy
type: accuracy
value: 0.9851613086447802
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-ner
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
- Precision: 0.9301
- Recall: 0.9376
- F1: 0.9338
- Accuracy: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.096 | 1.0 | 1756 | 0.0752 | 0.9163 | 0.9201 | 0.9182 | 0.9811 |
| 0.0481 | 2.0 | 3512 | 0.0761 | 0.9169 | 0.9293 | 0.9231 | 0.9830 |
| 0.0251 | 3.0 | 5268 | 0.0700 | 0.9301 | 0.9376 | 0.9338 | 0.9852 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
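A minimal usage sketch through the `token-classification` pipeline; `aggregation_strategy="simple"` merges sub-word pieces into whole entity spans, and the sentence is an illustrative example.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArBert/albert-base-v2-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```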
|
anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm | anuragshas | 2022-02-03T12:28:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6864
- Wer: 0.6707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.3322 | 14.81 | 400 | 3.7450 | 1.0 |
| 3.2662 | 29.63 | 800 | 3.2571 | 0.9996 |
| 1.6408 | 44.44 | 1200 | 0.9098 | 0.8162 |
| 1.2289 | 59.26 | 1600 | 0.6757 | 0.7099 |
| 1.0551 | 74.07 | 2000 | 0.6417 | 0.7044 |
| 0.966 | 88.89 | 2400 | 0.6365 | 0.6789 |
| 0.8713 | 103.7 | 2800 | 0.6617 | 0.6954 |
| 0.8055 | 118.52 | 3200 | 0.6371 | 0.6762 |
| 0.7489 | 133.33 | 3600 | 0.6798 | 0.6911 |
| 0.7073 | 148.15 | 4000 | 0.6567 | 0.6731 |
| 0.6609 | 162.96 | 4400 | 0.6742 | 0.6840 |
| 0.6435 | 177.78 | 4800 | 0.6862 | 0.6633 |
| 0.6282 | 192.59 | 5200 | 0.6865 | 0.6731 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
Baybars/wav2vec2-xls-r-1b-turkish | Baybars | 2022-02-03T10:09:31Z | 17 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-10500](https://huggingface.co/./checkpoint-10500) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7540
- Wer: 0.4647
- Cer: 0.1318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.999,0.9999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 120.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:------:|:---------------:|:------:|
| 1.0779 | 4.59 | 500 | 0.2354 | 0.8260 | 0.7395 |
| 0.7573 | 9.17 | 1000 | 0.2100 | 0.7544 | 0.6960 |
| 0.8225 | 13.76 | 1500 | 0.2021 | 0.6867 | 0.6672 |
| 0.621 | 18.35 | 2000 | 0.1874 | 0.6824 | 0.6209 |
| 0.6362 | 22.94 | 2500 | 0.1904 | 0.6712 | 0.6286 |
| 0.624 | 27.52 | 3000 | 0.1820 | 0.6940 | 0.6116 |
| 0.4781 | 32.11 | 3500 | 0.1735 | 0.6966 | 0.5989 |
| 0.5685 | 36.7 | 4000 | 0.1769 | 0.6742 | 0.5971 |
| 0.4384 | 41.28 | 4500 | 0.1767 | 0.6904 | 0.5999 |
| 0.5509 | 45.87 | 5000 | 0.1692 | 0.6734 | 0.5641 |
| 0.3665 | 50.46 | 5500 | 0.1680 | 0.7018 | 0.5662 |
| 0.3914 | 55.05 | 6000 | 0.1631 | 0.7121 | 0.5552 |
| 0.2467 | 59.63 | 6500 | 0.1563 | 0.6657 | 0.5374 |
| 0.2576 | 64.22 | 7000 | 0.1554 | 0.6920 | 0.5316 |
| 0.2711 | 68.81 | 7500 | 0.1495 | 0.6900 | 0.5176 |
| 0.2626 | 73.39 | 8000 | 0.1454 | 0.6843 | 0.5043 |
| 0.1377 | 77.98 | 8500 | 0.1470 | 0.7383 | 0.5101 |
| 0.2005 | 82.57 | 9000 | 0.1430 | 0.7228 | 0.5045 |
| 0.1355 | 87.16 | 9500 | 0.1375 | 0.7231 | 0.4869 |
| 0.0431 | 91.74 | 10000 | 0.1350 | 0.7397 | 0.4749 |
| 0.0586 | 96.33 | 10500 | 0.1339 | 0.7360 | 0.4754 |
| 0.0896 | 100.92 | 11000 | 0.1398 | 0.7187 | 0.4885 |
| 0.183 | 105.5 | 11500 | 0.1392 | 0.7310 | 0.4838 |
| 0.0963 | 110.09 | 12000 | 0.1362 | 0.7643 | 0.4759 |
| 0.0437 | 114.68 | 12500 | 0.1328 | 0.7525 | 0.4641 |
| 0.1122 | 119.27 | 13000 | 0.1317 | 0.7535 | 0.4651 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Hetarth/marian-finetuned-hi-hinglish | Hetarth | 2022-02-03T09:54:31Z | 8 | 0 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: marian-finetuned-hi-hinglish
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# marian-finetuned-hi-hinglish
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1869
- Validation Loss: 4.0607
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 279, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1869 | 4.0607 | 0 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
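The checkpoint is stored as TensorFlow weights, so a minimal generation sketch would use the TF auto classes. This assumes the tokenizer was exported together with the model, and the Hindi prompt is only an illustrative input.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Hetarth/marian-finetuned-hi-hinglish")
model = TFAutoModelForSeq2SeqLM.from_pretrained("Hetarth/marian-finetuned-hi-hinglish")

# "How are you?" in Hindi, used here as a throwaway test input.
inputs = tokenizer("तुम कैसे हो?", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```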
|
Rajan/Nepali_Word2Vec | Rajan | 2022-02-03T08:32:41Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
license: mit
---
https://github.com/R4j4n/Nepali-Word2Vec-from-scratch
How to clone:
```
git lfs install
git clone https://huggingface.co/Rajan/Nepali_Word2Vec
``` |
Atiqah/Atiqah | Atiqah | 2022-02-03T07:04:44Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
license: artistic-2.0
---
|
pritoms/distilroberta-base-YTTranscript23 | pritoms | 2022-02-03T05:52:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-YTTranscript23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-YTTranscript23
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 70 | 2.9007 |
| No log | 2.0 | 140 | 2.9651 |
| No log | 3.0 | 210 | 2.9374 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
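A minimal sketch of querying the checkpoint with the `fill-mask` pipeline; RoBERTa-style models use `<mask>` as the mask token, and the sentence is only an illustrative prompt.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pritoms/distilroberta-base-YTTranscript23")
# Print the top predictions for the masked position.
for prediction in fill_mask("In this video we are going to <mask> the model."):
    print(prediction["token_str"], round(prediction["score"], 4))
```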
|
sunitha/distilbert-base-uncased-3feb-2022-finetuned-squad | sunitha | 2022-02-03T05:06:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-3feb-2022-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-3feb-2022-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2276 | 1.0 | 5533 | 1.1641 |
| 0.9614 | 2.0 | 11066 | 1.1225 |
| 0.7769 | 3.0 | 16599 | 1.1470 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
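A minimal extractive question-answering sketch via the `question-answering` pipeline; both the question and the context below are illustrative placeholders.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sunitha/distilbert-base-uncased-3feb-2022-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint was fine-tuned on the SQuAD question answering dataset.",
)
print(result["answer"], result["score"])
```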
|
pritoms/distilgpt2-YTTranscriptTrial2 | pritoms | 2022-02-03T04:46:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-YTTranscriptTrial2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-YTTranscriptTrial2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 70 | 6.0027 |
| No log | 2.0 | 140 | 5.9072 |
| No log | 3.0 | 210 | 5.8738 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/denyah_ | huggingtweets | 2022-02-03T01:43:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/denyah_/1643852632266/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484264819049959425/siOsFP3t_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Den</div>
<div style="text-align: center; font-size: 14px;">@denyah_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Den.
| Data | Den |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 464 |
| Short tweets | 795 |
| Tweets kept | 1985 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3e5c08gr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @denyah_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1438ocp8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1438ocp8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/denyah_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Plim/xls-r-300m-lm-fr | Plim | 2022-02-02T23:29:54Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"fr",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language:
- fr
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-6000](https://huggingface.co/./checkpoint-6000) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2619
- Wer: 0.2457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.495 | 0.16 | 500 | 3.3883 | 1.0 |
| 2.9095 | 0.32 | 1000 | 2.9152 | 1.0000 |
| 1.8434 | 0.49 | 1500 | 1.0473 | 0.7446 |
| 1.4298 | 0.65 | 2000 | 0.5729 | 0.5130 |
| 1.1937 | 0.81 | 2500 | 0.3795 | 0.3450 |
| 1.1248 | 0.97 | 3000 | 0.3321 | 0.3052 |
| 1.0835 | 1.13 | 3500 | 0.3038 | 0.2805 |
| 1.0479 | 1.3 | 4000 | 0.2910 | 0.2689 |
| 1.0413 | 1.46 | 4500 | 0.2798 | 0.2593 |
| 1.014 | 1.62 | 5000 | 0.2727 | 0.2512 |
| 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 |
| 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
cahya/wav2vec2-base-turkish-artificial | cahya | 2022-02-02T15:44:36Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Base Turkish with Artificial Voices by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 57.60
---
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned [ceyda/wav2vec2-base-760](https://huggingface.co/ceyda/wav2vec2-base-760)
on the [Turkish Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio and collect the predicted transcriptions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.60 %
## Training
The Artificial Common Voice `train` and `validation` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2 | arjuntheprogrammer | 2022-02-02T15:16:39Z | 35 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-sentiment-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.7614
- name: F1
type: f1
value: 0.7614
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5882
- Accuracy: 0.7614
- F1: 0.7614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
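Because the base model is multilingual, a minimal sketch can pass reviews in different languages through the `text-classification` pipeline; the two example reviews are illustrative only.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2",
)
# One English and one French review, to exercise the multilingual tokenizer.
print(classifier(["This product exceeded my expectations!", "Ce produit est vraiment décevant."]))
```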
|
kmfoda/staging-pegasus-gmeetsamsum | kmfoda | 2022-02-02T14:34:58Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"feature-extraction",
"summarization",
"en",
"arxiv:1912.08777",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by adding 20% uniform noise to the importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newline and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleanups also changed, please refer to the change in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
shaina/covid_qa_mpnet | shaina | 2022-02-02T14:33:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mpnet",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
widget:
- text: "What is COVID-19?"
context: "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified in Wuhan, China, in December 2019.[7] The disease has since spread worldwide, leading to an ongoing pandemic."
- text: "Where was COVID-19 first discovered?"
context: "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event."
- text: "What is Post-COVID syndrome?"
context: "Long COVID, also known as post-COVID-19 syndrome, post-acute sequelae of COVID-19 (PASC), or chronic COVID syndrome (CCS) is a condition characterized by long-term sequelae appearing or persisting after the typical convalescence period of COVID-19. Long COVID can affect nearly every organ system, with sequelae including respiratory system disorders, nervous system and neurocognitive disorders, mental health disorders, metabolic disorders, cardiovascular disorders, gastrointestinal disorders, malaise, fatigue, musculoskeletal pain, and anemia. A wide range of symptoms are commonly reported, including fatigue, headaches, shortness of breath, anosmia (loss of smell), parosmia (distorted smell), muscle weakness, low fever and cognitive dysfunction."
---
# covid_qa_mpnet
This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on our COVID-19 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2477 | 1.0 | 3895 | 0.1869 |
| 0.1838 | 2.0 | 7790 | 0.1352 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
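A minimal sketch that mirrors the widget examples above, using the `question-answering` pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="shaina/covid_qa_mpnet")

context = (
    "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute "
    "respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified "
    "in Wuhan, China, in December 2019."
)
print(qa(question="What is COVID-19?", context=context))
```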
|
NbAiLab/wav2vec2-xlsr-300M-NPSC-OH | NbAiLab | 2022-02-02T06:10:42Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-300M-NPSC-OH
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-300M-NPSC-OH
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1692
- Wer: 0.1663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1638 | 0.66 | 500 | 3.0686 | 1.0 |
| 2.9311 | 1.31 | 1000 | 2.9208 | 1.0 |
| 2.4175 | 1.97 | 1500 | 1.5009 | 0.9049 |
| 1.4442 | 2.63 | 2000 | 0.4426 | 0.3783 |
| 1.2624 | 3.28 | 2500 | 0.3193 | 0.2998 |
| 1.1889 | 3.94 | 3000 | 0.2867 | 0.2630 |
| 1.1315 | 4.6 | 3500 | 0.2566 | 0.2444 |
| 1.0864 | 5.26 | 4000 | 0.2368 | 0.2294 |
| 1.093 | 5.91 | 4500 | 0.2240 | 0.2151 |
| 1.0368 | 6.57 | 5000 | 0.2117 | 0.2056 |
| 1.0178 | 7.23 | 5500 | 0.2020 | 0.1954 |
| 1.0035 | 7.88 | 6000 | 0.2005 | 0.1924 |
| 0.9759 | 8.54 | 6500 | 0.1971 | 0.1863 |
| 0.9795 | 9.2 | 7000 | 0.1892 | 0.1812 |
| 0.9601 | 9.85 | 7500 | 0.1863 | 0.1795 |
| 0.9673 | 10.51 | 8000 | 0.1809 | 0.1761 |
| 0.9233 | 11.17 | 8500 | 0.1818 | 0.1755 |
| 0.9382 | 11.83 | 9000 | 0.1767 | 0.1741 |
| 0.9242 | 12.48 | 9500 | 0.1743 | 0.1703 |
| 0.9703 | 13.14 | 10000 | 0.1711 | 0.1711 |
| 0.9139 | 13.8 | 10500 | 0.1718 | 0.1672 |
| 0.9073 | 14.45 | 11000 | 0.1700 | 0.1665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
navsad/navid_test_bert | navsad | 2022-02-02T04:52:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: navid_test_bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5834463254140851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# navid_test_bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8149
- Matthews Correlation: 0.5834
## Model description
More information needed
## Intended uses & limitations
More information needed
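Pending a fuller description, here is a short inference sketch (not from the original card). CoLA is a binary acceptability task, so depending on the saved config the labels may appear as LABEL_0/LABEL_1.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="navsad/navid_test_bert")
# Grammatical vs. ungrammatical English sentences
print(clf("The book was written by the author."))
print(clf("The book was wrote by the author."))
```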
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4598 | 1.0 | 1069 | 0.4919 | 0.5314 |
| 0.3228 | 2.0 | 2138 | 0.6362 | 0.5701 |
| 0.17 | 3.0 | 3207 | 0.8149 | 0.5834 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
BigSalmon/InfillFormalLincoln | BigSalmon | 2022-02-02T03:45:03Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InfillFormalLincoln")
```
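A generation sketch building on the loading snippet above (the decoding settings are illustrative, not prescribed by this card):
```python
prompt = (
    "informal english: i am very ready to do that just that.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```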
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
``` |
pdroberts/distilbert-base-uncased-finetuned-emotion | pdroberts | 2022-02-01T23:48:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mattmcclean/distilbert-base-uncased-finetuned-emotion | mattmcclean | 2022-02-01T19:48:01Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9252235175634111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2173
- Accuracy: 0.925
- F1: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
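Pending a fuller description, here is a quick inference sketch (not part of the auto-generated card). On recent transformers versions `top_k=None` returns a score for every emotion class; older versions use `return_all_scores=True` instead.
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mattmcclean/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all six emotion classes
)
print(clf("I can't believe how well this turned out!"))
```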
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.825 | 1.0 | 250 | 0.2925 | 0.915 | 0.9134 |
| 0.2444 | 2.0 | 500 | 0.2173 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
naleraphael/rasr_sample | naleraphael | 2022-02-01T18:18:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: rasr_sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rasr_sample
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3147
- Wer: 0.2676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3332 | 1.45 | 500 | 3.3031 | 1.0 |
| 2.9272 | 2.91 | 1000 | 2.9353 | 0.9970 |
| 2.0736 | 4.36 | 1500 | 1.1565 | 0.8714 |
| 1.7339 | 5.81 | 2000 | 0.7156 | 0.6688 |
| 1.5989 | 7.27 | 2500 | 0.5791 | 0.5519 |
| 1.4916 | 8.72 | 3000 | 0.5038 | 0.5169 |
| 1.4562 | 10.17 | 3500 | 0.4861 | 0.4805 |
| 1.3893 | 11.63 | 4000 | 0.4584 | 0.4761 |
| 1.3797 | 13.08 | 4500 | 0.4298 | 0.4686 |
| 1.3508 | 14.53 | 5000 | 0.4138 | 0.3744 |
| 1.3165 | 15.99 | 5500 | 0.4015 | 0.3578 |
| 1.281 | 17.44 | 6000 | 0.3883 | 0.3472 |
| 1.2682 | 18.89 | 6500 | 0.3904 | 0.3434 |
| 1.2477 | 20.35 | 7000 | 0.3726 | 0.3321 |
| 1.2364 | 21.8 | 7500 | 0.3685 | 0.3281 |
| 1.2041 | 23.26 | 8000 | 0.3597 | 0.3194 |
| 1.1901 | 24.71 | 8500 | 0.3542 | 0.3203 |
| 1.1903 | 26.16 | 9000 | 0.3500 | 0.3138 |
| 1.1677 | 27.61 | 9500 | 0.3458 | 0.3067 |
| 1.1718 | 29.07 | 10000 | 0.3595 | 0.3112 |
| 1.1562 | 30.52 | 10500 | 0.3433 | 0.3022 |
| 1.1392 | 31.97 | 11000 | 0.3440 | 0.2936 |
| 1.1258 | 33.43 | 11500 | 0.3396 | 0.2950 |
| 1.1067 | 34.88 | 12000 | 0.3379 | 0.2939 |
| 1.0953 | 36.34 | 12500 | 0.3370 | 0.2868 |
| 1.0835 | 37.79 | 13000 | 0.3317 | 0.2860 |
| 1.0772 | 39.24 | 13500 | 0.3302 | 0.2854 |
| 1.0853 | 40.7 | 14000 | 0.3265 | 0.2783 |
| 1.0689 | 42.15 | 14500 | 0.3306 | 0.2770 |
| 1.0394 | 43.6 | 15000 | 0.3233 | 0.2757 |
| 1.0581 | 45.06 | 15500 | 0.3199 | 0.2713 |
| 1.0362 | 46.51 | 16000 | 0.3154 | 0.2683 |
| 1.0406 | 47.96 | 16500 | 0.3176 | 0.2688 |
| 1.0082 | 49.42 | 17000 | 0.3149 | 0.2679 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
cahya/output | cahya | 2022-02-01T15:40:45Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1822
- Wer: 0.1423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
moussaKam/frugalscore_tiny_roberta_bert-score | moussaKam | 2022-02-01T10:50:57Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
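As a rough scoring sketch, assuming (as in the FrugalScore setup) that the checkpoint is a single-output regression model over (reference, candidate) pairs; the official scoring code lives in the project GitHub:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "moussaKam/frugalscore_tiny_roberta_bert-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

references = ["The cat sat on the mat."]
candidates = ["A cat was sitting on the mat."]
batch = tokenizer(references, candidates, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**batch).logits.squeeze(-1)  # one learned BERTScore-like value per pair
print(scores.tolist())
```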
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
moussaKam/frugalscore_medium_bert-base_bert-score | moussaKam | 2022-02-01T10:50:43Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
moussaKam/frugalscore_small_bert-base_bert-score | moussaKam | 2022-02-01T10:50:31Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
moussaKam/frugalscore_tiny_bert-base_bert-score | moussaKam | 2022-02-01T10:50:21Z | 4,310 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | # FrugalScore
FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore | |
MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar | MaryaAI | 2022-02-01T08:51:38Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0589
- Validation Loss: 5.3227
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
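Pending a fuller description, here is a translation sketch (not part of the auto-generated card). The repository ships TensorFlow weights, so the TF auto class is used; the input sentence is illustrative.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

name = "MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("The derivative measures the rate of change of a function.", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```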
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0589 | 5.3227 | 0 |
### Framework versions
- Transformers 4.17.0.dev0
- TensorFlow 2.7.0
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3
|
vachonni/wav2vec2-large-xls-r-300m-dansk-CV-80 | vachonni | 2022-02-01T07:55:36Z | 4 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-dansk-CV-80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dansk-CV-80
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Danish, using the [mozilla-foundation/common_voice_8_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6394
- eval_wer: 0.3682
- eval_runtime: 104.0466
- eval_samples_per_second: 13.359
- eval_steps_per_second: 1.672
- epoch: 21.28
- step: 2000
## Model description
Danish automatic speech recognition (ASR) model.
## Intended uses & limitations
More information needed
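Pending a fuller description, here is a lower-level transcription sketch (not part of the auto-generated card) that decodes the CTC output explicitly; the audio file name is a placeholder for a 16 kHz Danish recording.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

name = "vachonni/wav2vec2-large-xls-r-300m-dansk-CV-80"
processor = Wav2Vec2Processor.from_pretrained(name)
model = Wav2Vec2ForCTC.from_pretrained(name)

speech, sr = torchaudio.load("dansk_eksempel.wav")                      # placeholder file
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()   # model expects 16 kHz mono
inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```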
## Training and evaluation data
Danish subset of [mozilla-foundation/common_voice_8_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
huggingtweets/clamtime-madramami | huggingtweets | 2022-02-01T07:09:05Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/clamtime-madramami/1643699341002/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486460616927858690/H_L_HiW-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486839044906618880/x1Q9ED9b_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">clementine!!!! & riley, twink eliminator 🐾🏳️⚧️</div>
<div style="text-align: center; font-size: 14px;">@clamtime-madramami</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from clementine!!!! & riley, twink eliminator 🐾🏳️⚧️.
| Data | clementine!!!! | riley, twink eliminator 🐾🏳️⚧️ |
| --- | --- | --- |
| Tweets downloaded | 3239 | 3247 |
| Retweets | 340 | 114 |
| Short tweets | 872 | 607 |
| Tweets kept | 2027 | 2526 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lh3p7v6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clamtime-madramami's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gman3fy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gman3fy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/clamtime-madramami')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hady/wav2vec2-base-timit-demo-colab | hady | 2022-02-01T07:01:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Priyajay/xls-r-ab-test | Priyajay | 2022-02-01T04:29:17Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
language:
- hi
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 248.1278
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
jonfd/electra-small-is-no | jonfd | 2022-01-31T23:41:45Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"is",
"no",
"dataset:igc",
"dataset:ic3",
"dataset:jonfd/ICC",
"dataset:mc4",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- is
- no
license: cc-by-4.0
datasets:
- igc
- ic3
- jonfd/ICC
- mc4
---
# Icelandic-Norwegian ELECTRA-Small
This model was pretrained on the following corpora:
* The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
* The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic and Norwegian text obtained from .is and .no domains, respectively
The total size of the corpus after document-level deduplication and filtering was 7.41B tokens, split equally between the two languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 64,105 for 1.1 million steps, and otherwise with default settings.
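A small sketch for pulling contextual embeddings out of the discriminator (the example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "jonfd/electra-small-is-no"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Reykjavík er höfuðborg Íslands.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, tokens, hidden_size)
print(hidden.shape)
```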
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. |
philschmid/bert-mini-sst2-distilled | philschmid | 2022-01-31T23:34:03Z | 256 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-mini-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.856651376146789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-mini-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-4_H-256_A-4](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1792
- Accuracy: 0.8567
## Model description
More information needed
## Intended uses & limitations
More information needed
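Pending a fuller description, here is a short inference sketch (not part of the auto-generated card); label names depend on the saved config and may show up as LABEL_0/LABEL_1.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="philschmid/bert-mini-sst2-distilled")
print(clf(["a gripping, beautifully shot film", "flat characters and a predictable plot"]))
```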
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00021185586235152412
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1552 | 1.0 | 66 | 1.4847 | 0.8349 |
| 0.8451 | 2.0 | 132 | 1.3495 | 0.8624 |
| 0.5864 | 3.0 | 198 | 1.2257 | 0.8532 |
| 0.4553 | 4.0 | 264 | 1.2571 | 0.8544 |
| 0.3708 | 5.0 | 330 | 1.2132 | 0.8658 |
| 0.3086 | 6.0 | 396 | 1.2370 | 0.8589 |
| 0.2701 | 7.0 | 462 | 1.1900 | 0.8635 |
| 0.246 | 8.0 | 528 | 1.1792 | 0.8567 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
paintingpeter/distilbert-base-uncased-finetuned-clinc | paintingpeter | 2022-01-31T21:55:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7713
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
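Pending a fuller description, here is a quick intent-classification sketch (not part of the auto-generated card); the predicted label is one of the clinc_oos intents.
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="paintingpeter/distilbert-base-uncased-finetuned-clinc",
)
print(clf("I'd like to transfer 100 dollars from checking to savings."))
```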
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2892 | 1.0 | 318 | 3.2831 | 0.7426 |
| 2.6244 | 2.0 | 636 | 1.8739 | 0.8335 |
| 1.5442 | 3.0 | 954 | 1.1525 | 0.8926 |
| 1.0096 | 4.0 | 1272 | 0.8569 | 0.91 |
| 0.793 | 5.0 | 1590 | 0.7713 | 0.9174 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
glob-asr/wav2vec2-large-xls-r-300m-spanish-small | glob-asr | 2022-01-31T20:58:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-small
This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3596
- Wer: 0.2105
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1971 | 0.79 | 400 | 0.2169 | 0.2077 |
| 0.2293 | 1.58 | 800 | 0.2507 | 0.2418 |
| 0.2065 | 2.37 | 1200 | 0.2703 | 0.2459 |
| 0.1842 | 3.16 | 1600 | 0.2716 | 0.2495 |
| 0.1634 | 3.95 | 2000 | 0.2695 | 0.2510 |
| 0.1443 | 4.74 | 2400 | 0.2754 | 0.2435 |
| 0.1345 | 5.53 | 2800 | 0.3119 | 0.2654 |
| 0.1267 | 6.32 | 3200 | 0.3154 | 0.2573 |
| 0.1237 | 7.11 | 3600 | 0.3251 | 0.2666 |
| 0.1118 | 7.91 | 4000 | 0.3139 | 0.2503 |
| 0.1051 | 8.7 | 4400 | 0.3286 | 0.2573 |
| 0.0964 | 9.49 | 4800 | 0.3348 | 0.2587 |
| 0.0946 | 10.28 | 5200 | 0.3357 | 0.2587 |
| 0.0897 | 11.07 | 5600 | 0.3408 | 0.2590 |
| 0.0812 | 11.86 | 6000 | 0.3380 | 0.2560 |
| 0.079 | 12.65 | 6400 | 0.3304 | 0.2415 |
| 0.0753 | 13.44 | 6800 | 0.3557 | 0.2540 |
| 0.0717 | 14.23 | 7200 | 0.3507 | 0.2519 |
| 0.0691 | 15.02 | 7600 | 0.3554 | 0.2587 |
| 0.0626 | 15.81 | 8000 | 0.3619 | 0.2520 |
| 0.0661 | 16.6 | 8400 | 0.3609 | 0.2564 |
| 0.0582 | 17.39 | 8800 | 0.3818 | 0.2520 |
| 0.0556 | 18.18 | 9200 | 0.3685 | 0.2410 |
| 0.0515 | 18.97 | 9600 | 0.3658 | 0.2367 |
| 0.0478 | 19.76 | 10000 | 0.3701 | 0.2413 |
| 0.0486 | 20.55 | 10400 | 0.3681 | 0.2371 |
| 0.0468 | 21.34 | 10800 | 0.3607 | 0.2370 |
| 0.0452 | 22.13 | 11200 | 0.3499 | 0.2286 |
| 0.0399 | 22.92 | 11600 | 0.3647 | 0.2282 |
| 0.0393 | 23.72 | 12000 | 0.3638 | 0.2255 |
| 0.0381 | 24.51 | 12400 | 0.3359 | 0.2202 |
| 0.0332 | 25.3 | 12800 | 0.3488 | 0.2177 |
| 0.033 | 26.09 | 13200 | 0.3628 | 0.2175 |
| 0.0311 | 26.88 | 13600 | 0.3695 | 0.2195 |
| 0.0294 | 27.67 | 14000 | 0.3624 | 0.2164 |
| 0.0281 | 28.46 | 14400 | 0.3688 | 0.2113 |
| 0.0274 | 29.25 | 14800 | 0.3596 | 0.2105 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
shaina/CoQUAD_MPNet | shaina | 2022-01-31T18:22:46Z | 0 | 0 | null | [
"MPNet",
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
tags:
- MPNet
license: apache-2.0
dataset:
- covid-19
---
# CoQUAD_MPNet: MPNet model for COVID-19
## Introduction
It is a state-of-the-art MPNet-based question-answering model for a COVID-19 dataset, with a focus on post-COVID conditions.
## How to use for Deepset Haystack
```python
# Load data
from datasets import load_dataset
dataset = load_dataset("shaina/covid19")

# Haystack pipeline (notebook-style setup; a running Elasticsearch instance is assumed)
!sudo apt-get install git-lfs
!git lfs install
!git clone https://huggingface.co/shaina/CoQUAD_MPNet

from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import ElasticsearchRetriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline
from pprint import pprint

# Connection settings below are illustrative defaults
document_store = ElasticsearchDocumentStore(host="localhost", index="covid19")
retriever = ElasticsearchRetriever(document_store=document_store)
reader = FARMReader(model_name_or_path="CoQUAD_MPNet", use_gpu=True)  # path of the cloned repo

pipe = ExtractiveQAPipeline(reader, retriever)
prediction = pipe.run(
    query="What is post-COVID?", params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 5}}
)
pprint(prediction)
```
---
## Authors
Shaina Raza
--- |
peter-explosion-ai/en_pipeline | peter-explosion-ai | 2022-01-31T17:04:42Z | 5 | 0 | spacy | [
"spacy",
"text-classification",
"en",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_pipeline
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `POSITIVE`, `NEGATIVE` |
</details>
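A usage sketch, assuming the packaged pipeline from this repository has been installed locally (for example via `pip install` of the released wheel):
```python
import spacy

# Load the installed package by its name
nlp = spacy.load("en_pipeline")
doc = nlp("This product exceeded my expectations.")
print(doc.cats)  # {'POSITIVE': ..., 'NEGATIVE': ...}
```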
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 55.70 |
| `CATS_MICRO_P` | 58.65 |
| `CATS_MICRO_R` | 58.65 |
| `CATS_MICRO_F` | 58.65 |
| `CATS_MACRO_P` | 61.88 |
| `CATS_MACRO_R` | 58.69 |
| `CATS_MACRO_F` | 55.70 |
| `CATS_MACRO_AUC` | 63.53 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TEXTCAT_LOSS` | 3.74 | |
osanseviero/test_meta | osanseviero | 2022-01-31T15:21:09Z | 0 | 0 | spacy | [
"spacy",
"token-classification",
"license:lgpl-lr",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- spacy
- token-classification
languages:
- fr
license: lgpl-lr
other-thing: test
---
|
huggingtweets/tks | huggingtweets | 2022-01-31T10:20:15Z | 0 | 0 | null | [
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/tks/1643624411056/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1044664291050344449/vKKJxtBF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">高須正和@NT深圳コミュニティ/TAKASU@NT Shenzhen</div>
<div style="text-align: center; font-size: 14px;">@tks</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 高須正和@NT深圳コミュニティ/TAKASU@NT Shenzhen.
| Data | 高須正和@NT深圳コミュニティ/TAKASU@NT Shenzhen |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 1831 |
| Short tweets | 825 |
| Tweets kept | 592 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lg0mgsp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/j1ak5d5p) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/j1ak5d5p/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/goando-tsuchinao83-za09313103 | huggingtweets | 2022-01-31T09:56:33Z | 0 | 0 | null | [
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/goando-tsuchinao83-za09313103/1643622988627/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/715665333218979842/fLLzpFee_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1145832571214815232/KYNcOP04_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1281544202627674112/zglo72WL_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch</div>
<div style="text-align: center; font-size: 14px;">@goando-tsuchinao83-za09313103</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch.
| Data | 土屋尚史 / Goodpatch | Go Ando / PREDUCTS / THE GUILD | shun nozaki / Goodpatch |
| --- | --- | --- | --- |
| Tweets downloaded | 3236 | 3250 | 798 |
| Retweets | 1577 | 97 | 34 |
| Short tweets | 914 | 1729 | 458 |
| Tweets kept | 745 | 1424 | 306 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31bsh75f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goando-tsuchinao83-za09313103's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/26i8c30r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/26i8c30r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goando-tsuchinao83-za09313103')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ggreenwald | huggingtweets | 2022-01-31T09:49:22Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ggreenwald/1643622558420/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1092582027994509312/cpYWuYI9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Glenn Greenwald</div>
<div style="text-align: center; font-size: 14px;">@ggreenwald</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Glenn Greenwald.
| Data | Glenn Greenwald |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 324 |
| Short tweets | 160 |
| Tweets kept | 2764 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y433olp5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ggreenwald's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/duljho5y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/duljho5y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ggreenwald')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TajMahaladeen/pokemon_gptj | TajMahaladeen | 2022-01-31T06:12:31Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
---
|
NbAiLab/xls-r-1b-npsc | NbAiLab | 2022-01-31T04:33:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:04Z | ---
license: apache-2.0
---
|
huggingtweets/alphaxchange-coinmarketcap-techcrunch | huggingtweets | 2022-01-31T01:31:27Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/alphaxchange-coinmarketcap-techcrunch/1643592683390/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475337078544248835/JRWM0Hsl_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1096066608034918401/m8wnTWsX_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1469027897209987081/fCdlufKH_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CoinMarketCap & TechCrunch & AlphaExchange</div>
<div style="text-align: center; font-size: 14px;">@alphaxchange-coinmarketcap-techcrunch</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CoinMarketCap & TechCrunch & AlphaExchange.
| Data | CoinMarketCap | TechCrunch | AlphaExchange |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3250 | 185 |
| Retweets | 247 | 29 | 25 |
| Short tweets | 209 | 9 | 17 |
| Tweets kept | 2793 | 3212 | 143 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ii2008f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alphaxchange-coinmarketcap-techcrunch's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28z1wzo5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28z1wzo5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alphaxchange-coinmarketcap-techcrunch')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
eldor-97/MarianMix_en-10 | eldor-97 | 2022-01-30T23:25:27Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: MarianMix_en-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MarianMix_en-10
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0752
- Bleu: 14.601
- Gen Len: 45.8087
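The card does not include a usage snippet; below is a minimal inference sketch (the input sentence is illustrative, and any target-language control tokens would follow the base Marian model's conventions, which are not documented here):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="eldor-97/MarianMix_en-10")
print(translator("How are you today?", max_length=64))
```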
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 99
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|
| 2.1136 | 0.44 | 500 | 2.0044 | 0.2655 | 109.0201 |
| 1.1422 | 0.89 | 1000 | 1.7516 | 1.4123 | 71.0 |
| 0.9666 | 1.33 | 1500 | 1.5219 | 3.6611 | 64.6888 |
| 0.8725 | 1.78 | 2000 | 1.3606 | 4.6539 | 77.1641 |
| 0.7655 | 2.22 | 2500 | 1.2586 | 8.3456 | 60.3837 |
| 0.7149 | 2.67 | 3000 | 1.1953 | 11.2247 | 50.5921 |
| 0.6719 | 3.11 | 3500 | 1.1541 | 10.4303 | 54.3776 |
| 0.6265 | 3.56 | 4000 | 1.1186 | 13.3231 | 48.283 |
| 0.6157 | 4.0 | 4500 | 1.0929 | 13.8467 | 46.569 |
| 0.5736 | 4.44 | 5000 | 1.0848 | 14.2731 | 45.5035 |
| 0.5683 | 4.89 | 5500 | 1.0752 | 14.601 | 45.8087 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
fgaim/t5-small-squad-v2 | fgaim | 2022-01-30T21:35:54Z | 34 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
datasets:
- c4
- squad
tags:
- text2text-generation
widget:
- text: "question: What is the atomic number for oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8."
- text: "question: What is the chemical symbol of Oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8."
license: apache-2.0
---
T5-small for QA
---
[Google's T5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) pre-trained on the [C4](https://huggingface.co/datasets/c4) dataset, fine-tuned for Question-Answering on [SQuAD v2](https://huggingface.co/datasets/squad_v2) with the following hyperparameters:
```
optimizer=adamw_hf
learning_rate=3e-5
adam_beta1=0.9
adam_beta2=0.999
adam_epsilon=1e-08
num_train_epochs=2
per_device_train_batch_size=12
```
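For reference, a sketch of how these settings could map onto `Seq2SeqTrainingArguments` when reproducing the run (the output directory is a placeholder and data preprocessing is omitted):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-squad-v2",   # placeholder output path
    optim="adamw_hf",
    learning_rate=3e-5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    num_train_epochs=2,
    per_device_train_batch_size=12,
)
```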
Usage
---
The input [context and question] has to be prepared in a specific way as follows:
```python
from transformers import pipeline
def prep_input(_context, _question):
return " ".join(["question:", _question.strip(), "context:", _context.strip()])
t5qa = pipeline("text2text-generation", "fgaim/t5-small-squad-v2")
context = """
Oxygen is a chemical element with symbol O and atomic number 8. It is a member of the chalcogen group on the periodic table and is a highly reactive nonmetal and oxidizing agent that readily forms compounds (notably oxides) with most elements. By mass, oxygen is the third-most abundant element in the universe, after hydrogen and helium. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless and odorless diatomic gas with the formula O2.
"""
t5qa(prep_input(context, "How many atoms combine to form dioxygen?"))
# [{'generated_text': 'two'}]
t5qa(prep_input(context, "What element makes up almost half of the earth's crust by mass?"))
# [{'generated_text': 'oxygen'}]
t5qa(prep_input(context, "What are the most abundent elements of the universe by mass?"))
# [{'generated_text': 'hydrogen and helium'}]
```
|
osama7/t5-summarization-multinews | osama7 | 2022-01-30T20:42:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:05Z | This is a t5-base model trained on the multi_news dataset for abstraction summarization |
gagan3012/xls-r-300m-hi | gagan3012 | 2022-01-30T20:39:40Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: xls-r-300m-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-hi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7522
- Wer: 1.0091
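A minimal transcription sketch using the automatic-speech-recognition pipeline (the audio file name is a placeholder; 16 kHz input is assumed, as for other wav2vec2 checkpoints):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="gagan3012/xls-r-300m-hi")
print(asr("sample_hindi.wav"))  # placeholder path to a 16 kHz recording
# {'text': '...'}
```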
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0417 | 2.59 | 500 | 5.1484 | 1.0 |
| 3.3722 | 5.18 | 1000 | 3.3380 | 1.0001 |
| 1.9752 | 7.77 | 1500 | 1.3910 | 1.0074 |
| 1.5868 | 10.36 | 2000 | 1.0298 | 1.0084 |
| 1.4413 | 12.95 | 2500 | 0.9313 | 1.0175 |
| 1.3296 | 15.54 | 3000 | 0.8966 | 1.0194 |
| 1.2746 | 18.13 | 3500 | 0.8875 | 1.0097 |
| 1.2147 | 20.73 | 4000 | 0.8746 | 1.0089 |
| 1.1774 | 23.32 | 4500 | 0.8383 | 1.0198 |
| 1.129 | 25.91 | 5000 | 0.7848 | 1.0167 |
| 1.0995 | 28.5 | 5500 | 0.7992 | 1.0210 |
| 1.0665 | 31.09 | 6000 | 0.7878 | 1.0107 |
| 1.0321 | 33.68 | 6500 | 0.7653 | 1.0082 |
| 1.0068 | 36.27 | 7000 | 0.7635 | 1.0065 |
| 0.9916 | 38.86 | 7500 | 0.7728 | 1.0090 |
| 0.9735 | 41.45 | 8000 | 0.7688 | 1.0070 |
| 0.9745 | 44.04 | 8500 | 0.7455 | 1.0097 |
| 0.9677 | 46.63 | 9000 | 0.7605 | 1.0099 |
| 0.9313 | 49.22 | 9500 | 0.7527 | 1.0097 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Kayvane/distilbert-complaints-product | Kayvane | 2022-01-30T19:15:13Z | 33 | 3 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:consumer_complaints",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:04Z | ---
tags:
- generated_from_trainer
datasets:
- consumer_complaints
model-index:
- name: distilbert-complaints-product
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-complaints-product
This model was trained on the [CFPB](https://www.consumerfinance.gov/data-research/consumer-complaints/) consumer complaints dataset, also made available through the HuggingFace Datasets library. The model predicts the type of financial complaint based on the text provided.
## Model description
A DistilBERT text classification model with 18 possible classes that determines the nature of a financial customer complaint.
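A minimal usage sketch with the text-classification pipeline (the complaint text is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Kayvane/distilbert-complaints-product")
classifier("I was charged an overdraft fee even though my account had sufficient funds.")
# [{'label': '<one of the 18 product classes>', 'score': ...}]
```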
## Intended uses & limitations
This model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation:
- **Infrastructure:** Terraform
- **ML Ops:** HuggingFace (Datasets, Hub, Transformers)
- **Ml Explainability:** SHAP
- **Cloud:** AWS
- Model Hosting: Lambda
- DB Backend: DynamoDB
- Orchestration: Step-Functions
- UI Hosting: EC2
- Routing: API Gateway
- **UI:** Budibase
## Training and evaluation data
consumer_complaints dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
Sindhu/rembert-squad2 | Sindhu | 2022-01-30T18:35:08Z | 5 | 3 | transformers | [
"transformers",
"pytorch",
"rembert",
"question-answering",
"multilingual",
"dataset:squad2",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
language:
- multilingual
tags:
- question-answering
datasets:
- squad2
metrics:
- squad2
---
# Rembert Squad2
This model is fine-tuned for question answering on SQuAD 2.0 from the [RemBERT checkpoint](https://huggingface.co/google/rembert).
## Hyperparameters
```
Batch Size: 4
Grad Accumulation Steps = 8
Total epochs = 3
MLM Checkpoint = "rembert"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_ratio = 0.1
doc_stride = 128
```
## Squad 2 Evaluation stats:
Metrics generated from [the official Squad2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)
```json
{
"exact": 84.51107554956624,
"f1": 87.46644042781853,
"total": 11873,
"HasAns_exact": 80.97165991902834,
"HasAns_f1": 86.89086491219469,
"HasAns_total": 5928,
"NoAns_exact": 88.04037005887301,
"NoAns_f1": 88.04037005887301,
"NoAns_total": 5945
}
```
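A minimal usage sketch with the question-answering pipeline (the example passage is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/rembert-squad2")
qa(question="Where is the Eiffel Tower located?",
   context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.")
# {'score': ..., 'start': ..., 'end': ..., 'answer': 'Paris, France'}
```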
For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man) |
Erfan/mT5-base_Farsi_Title_Generator | Erfan | 2022-01-30T18:00:42Z | 11 | 2 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"Title-Generation",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z | ---
language:
- fa
tags:
- Title-Generation
metrics:
- ROUGE
---
|
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small | tomascufaro | 2022-01-30T17:23:59Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-small
This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3763
- Wer: 0.1791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2277 | 0.26 | 400 | 0.2601 | 0.2291 |
| 0.2932 | 0.53 | 800 | 0.2950 | 0.2670 |
| 0.3019 | 0.79 | 1200 | 0.3247 | 0.2766 |
| 0.2987 | 1.05 | 1600 | 0.3031 | 0.2606 |
| 0.261 | 1.32 | 2000 | 0.2994 | 0.2620 |
| 0.2651 | 1.58 | 2400 | 0.3134 | 0.2700 |
| 0.264 | 1.85 | 2800 | 0.3016 | 0.2641 |
| 0.2475 | 2.11 | 3200 | 0.3135 | 0.2661 |
| 0.2269 | 2.37 | 3600 | 0.3029 | 0.2562 |
| 0.2389 | 2.64 | 4000 | 0.3035 | 0.2549 |
| 0.2319 | 2.9 | 4400 | 0.3022 | 0.2551 |
| 0.2123 | 3.16 | 4800 | 0.3256 | 0.2638 |
| 0.2094 | 3.43 | 5200 | 0.3227 | 0.2712 |
| 0.2121 | 3.69 | 5600 | 0.3085 | 0.2596 |
| 0.207 | 3.96 | 6000 | 0.3041 | 0.2597 |
| 0.1809 | 4.22 | 6400 | 0.3122 | 0.2524 |
| 0.1846 | 4.48 | 6800 | 0.3254 | 0.2579 |
| 0.1885 | 4.75 | 7200 | 0.2958 | 0.2437 |
| 0.1923 | 5.01 | 7600 | 0.3136 | 0.2502 |
| 0.1626 | 5.27 | 8000 | 0.3059 | 0.2488 |
| 0.1704 | 5.54 | 8400 | 0.3082 | 0.2515 |
| 0.1674 | 5.8 | 8800 | 0.3196 | 0.2509 |
| 0.1691 | 6.06 | 9200 | 0.3193 | 0.25 |
| 0.1499 | 6.33 | 9600 | 0.3529 | 0.2635 |
| 0.1568 | 6.59 | 10000 | 0.3241 | 0.2481 |
| 0.1538 | 6.86 | 10400 | 0.3354 | 0.2476 |
| 0.1503 | 7.12 | 10800 | 0.3180 | 0.2402 |
| 0.136 | 7.38 | 11200 | 0.3230 | 0.2397 |
| 0.1413 | 7.65 | 11600 | 0.3178 | 0.2451 |
| 0.147 | 7.91 | 12000 | 0.3170 | 0.2389 |
| 0.1341 | 8.17 | 12400 | 0.3380 | 0.2501 |
| 0.1329 | 8.44 | 12800 | 0.3265 | 0.2414 |
| 0.1314 | 8.7 | 13200 | 0.3281 | 0.2482 |
| 0.1312 | 8.97 | 13600 | 0.3259 | 0.2539 |
| 0.12 | 9.23 | 14000 | 0.3291 | 0.2424 |
| 0.1193 | 9.49 | 14400 | 0.3302 | 0.2412 |
| 0.1189 | 9.76 | 14800 | 0.3376 | 0.2407 |
| 0.1217 | 10.02 | 15200 | 0.3334 | 0.2400 |
| 0.1118 | 10.28 | 15600 | 0.3359 | 0.2368 |
| 0.1139 | 10.55 | 16000 | 0.3239 | 0.2335 |
| 0.1106 | 10.81 | 16400 | 0.3374 | 0.2352 |
| 0.1081 | 11.07 | 16800 | 0.3585 | 0.2434 |
| 0.1063 | 11.34 | 17200 | 0.3639 | 0.2472 |
| 0.1041 | 11.6 | 17600 | 0.3399 | 0.2423 |
| 0.1062 | 11.87 | 18000 | 0.3410 | 0.2388 |
| 0.1012 | 12.13 | 18400 | 0.3597 | 0.2413 |
| 0.0953 | 12.39 | 18800 | 0.3440 | 0.2296 |
| 0.097 | 12.66 | 19200 | 0.3440 | 0.2269 |
| 0.0968 | 12.92 | 19600 | 0.3498 | 0.2333 |
| 0.0902 | 13.18 | 20000 | 0.3471 | 0.2290 |
| 0.0868 | 13.45 | 20400 | 0.3462 | 0.2266 |
| 0.0892 | 13.71 | 20800 | 0.3373 | 0.2227 |
| 0.0902 | 13.97 | 21200 | 0.3377 | 0.2240 |
| 0.0846 | 14.24 | 21600 | 0.3484 | 0.2237 |
| 0.0839 | 14.5 | 22000 | 0.3706 | 0.2260 |
| 0.0834 | 14.77 | 22400 | 0.3430 | 0.2268 |
| 0.0841 | 15.03 | 22800 | 0.3489 | 0.2259 |
| 0.076 | 15.29 | 23200 | 0.3626 | 0.2281 |
| 0.0771 | 15.56 | 23600 | 0.3624 | 0.2268 |
| 0.0773 | 15.82 | 24000 | 0.3440 | 0.2252 |
| 0.0759 | 16.08 | 24400 | 0.3532 | 0.2170 |
| 0.0745 | 16.35 | 24800 | 0.3686 | 0.2188 |
| 0.0713 | 16.61 | 25200 | 0.3691 | 0.2195 |
| 0.0718 | 16.88 | 25600 | 0.3470 | 0.2108 |
| 0.0685 | 17.14 | 26000 | 0.3756 | 0.2179 |
| 0.0689 | 17.4 | 26400 | 0.3542 | 0.2149 |
| 0.0671 | 17.67 | 26800 | 0.3461 | 0.2165 |
| 0.0737 | 17.93 | 27200 | 0.3473 | 0.2238 |
| 0.0669 | 18.19 | 27600 | 0.3441 | 0.2138 |
| 0.0629 | 18.46 | 28000 | 0.3721 | 0.2155 |
| 0.0632 | 18.72 | 28400 | 0.3667 | 0.2126 |
| 0.0647 | 18.98 | 28800 | 0.3579 | 0.2097 |
| 0.0603 | 19.25 | 29200 | 0.3670 | 0.2130 |
| 0.0604 | 19.51 | 29600 | 0.3750 | 0.2142 |
| 0.0619 | 19.78 | 30000 | 0.3804 | 0.2160 |
| 0.0603 | 20.04 | 30400 | 0.3764 | 0.2124 |
| 0.0577 | 20.3 | 30800 | 0.3858 | 0.2097 |
| 0.0583 | 20.57 | 31200 | 0.3520 | 0.2089 |
| 0.0561 | 20.83 | 31600 | 0.3615 | 0.2079 |
| 0.0545 | 21.09 | 32000 | 0.3824 | 0.2032 |
| 0.0525 | 21.36 | 32400 | 0.3858 | 0.2091 |
| 0.0524 | 21.62 | 32800 | 0.3956 | 0.2099 |
| 0.0527 | 21.89 | 33200 | 0.3667 | 0.2025 |
| 0.0514 | 22.15 | 33600 | 0.3708 | 0.2032 |
| 0.0506 | 22.41 | 34000 | 0.3815 | 0.2053 |
| 0.0478 | 22.68 | 34400 | 0.3671 | 0.2007 |
| 0.049 | 22.94 | 34800 | 0.3758 | 0.2003 |
| 0.0477 | 23.2 | 35200 | 0.3786 | 0.2014 |
| 0.045 | 23.47 | 35600 | 0.3732 | 0.1998 |
| 0.0426 | 23.73 | 36000 | 0.3737 | 0.2010 |
| 0.0444 | 23.99 | 36400 | 0.3600 | 0.1990 |
| 0.0433 | 24.26 | 36800 | 0.3689 | 0.1976 |
| 0.0442 | 24.52 | 37200 | 0.3787 | 0.1968 |
| 0.0419 | 24.79 | 37600 | 0.3652 | 0.1961 |
| 0.042 | 25.05 | 38000 | 0.3820 | 0.1964 |
| 0.0419 | 25.31 | 38400 | 0.3786 | 0.1919 |
| 0.0376 | 25.58 | 38800 | 0.3842 | 0.1934 |
| 0.0385 | 25.84 | 39200 | 0.3767 | 0.1900 |
| 0.0396 | 26.1 | 39600 | 0.3688 | 0.1888 |
| 0.0371 | 26.37 | 40000 | 0.3815 | 0.1894 |
| 0.0363 | 26.63 | 40400 | 0.3748 | 0.1878 |
| 0.0377 | 26.9 | 40800 | 0.3713 | 0.1852 |
| 0.0352 | 27.16 | 41200 | 0.3734 | 0.1851 |
| 0.0355 | 27.42 | 41600 | 0.3776 | 0.1874 |
| 0.0333 | 27.69 | 42000 | 0.3867 | 0.1841 |
| 0.0348 | 27.95 | 42400 | 0.3823 | 0.1839 |
| 0.0329 | 28.21 | 42800 | 0.3795 | 0.1822 |
| 0.0325 | 28.48 | 43200 | 0.3711 | 0.1813 |
| 0.0328 | 28.74 | 43600 | 0.3721 | 0.1781 |
| 0.0312 | 29.0 | 44000 | 0.3803 | 0.1816 |
| 0.0318 | 29.27 | 44400 | 0.3758 | 0.1794 |
| 0.0302 | 29.53 | 44800 | 0.3792 | 0.1784 |
| 0.0339 | 29.8 | 45200 | 0.3763 | 0.1791 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
jiobiala24/wav2vec2-base-checkpoint-10 | jiobiala24 | 2022-01-30T16:10:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-10
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-9](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-9) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9567
- Wer: 0.3292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2892 | 1.62 | 1000 | 0.5745 | 0.3467 |
| 0.235 | 3.23 | 2000 | 0.6156 | 0.3423 |
| 0.1782 | 4.85 | 3000 | 0.6299 | 0.3484 |
| 0.1504 | 6.46 | 4000 | 0.6475 | 0.3446 |
| 0.133 | 8.08 | 5000 | 0.6753 | 0.3381 |
| 0.115 | 9.69 | 6000 | 0.7834 | 0.3529 |
| 0.101 | 11.31 | 7000 | 0.7924 | 0.3426 |
| 0.0926 | 12.92 | 8000 | 0.7887 | 0.3465 |
| 0.0863 | 14.54 | 9000 | 0.7674 | 0.3439 |
| 0.0788 | 16.16 | 10000 | 0.8648 | 0.3435 |
| 0.0728 | 17.77 | 11000 | 0.8460 | 0.3395 |
| 0.0693 | 19.39 | 12000 | 0.8941 | 0.3451 |
| 0.0637 | 21.0 | 13000 | 0.9079 | 0.3356 |
| 0.0584 | 22.62 | 14000 | 0.8851 | 0.3336 |
| 0.055 | 24.23 | 15000 | 0.9400 | 0.3338 |
| 0.0536 | 25.85 | 16000 | 0.9387 | 0.3335 |
| 0.0481 | 27.46 | 17000 | 0.9664 | 0.3337 |
| 0.0485 | 29.08 | 18000 | 0.9567 | 0.3292 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
imvladikon/charbert-bert-wiki | imvladikon | 2022-01-30T11:35:48Z | 63 | 3 | transformers | [
"transformers",
"pytorch",
"language model",
"en",
"dataset:wikipedia",
"arxiv:2011.01513",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- language model
datasets:
- wikipedia
---
Pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT).
```
@misc{ma2020charbert,
title={CharBERT: Character-aware Pre-trained Language Model},
author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu},
year={2020},
eprint={2011.01513},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
sshasnain/finetune-wav2vec2-large-xlsr-bengali | sshasnain | 2022-01-30T07:55:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"audio",
"speech",
"dataset:custom",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language: Bengali
datasets:
- custom
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: finetune-wav2vec2-large-xlsr-bengali
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: custom
type: custom
args: ben
metrics:
- name: Test WER
type: wer
value: 0.011
---
# finetune-wav2vec2-large-xlsr-bengali
***
## Usage
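A minimal inference sketch (the audio file name is a placeholder; 16 kHz mono input is assumed):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "sshasnain/finetune-wav2vec2-large-xlsr-bengali"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# load a placeholder recording and resample to 16 kHz
speech, _ = librosa.load("sample_bengali.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```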
*** |
pinecone/mpnet-retriever-discourse | pinecone | 2022-01-30T07:23:58Z | 4 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"question-answering",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- question-answering
---
# MPNet Retriever (Discourse)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used as a retriever model in open-domain question-answering tasks.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/mpnet-retriever-discourse')
embeddings = model.encode(sentences)
print(embeddings)
```
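Since the model is meant to act as a retriever, here is a short sketch of ranking candidate passages against a question with cosine similarity (the passages are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('pinecone/mpnet-retriever-discourse')

query = "How do I load a dataset from the Hugging Face Hub?"
passages = [
    "You can download any dataset from the Hub with load_dataset('name').",
    "Streamlit apps are started locally with the command streamlit run app.py.",
]

# embed the query and passages, then rank passages by cosine similarity
scores = util.cos_sim(model.encode(query), model.encode(passages))[0]
best = int(scores.argmax())
print(passages[best], float(scores[best]))
```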
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/mpnet-retriever-discourse')
model = AutoModel.from_pretrained('pinecone/mpnet-retriever-discourse')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was fine-tuned on question-answer pairs scraped from several ML-focused Discourse forums \[HuggingFace, PyTorch, Streamlit, TensorFlow\].
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 105 with parameters:
```
{'batch_size': 12}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Fine-tuned by [James Briggs](https://www.youtube.com/c/jamesbriggs) at [Pinecone](https://www.pinecone.io). Learn more about the [fine-tuning process here](https://www.pinecone.io/learn/retriever-models/). |
jogonba2/mbarthez-copy_mechanism-hal_articles | jogonba2 | 2022-01-30T03:52:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mbarthez-copy_mechanism-hal_articles
results:
- task:
name: Summarization
type: summarization
metrics:
- name: Rouge1
type: rouge
value: 36.548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbarthez-davide_articles-copy_enhanced
This model is a fine-tuned version of [moussaKam/mbarthez](https://huggingface.co/moussaKam/mbarthez) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4905
- Rouge1: 36.548
- Rouge2: 19.6282
- Rougel: 30.2513
- Rougelsum: 30.2765
- Gen Len: 25.7238
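A minimal summarization sketch (the French input is illustrative; this assumes the checkpoint loads with the standard seq2seq classes, which the card does not confirm for the copy-mechanism variant):
```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="jogonba2/mbarthez-copy_mechanism-hal_articles")
summarizer("Cet article présente une méthode de génération de résumés pour des "
           "articles scientifiques en français déposés sur HAL.", max_length=32)
```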
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6706 | 1.0 | 33552 | 1.5690 | 31.2477 | 16.5455 | 26.9855 | 26.9754 | 18.6217 |
| 1.3446 | 2.0 | 67104 | 1.5060 | 32.1108 | 17.1408 | 27.7833 | 27.7703 | 18.9115 |
| 1.3245 | 3.0 | 100656 | 1.4905 | 32.9084 | 17.7027 | 28.2912 | 28.2975 | 18.9801 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
anton-l/wav2vec2-xls-r-common_voice-tr-ft-100sh | anton-l | 2022-01-30T02:42:22Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5806
- Wer: 0.3998
- Cer: 0.1053
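For reference, a small sketch of how WER and CER figures like the ones above can be computed from transcriptions (this assumes the jiwer package; the strings are illustrative):
```python
from jiwer import wer, cer

reference = "merhaba dünya bugün hava çok güzel"
hypothesis = "merhaba dünya bugün hava güzel"

print("WER:", wer(reference, hypothesis))  # word error rate
print("CER:", cer(reference, hypothesis))  # character error rate
```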
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.5369 | 17.0 | 500 | 0.6021 | 0.6366 | 0.1727 |
| 0.3542 | 34.0 | 1000 | 0.5265 | 0.4906 | 0.1278 |
| 0.1866 | 51.0 | 1500 | 0.5805 | 0.4768 | 0.1261 |
| 0.1674 | 68.01 | 2000 | 0.5336 | 0.4518 | 0.1186 |
| 0.19 | 86.0 | 2500 | 0.5676 | 0.4427 | 0.1151 |
| 0.0815 | 103.0 | 3000 | 0.5510 | 0.4268 | 0.1125 |
| 0.0545 | 120.0 | 3500 | 0.5608 | 0.4175 | 0.1099 |
| 0.0299 | 137.01 | 4000 | 0.5875 | 0.4222 | 0.1124 |
| 0.0267 | 155.0 | 4500 | 0.5882 | 0.4026 | 0.1063 |
| 0.025 | 172.0 | 5000 | 0.5806 | 0.3998 | 0.1053 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
huggingtweets/hashimoto_lo | huggingtweets | 2022-01-30T01:43:17Z | 0 | 0 | null | [
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/hashimoto_lo/1643506993033/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/922396157493383169/LLKd_U72_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">橋下徹</div>
<div style="text-align: center; font-size: 14px;">@hashimoto_lo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 橋下徹.
| Data | 橋下徹 |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 759 |
| Short tweets | 137 |
| Tweets kept | 2351 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wi9n714/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hashimoto_lo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/240mb7l6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/240mb7l6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hashimoto_lo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/tjonthefloor | huggingtweets | 2022-01-29T22:53:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/tjonthefloor/1643496777814/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1466388620256948228/kkRWm2mR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ash ψ</div>
<div style="text-align: center; font-size: 14px;">@tjonthefloor</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ash ψ.
| Data | ash ψ |
| --- | --- |
| Tweets downloaded | 470 |
| Retweets | 144 |
| Short tweets | 99 |
| Tweets kept | 227 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20bqlhah/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tjonthefloor's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ntjhfs1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ntjhfs1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tjonthefloor')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|