modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
chrisvinsen/wav2vec2-17 | 2be61c71765fc104518125f7ac849d6e2239ea65 | 2022-06-01T06:05:03.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-17 | 2 | null | transformers | 26,200 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-17
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1355
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 30
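These values map directly onto `transformers.TrainingArguments`. A minimal, hedged sketch of the equivalent configuration is shown below; `output_dir` and relying on the `Trainer` defaults for the Adam betas/epsilon are assumptions, not details taken from this card.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="wav2vec2-17",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 32 * 8 = total train batch size of 256
    warmup_steps=50,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
)
```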
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 7.5865 | 1.38 | 25 | 3.4717 | 1.0 |
| 2.9762 | 2.77 | 50 | 3.1483 | 1.0 |
| 2.9265 | 4.16 | 75 | 3.1946 | 1.0 |
| 2.8813 | 5.55 | 100 | 3.0504 | 1.0 |
| 2.887 | 6.93 | 125 | 3.1358 | 1.0 |
| 2.9124 | 8.33 | 150 | 3.1653 | 1.0 |
| 2.8854 | 9.71 | 175 | 3.1243 | 1.0 |
| 2.91 | 11.11 | 200 | 3.0879 | 1.0 |
| 2.8868 | 12.49 | 225 | 3.1658 | 1.0 |
| 2.8827 | 13.88 | 250 | 3.1236 | 1.0 |
| 2.911 | 15.27 | 275 | 3.1206 | 1.0 |
| 2.8829 | 16.66 | 300 | 3.1171 | 1.0 |
| 2.9105 | 18.05 | 325 | 3.1127 | 1.0 |
| 2.8845 | 19.44 | 350 | 3.1377 | 1.0 |
| 2.8803 | 20.82 | 375 | 3.1157 | 1.0 |
| 2.9102 | 22.22 | 400 | 3.1265 | 1.0 |
| 2.8803 | 23.6 | 425 | 3.1493 | 1.0 |
| 2.8837 | 24.99 | 450 | 3.1085 | 1.0 |
| 2.9106 | 26.38 | 475 | 3.1099 | 1.0 |
| 2.8787 | 27.77 | 500 | 3.1352 | 1.0 |
| 2.9132 | 29.16 | 525 | 3.1355 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
jamie613/xlmr_mask_punctuation | efbab8a2393a549b76bea6a7a385d389d065361d | 2022-06-01T05:22:42.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | jamie613 | null | jamie613/xlmr_mask_punctuation | 2 | null | transformers | 26,201 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr_mask_punctuation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_mask_punctuation
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5160
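The card does not document how the model is meant to be queried. Assuming the standard fill-mask pipeline applies to this checkpoint, a minimal usage sketch might look like the following; the example sentence is purely illustrative.

```python
from transformers import pipeline

# Hypothetical usage sketch; the expected input format is not documented in this card.
fill = pipeline("fill-mask", model="jamie613/xlmr_mask_punctuation")
text = f"Punctuation restoration is useful{fill.tokenizer.mask_token} or so the model name suggests"
print(fill(text))
```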
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6352 | 0.05 | 500 | 1.4744 |
| 1.4623 | 0.11 | 1000 | 1.0987 |
| 1.1947 | 0.16 | 1500 | 1.1878 |
| 1.0693 | 0.21 | 2000 | 0.8077 |
| 0.9465 | 0.26 | 2500 | 0.8038 |
| 0.8394 | 0.32 | 3000 | 0.7772 |
| 0.8184 | 0.37 | 3500 | 0.8529 |
| 0.7773 | 0.42 | 4000 | 0.6255 |
| 0.7338 | 0.47 | 4500 | 0.6993 |
| 0.6935 | 0.53 | 5000 | 0.5952 |
| 0.6713 | 0.58 | 5500 | 0.5605 |
| 0.6636 | 0.63 | 6000 | 0.6588 |
| 0.6169 | 0.68 | 6500 | 0.5154 |
| 0.6045 | 0.74 | 7000 | 0.5374 |
| 0.5853 | 0.79 | 7500 | 0.5033 |
| 0.5752 | 0.84 | 8000 | 0.5002 |
| 0.5263 | 0.89 | 8500 | 0.5300 |
| 0.5512 | 0.95 | 9000 | 0.5138 |
| 0.541 | 1.0 | 9500 | 0.5160 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kimcando/para_test_4800 | 62f5c827d290caf1677733eec3ddfbeebe16cfd5 | 2022-06-01T06:38:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | kimcando | null | kimcando/para_test_4800 | 2 | null | transformers | 26,202 | Entry not found |
chrisvinsen/wav2vec2-final-1-lm-2 | 976e9ae3b031cb269a2016adb6cbd260e86e9bf1 | 2022-06-02T11:16:49.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/wav2vec2-final-1-lm-2 | 2 | null | transformers | 26,203 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-19
WER 0.283
WER 0.126 with a 3-gram language model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6305
- Wer: 0.4499
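The 3-gram figure quoted above implies decoding with an external n-gram language model. A hedged sketch of such decoding follows; it assumes the repository bundles a pyctcdecode-compatible language model loadable via `Wav2Vec2ProcessorWithLM`, which this card does not confirm, and the audio sample is only a placeholder.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

# Assumption: the repo ships a KenLM 3-gram next to the acoustic model (requires pyctcdecode + kenlm).
processor = Wav2Vec2ProcessorWithLM.from_pretrained("chrisvinsen/wav2vec2-final-1-lm-2")
model = Wav2Vec2ForCTC.from_pretrained("chrisvinsen/wav2vec2-final-1-lm-2")

# Placeholder audio; substitute speech in the language this model was trained on.
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# batch_decode runs beam search against the bundled n-gram language model.
print(processor.batch_decode(logits.numpy()).text)
```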
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4816 | 2.74 | 400 | 1.0717 | 0.8927 |
| 0.751 | 5.48 | 800 | 0.7155 | 0.7533 |
| 0.517 | 8.22 | 1200 | 0.7039 | 0.6675 |
| 0.3988 | 10.96 | 1600 | 0.5935 | 0.6149 |
| 0.3179 | 13.7 | 2000 | 0.6477 | 0.5999 |
| 0.2755 | 16.44 | 2400 | 0.5549 | 0.5798 |
| 0.2343 | 19.18 | 2800 | 0.6626 | 0.5798 |
| 0.2103 | 21.92 | 3200 | 0.6488 | 0.5674 |
| 0.1877 | 24.66 | 3600 | 0.5874 | 0.5339 |
| 0.1719 | 27.4 | 4000 | 0.6354 | 0.5389 |
| 0.1603 | 30.14 | 4400 | 0.6612 | 0.5210 |
| 0.1401 | 32.88 | 4800 | 0.6676 | 0.5131 |
| 0.1286 | 35.62 | 5200 | 0.6366 | 0.5075 |
| 0.1159 | 38.36 | 5600 | 0.6064 | 0.4977 |
| 0.1084 | 41.1 | 6000 | 0.6530 | 0.4835 |
| 0.0974 | 43.84 | 6400 | 0.6118 | 0.4853 |
| 0.0879 | 46.58 | 6800 | 0.6316 | 0.4770 |
| 0.0815 | 49.32 | 7200 | 0.6125 | 0.4664 |
| 0.0708 | 52.05 | 7600 | 0.6449 | 0.4683 |
| 0.0651 | 54.79 | 8000 | 0.6068 | 0.4571 |
| 0.0555 | 57.53 | 8400 | 0.6305 | 0.4499 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
callmefons/t5-small-finetuned-xsum | a669158772481f93c2808804b9e86a24a589b09a | 2022-06-02T05:10:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | callmefons | null | callmefons/t5-small-finetuned-xsum | 2 | null | transformers | 26,204 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
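Since the card leaves usage undocumented, here is a minimal, hedged sketch of running the checkpoint through the standard summarization pipeline; the input text and generation settings are illustrative assumptions.

```python
from transformers import pipeline

# Hypothetical usage sketch; input text and length limits are made-up examples.
summarizer = pipeline("summarization", model="callmefons/t5-small-finetuned-xsum")
text = "The tower is 324 metres tall, about the same height as an 81-storey building, and is the tallest structure in Paris."
print(summarizer(text, max_length=30, min_length=5, do_sample=False))
```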
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 2.8006 | 0.0 | 0.0 | 0.0 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ThePixOne/SeconBERTa1 | db554094a0fb22734d49147a7fb6acb54ead99cb | 2022-06-02T05:51:30.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ThePixOne | null | ThePixOne/SeconBERTa1 | 2 | null | sentence-transformers | 26,205 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 20799 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4159.8,
"weight_decay": 0.01
}
```
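Putting the data loader, loss, and `fit()` parameters above together, a hedged reconstruction of the training setup could look like the sketch below. The base checkpoint name and the toy training pairs are assumptions; the actual training data is not published with this card.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Assumption: a RoBERTa-base-style checkpoint with mean pooling, as in the architecture below.
model = SentenceTransformer("roberta-base")

# Toy examples standing in for the real (unpublished) training pairs.
train_examples = [
    InputExample(texts=["an anchor sentence", "its positive counterpart"]),
    InputExample(texts=["another anchor", "another positive"]),
]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=4160,              # rounded from the 4159.8 listed above
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```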
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
HIT-TMG/Dialogue-BART-base | 0ae23a90741f5154858fc3828ddba2a3f827ab29 | 2022-06-02T08:47:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | HIT-TMG | null | HIT-TMG/Dialogue-BART-base | 2 | null | transformers | 26,206 | Entry not found |
RUCAIBox/mtl-task-dialog | 8b1c1a935185d51636c538596ffc08900615a139 | 2022-06-27T02:27:39.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"license:apache-2.0"
] | text2text-generation | false | RUCAIBox | null | RUCAIBox/mtl-task-dialog | 2 | null | transformers | 26,207 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Given the task dialog: Belief state [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example1"
- text: "Given the task dialog: Dialogue action [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example2"
- text: "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example3"
---
# MTL-task-dialog
The MTL-task-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-task-dialog was pre-trained in a supervised fashion on a mixture of labeled task-oriented system datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-task-dialog is specially designed for task-oriented system tasks, such as MultiWOZ.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-task-dialog")
>>> inputs = tokenizer(
... "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['What date and time would you like to go?']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
AAkhilesh/wav2vec2-large-xls-r-300m-ta-colab | d1f7f5d2f6c2846003536c258d16ca1826f53905 | 2022-06-14T20:39:54.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | AAkhilesh | null | AAkhilesh/wav2vec2-large-xls-r-300m-ta-colab | 2 | null | transformers | 26,208 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ta-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
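For context, a hedged sketch of preparing Common Voice audio for this checkpoint is shown below. It assumes the legacy `common_voice` dataset with the Tamil (`ta`) configuration and that the repository ships a matching processor, neither of which this card confirms.

```python
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor

# Assumption: the repo contains processor/tokenizer files compatible with Wav2Vec2Processor.
processor = Wav2Vec2Processor.from_pretrained("AAkhilesh/wav2vec2-large-xls-r-300m-ta-colab")

cv_ta = load_dataset("common_voice", "ta", split="train[:1%]")
cv_ta = cv_ta.cast_column("audio", Audio(sampling_rate=16_000))  # XLS-R expects 16 kHz input

sample = cv_ta[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
print(inputs.input_values.shape)
```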
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Classroom-workshop/assignment1-omar | bb3308c59bc40ca58f041be93f52d236b6372038 | 2022-06-02T15:20:42.000Z | [
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"transformers",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Classroom-workshop | null | Classroom-workshop/assignment1-omar | 2 | null | transformers | 26,209 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: wav2vec2-base-960h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.4
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 8.6
---
# Wav2Vec2-Base-960h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
This is the base model, pretrained and fine-tuned on 960 hours of Librispeech 16 kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16 kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_pred(batch):
    # With batched=True, batch["audio"] is a list of audio dicts, so collect the raw arrays first.
    input_values = processor([sample["array"] for sample in batch["audio"]], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.4 | 8.6 | |
Classroom-workshop/assignment1-joane | a6cb0bdf84650829517ef27391875f6b19da5780 | 2022-06-02T15:23:19.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:mit",
"model-index"
] | automatic-speech-recognition | false | Classroom-workshop | null | Classroom-workshop/assignment1-joane | 2 | null | transformers | 26,210 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: s2t-small-librispeech-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.0
---
# S2T-SMALL-LIBRISPEECH-ASR
`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
*Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece)
so be sure to install those packages before running the examples.*
You can either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
input_features = processor(
ds[0]["audio"]["array"],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_ids=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
    # Features are computed directly from the "audio" column; with batched=True it is a list of
    # audio dicts, so collect the raw arrays first.
    features = processor([sample["array"] for sample in batch["audio"]], sampling_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")
    gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
    batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["audio"])
print("WER:", wer(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 4.3 | 9.0 |
## Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` |
Classroom-workshop/assignment1-maria | cb88a8f85da31f52c9e2064ddc569789038d03ea | 2022-06-02T15:24:32.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"arxiv:1904.08779",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:mit",
"model-index"
] | automatic-speech-recognition | false | Classroom-workshop | null | Classroom-workshop/assignment1-maria | 2 | null | transformers | 26,211 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: s2t-small-librispeech-asr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 9.0
---
# S2T-SMALL-LIBRISPEECH-ASR
`s2t-small-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard
autoregressive cross-entropy loss and generates the transcripts autoregressively.
## Intended uses & limitations
This model can be used for end-to-end speech recognition (ASR).
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
*Note: The feature extractor depends on [torchaudio](https://github.com/pytorch/audio) and the tokenizer depends on [sentencepiece](https://github.com/google/sentencepiece)
so be sure to install those packages before running the examples.*
You can either install those as extra speech dependencies with
`pip install "transformers[speech,sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
input_features = processor(
ds[0]["audio"]["array"],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_ids=input_features)
transcription = processor.batch_decode(generated_ids)
```
#### Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr)
*"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
    # Features are computed directly from the "audio" column; with batched=True it is a list of
    # audio dicts, so collect the raw arrays first.
    features = processor([sample["array"] for sample in batch["audio"]], sampling_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")
    gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
    batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["audio"])
print("WER:", wer(predictions=result["transcription"], references=result["text"]))
```
*Result (WER)*:
| "clean" | "other" |
|:-------:|:-------:|
| 4.3 | 9.0 |
## Training data
The S2T-SMALL-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of
approximately 1000 hours of 16kHz read English speech.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000.
### Training
The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively.
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
``` |
stevemobs/deberta-base-combined-squad1-aqa-and-newsqa-1epoch | d7bac89f375eaffd42fab0dc70dec6ecf9179f84 | 2022-06-02T17:59:05.000Z | [
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | stevemobs | null | stevemobs/deberta-base-combined-squad1-aqa-and-newsqa-1epoch | 2 | null | transformers | 26,212 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-base-combined-squad1-aqa-and-newsqa-1epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-combined-squad1-aqa-and-newsqa-1epoch
This model is a fine-tuned version of [stevemobs/deberta-base-combined-squad1-aqa](https://huggingface.co/stevemobs/deberta-base-combined-squad1-aqa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6851
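Usage is not documented in this card; assuming the standard extractive question-answering pipeline applies, a minimal sketch might look like this (question and context are made-up examples):

```python
from transformers import pipeline

# Hypothetical usage sketch with illustrative inputs.
qa = pipeline("question-answering", model="stevemobs/deberta-base-combined-squad1-aqa-and-newsqa-1epoch")
result = qa(question="Who wrote the report?", context="The report was written by the data team in 2021.")
print(result)  # expected: an answer span such as "the data team" with a confidence score
```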
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6508 | 1.0 | 17307 | 0.6851 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
awghuku/wav2vec2-base-timit-demo-google-colab | 066408c8260db3220f28461d92080b3fe2ff2674 | 2022-06-02T18:35:00.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | awghuku | null | awghuku/wav2vec2-base-timit-demo-google-colab | 2 | 0 | transformers | 26,213 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4732
- Wer: 0.3300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.2982 | 1.0 | 500 | 1.3852 | 0.9990 |
| 0.8067 | 2.01 | 1000 | 0.5318 | 0.5140 |
| 0.4393 | 3.01 | 1500 | 0.4500 | 0.4570 |
| 0.3007 | 4.02 | 2000 | 0.4259 | 0.4091 |
| 0.2306 | 5.02 | 2500 | 0.4092 | 0.3962 |
| 0.1845 | 6.02 | 3000 | 0.3949 | 0.3834 |
| 0.1516 | 7.03 | 3500 | 0.4144 | 0.3759 |
| 0.1347 | 8.03 | 4000 | 0.3958 | 0.3689 |
| 0.1217 | 9.04 | 4500 | 0.4455 | 0.3754 |
| 0.1039 | 10.04 | 5000 | 0.4228 | 0.3684 |
| 0.0921 | 11.04 | 5500 | 0.4310 | 0.3566 |
| 0.082 | 12.05 | 6000 | 0.4549 | 0.3617 |
| 0.078 | 13.05 | 6500 | 0.4535 | 0.3661 |
| 0.0668 | 14.06 | 7000 | 0.4726 | 0.3557 |
| 0.0648 | 15.06 | 7500 | 0.4414 | 0.3512 |
| 0.0581 | 16.06 | 8000 | 0.4781 | 0.3548 |
| 0.057 | 17.07 | 8500 | 0.4626 | 0.3588 |
| 0.0532 | 18.07 | 9000 | 0.5065 | 0.3495 |
| 0.0442 | 19.08 | 9500 | 0.4645 | 0.3390 |
| 0.0432 | 20.08 | 10000 | 0.4786 | 0.3466 |
| 0.0416 | 21.08 | 10500 | 0.4487 | 0.3425 |
| 0.0337 | 22.09 | 11000 | 0.4878 | 0.3416 |
| 0.0305 | 23.09 | 11500 | 0.4787 | 0.3413 |
| 0.0319 | 24.1 | 12000 | 0.4707 | 0.3395 |
| 0.0262 | 25.1 | 12500 | 0.4875 | 0.3345 |
| 0.0266 | 26.1 | 13000 | 0.4801 | 0.3343 |
| 0.025 | 27.11 | 13500 | 0.4926 | 0.3320 |
| 0.022 | 28.11 | 14000 | 0.4894 | 0.3313 |
| 0.0227 | 29.12 | 14500 | 0.4732 | 0.3300 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Danastos/nq_bert_el_4 | 28ac630b0cab726b89ffe99e3448d01d91aaa570 | 2022-06-19T12:24:39.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Danastos | null | Danastos/nq_bert_el_4 | 2 | null | transformers | 26,214 | Entry not found |
erickfm/t5-large-finetuned-bias | 0799e47b772bcab4cfd1d881d2999e0f09323c4b | 2022-06-02T20:32:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WNC",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-large-finetuned-bias | 2 | null | transformers | 26,215 | ---
language:
- en
license: apache-2.0
datasets:
- WNC
metrics:
- accuracy
---
This model is a fine-tuned checkpoint of [T5-large](https://huggingface.co/t5-large), fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset of 180,000 biased and neutralized sentence pairs generated from Wikipedia edits tagged for “neutral point of view”. This model reaches an accuracy of [?] on a dev split of the WNC.
For more details about T5, check out this [model card](https://huggingface.co/t5-large).
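The card does not specify an input format or task prefix for the fine-tuned model. A hedged usage sketch, assuming plain text-to-text generation with no prefix, is shown below; the example sentence is illustrative only.

```python
from transformers import pipeline

# Hypothetical usage sketch; whether a task prefix is required is not documented.
neutralizer = pipeline("text2text-generation", model="erickfm/t5-large-finetuned-bias")
print(neutralizer("The senator made a ridiculous claim about the new policy."))
```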
|
symons/finetuning-sentiment-model-3000-samples | 39580f7f3ec407655e511e51220799f889023daa | 2022-06-02T23:32:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:rotten_tomatoes_movie_review",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | symons | null | symons/finetuning-sentiment-model-3000-samples | 2 | null | transformers | 26,216 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes_movie_review
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes_movie_review
type: rotten_tomatoes_movie_review
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8433333333333334
- name: F1
type: f1
value: 0.840677966101695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the rotten_tomatoes_movie_review dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8692
- Accuracy: 0.8433
- F1: 0.8407
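For completeness, a minimal, hedged sketch of querying the classifier through the standard text-classification pipeline follows; the review snippet is an illustrative example, and the label names returned by the model are not documented here.

```python
from transformers import pipeline

# Hypothetical usage sketch with an illustrative movie-review snippet.
classifier = pipeline("text-classification", model="symons/finetuning-sentiment-model-3000-samples")
print(classifier("A sharp, funny script and terrific performances throughout."))
```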
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
erickfm/t5-large-finetuned-bias-v2 | 1af0ee8a1850add577e837bf6f9f6772b6ce79f7 | 2022-06-02T22:58:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-large-finetuned-bias-v2 | 2 | null | transformers | 26,217 | Entry not found |
DVillada/T5_fine_tunning_NLP_test | cc4d069d07effcaf0f676e0d2ab3db2dcfe0523f | 2022-06-03T03:07:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | DVillada | null | DVillada/T5_fine_tunning_NLP_test | 2 | null | transformers | 26,218 | ---
license: cc-by-4.0
---
In this example model, I want to test how to summarize a short text using a very small corpus of training data. The data contains two columns: Text and Summary. This model was created in Python through the Google Colab interface, using the Hugging Face libraries for this task.
Diego Villada |
lewtun/t5-small-finetuned-arxiv | e09a7ddac00a57cb5a1ca757d2e15318719a24a1 | 2022-06-03T08:23:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | lewtun | null | lewtun/t5-small-finetuned-arxiv | 2 | null | transformers | 26,219 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-arxiv
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1556
- Rouge1: 37.8405
- Rouge2: 20.4483
- Rougel: 33.996
- Rougelsum: 34.0071
- Gen Len: 15.8214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 2.3825 | 1.0 | 3564 | 2.1556 | 37.8405 | 20.4483 | 33.996 | 34.0071 | 15.8214 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
unhcr/hatespeech-detection | bfbaac9b69ffb66c9d3f82382c9f7dd66b5a149a | 2022-06-03T13:27:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:unhcr-hatespeech",
"transformers",
"text classification",
"hate speech",
"offensive language",
"hatecheck"
] | text-classification | false | unhcr | null | unhcr/hatespeech-detection | 2 | 1 | transformers | 26,220 | ---
language: en
tags:
- text classification
- hate speech
- offensive language
- hatecheck
datasets:
- unhcr-hatespeech
metrics:
- f1
- hatecheck
---
Frederik Gaasdal Jensen • Henry Stoll • Sippo Rossi • Raghava Rao Mukkamala
# UNHCR Hate Speech
## Model Output
|
Worldman/t5_70_articles | 1aa05900292c0067ec85e3150e6a20c81c0c1e7f | 2022-06-03T18:50:22.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Worldman | null | Worldman/t5_70_articles | 2 | null | transformers | 26,221 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_70_articles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_70_articles
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
enteramine/bert-fa-zwnj-base-finetuned | d4723be40ae4c9c254497f724e610b03945dbdbc | 2022-06-05T15:46:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | enteramine | null | enteramine/bert-fa-zwnj-base-finetuned | 2 | null | transformers | 26,222 | Entry not found |
simecek/cDNABERT_v0 | 6fdca3f8c5212cbabf28d8dedb441046dd90da9c | 2022-06-03T21:42:20.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/cDNABERT_v0 | 2 | null | transformers | 26,223 | Entry not found |
VedantS01/bert-finetuned-squad | 2d24d4e1969a0d8d3cff784e35cac0ae67d75b55 | 2022-06-24T18:52:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | VedantS01 | null | VedantS01/bert-finetuned-squad | 2 | null | transformers | 26,224 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AlekseyKorshuk/transfer-learning-gpt | 09488e25eaa77d10d71a188066170b4609665e71 | 2022-06-04T13:30:03.000Z | [
"pytorch",
"openai-gpt",
"text-generation",
"transformers"
] | text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/transfer-learning-gpt | 2 | null | transformers | 26,225 | Entry not found |
yanekyuk/berturk-cased-keyword-discriminator | c0370cd4b1bdfe9d768d426e7b6e48ba1a4e0a8d | 2022-06-04T18:18:17.000Z | [
"pytorch",
"bert",
"token-classification",
"tr",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | yanekyuk | null | yanekyuk/berturk-cased-keyword-discriminator | 2 | null | transformers | 26,226 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- tr
widget:
- text: "İngiltere'de düzenlenen Avrupa Tekvando ve Para Tekvando Şampiyonası’nda millî tekvandocular 5 altın, 2 gümüş ve 4 bronz olmak üzere 11, millî para tekvandocular ise 4 altın, 3 gümüş ve 1 bronz olmak üzere 8 madalya kazanarak takım halinde Avrupa şampiyonu oldu."
- text: "Füme somon dedik ama aslında lox salamuralanmış somon anlamına geliyor, füme etme opsiyonel. Lox bagel, 1930'larda Eggs Benedict furyasında New Yorklu Yahudi cemaati tarafından koşer bir alternatif olarak çıkan bir lezzet. Günümüzde benim hangover yüreğim dâhil dünyanın birçok yerinde enfes bir kahvaltı sandviçi."
- text: "Türkiye'de son aylarda sıklıkla tartışılan konut satışı karşılığında yabancılara vatandaşlık verilmesi konusunu beyin göçü kapsamında ele almak mümkün. Daha önce 250 bin dolar olan vatandaşlık bedeli yükselen tepkiler üzerine 400 bin dolara çıkarılmıştı. Türkiye'den göç eden iyi eğitimli kişilerin , gittikleri ülkelerde 250 bin dolar tutarında yabancı yatırıma denk olduğu göz önüne alındığında nitelikli insan gücünün yabancılara konut karşılığında satılan vatandaşlık bedelin eş olduğunu görüyoruz. Yurt dışına giden her bir vatandaşın yüksek teknolojili katma değer üreten sektörlere yapacağı katkılar göz önünde bulundurulduğunda bu açığın inşaat sektörüyle kapatıldığını da görüyoruz. Beyin göçü konusunda sadece ekonomik perspektiften bakıldığında bile kısa vadeli döviz kaynağı yaratmak için kullanılan vatandaşlık satışı yerine beyin göçünü önleyecek önlemler alınmasının ülkemize çok daha faydalı olacağı sonucunu çıkarıyoruz."
- text: "Türkiye’de resmî verilere göre, 15 ve daha yukarı yaştaki kişilerde mevsim etkisinden arındırılmış işsiz sayısı, bu yılın ilk çeyreğinde bir önceki çeyreğe göre 50 bin kişi artarak 3 milyon 845 bin kişi oldu. Mevsim etkisinden arındırılmış işsizlik oranı ise 0,1 puanlık artışla %11,4 seviyesinde gerçekleşti. İşsizlik oranı, ilk çeyrekte geçen yılın aynı çeyreğine göre 1,7 puan azaldı."
- text: "Boeing’in insansız uzay aracı Starliner, birtakım sorunlara rağmen Uluslararası Uzay İstasyonuna (ISS) ulaşarak ilk kez başarılı bir şekilde kenetlendi. Aracın ISS’te beş gün kalmasını takiben sorunsuz bir şekilde New Mexico’ya inmesi halinde Boeing, sonbaharda astronotları yörüngeye göndermek için Starliner’ı kullanabilir.\n\nNeden önemli? NASA’nın personal aracı üretmeyi durdurmasından kaynaklı olarak görevli astronotlar ve kozmonotlar, ISS’te Rusya’nın ürettiği uzay araçları ile taşınıyordu. Starliner’ın kendini kanıtlaması ise bu konuda Rusya’ya olan bağımlılığın potansiyel olarak ortadan kalkabileceği anlamına geliyor."
model-index:
- name: berturk-keyword-discriminator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# berturk-keyword-discriminator
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4196
- Precision: 0.6729
- Recall: 0.6904
- Accuracy: 0.9163
- F1: 0.6815
- Ent/precision: 0.6776
- Ent/accuracy: 0.7365
- Ent/f1: 0.7058
- Con/precision: 0.6640
- Con/accuracy: 0.6151
- Con/f1: 0.6386
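The metrics above suggest a keyword-span tagger. A hedged usage sketch with the token-classification pipeline follows; the aggregation strategy and the Turkish example sentence are assumptions, since the card does not document the expected usage.

```python
from transformers import pipeline

# Hypothetical usage sketch; aggregation_strategy="simple" merges sub-word predictions into spans.
extractor = pipeline(
    "token-classification",
    model="yanekyuk/berturk-cased-keyword-discriminator",
    aggregation_strategy="simple",
)
print(extractor("Merkez bankası faiz kararını bu hafta açıklayacak."))
```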
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | Ent/precision | Ent/accuracy | Ent/f1 | Con/precision | Con/accuracy | Con/f1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|:-------------:|:------------:|:------:|:-------------:|:------------:|:------:|
| 0.1899 | 1.0 | 1875 | 0.1927 | 0.6330 | 0.6682 | 0.9163 | 0.6502 | 0.6283 | 0.7398 | 0.6795 | 0.6438 | 0.5513 | 0.5940 |
| 0.137 | 2.0 | 3750 | 0.1988 | 0.6405 | 0.6959 | 0.9160 | 0.6671 | 0.6461 | 0.7475 | 0.6931 | 0.6297 | 0.6116 | 0.6205 |
| 0.101 | 3.0 | 5625 | 0.2375 | 0.6494 | 0.7188 | 0.9173 | 0.6824 | 0.6497 | 0.7743 | 0.7066 | 0.6488 | 0.6281 | 0.6383 |
| 0.0767 | 4.0 | 7500 | 0.2699 | 0.6533 | 0.7188 | 0.9154 | 0.6845 | 0.6575 | 0.7741 | 0.7111 | 0.6449 | 0.6285 | 0.6366 |
| 0.057 | 5.0 | 9375 | 0.3188 | 0.6696 | 0.6914 | 0.9163 | 0.6803 | 0.6790 | 0.7405 | 0.7084 | 0.6518 | 0.6112 | 0.6308 |
| 0.0423 | 6.0 | 11250 | 0.3646 | 0.6773 | 0.6846 | 0.9171 | 0.6809 | 0.6787 | 0.7388 | 0.7075 | 0.6746 | 0.5959 | 0.6328 |
| 0.0339 | 7.0 | 13125 | 0.4007 | 0.6711 | 0.6816 | 0.9151 | 0.6763 | 0.6782 | 0.7283 | 0.7023 | 0.6575 | 0.6055 | 0.6304 |
| 0.0282 | 8.0 | 15000 | 0.4196 | 0.6729 | 0.6904 | 0.9163 | 0.6815 | 0.6776 | 0.7365 | 0.7058 | 0.6640 | 0.6151 | 0.6386 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mailenpellegrino/transformer | 10fd03980991e5923fc1fd072056886db84a10a1 | 2022-07-28T14:47:27.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | mailenpellegrino | null | mailenpellegrino/transformer | 2 | 1 | transformers | 26,227 | Entry not found |
Splend1dchan/wav2vec2-large-lv60_mt5-base_textlna_bs64 | d3e422aaa19167ec496311f39114bae0e8b3e7c6 | 2022-06-06T00:35:23.000Z | [
"pytorch",
"speechmix",
"transformers"
] | null | false | Splend1dchan | null | Splend1dchan/wav2vec2-large-lv60_mt5-base_textlna_bs64 | 2 | null | transformers | 26,228 | Entry not found |
ITESM/st_demo_2 | 2993fabf03e37df362f1eadc21d0a9fe5c916681 | 2022-06-05T04:38:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"sentence-transformers",
"sentence-similarity",
"license:apache-2.0"
] | sentence-similarity | false | ITESM | null | ITESM/st_demo_2 | 2 | null | sentence-transformers | 26,229 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as support from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
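As a minimal sketch, this in-batch objective can be written as follows (function and variable names and the similarity scale are illustrative assumptions, not the exact code in `train_script.py`):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarities between every anchor and every candidate in the batch.
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    scores = anchor_emb @ positive_emb.T * scale  # shape: (batch, batch)
    # For row i, the true pair sits in column i; all other columns act as in-batch negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```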
#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
ITESM/st_demo_4 | 79f84408266156cb9c9a6cd07eb6e8c6fda0f3ba | 2022-06-05T04:51:06.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ITESM | null | ITESM/st_demo_4 | 2 | null | transformers | 26,230 | Entry not found |
ITESM/st_demo_5 | 4c6307639a94ea30dbf9c439d943e35fc01ecfa3 | 2022-06-05T04:55:46.000Z | [
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"sentence-transformers",
"sentence-similarity",
"license:apache-2.0"
] | sentence-similarity | false | ITESM | null | ITESM/st_demo_5 | 2 | null | sentence-transformers | 26,231 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as support from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
olpa/xml-roberta-base-finetuned-panx-fr | 96f748407d9620cc5ccf49c18711552ead8f49e4 | 2022-06-06T06:41:16.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | olpa | null | olpa/xml-roberta-base-finetuned-panx-fr | 2 | null | transformers | 26,232 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xml-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8393729984830608
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xml-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2691
- F1: 0.8394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 191 | 0.3150 | 0.7993 |
| No log | 2.0 | 382 | 0.2799 | 0.8213 |
| No log | 3.0 | 573 | 0.2691 | 0.8394 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Yehor/wav2vec2-xls-r-300m-uk | 99c1798441efcec65f013794ce9ffb46227521b7 | 2022-07-30T07:01:10.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers",
"license:apache-2.0"
] | null | false | Yehor | null | Yehor/wav2vec2-xls-r-300m-uk | 2 | null | transformers | 26,233 | ---
license: apache-2.0
---
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This is a pre-trained Ukrainian wav2vec2 XLS-R model with 300M parameters (the dataset comprises 323 hours of speech sourced from **broadcast** programs).
Steps: 400,000
The model is not intended for inference on its own; it is meant to be fine-tuned on your own labeled dataset.
The model was trained from [the wav2vec2 XLS-R model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) with 300m parameters.
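A minimal sketch of loading this checkpoint for CTC fine-tuning (the processor path and the configuration overrides are illustrative assumptions; the vocabulary must come from your own labeled dataset):
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical processor built from the vocabulary of your own labeled Ukrainian data.
processor = Wav2Vec2Processor.from_pretrained("path/to/your-processor")

model = Wav2Vec2ForCTC.from_pretrained(
    "Yehor/wav2vec2-xls-r-300m-uk",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_encoder()  # commonly done when fine-tuning wav2vec2 encoders
```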
|
roshnir/xlmr-finetuned-mlqa-dev-es-hi | b02cd562efbed31666a05257089a96eb560f8167 | 2022-06-05T12:49:38.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | roshnir | null | roshnir/xlmr-finetuned-mlqa-dev-es-hi | 2 | null | transformers | 26,234 | Entry not found |
sayanmandal/t5-small_6_3-en-hi_en_bt | 1cbcac0317c38661df9676a9d72be055b737857a | 2022-06-06T09:44:30.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | translation | false | sayanmandal | null | sayanmandal/t5-small_6_3-en-hi_en_bt | 2 | null | transformers | 26,235 | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small_6_3-en-hi_en_bt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_6_3-en-hi_en_bt
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9293
- Bleu: 8.9676
- Gen Len: 33.391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 3.7929 | 1.0 | 526 | 2.6759 | 1.5672 | 16.749 |
| 3.1151 | 2.0 | 1052 | 2.3843 | 2.2962 | 16.5287 |
| 2.8701 | 3.0 | 1578 | 2.2287 | 2.8811 | 16.4953 |
| 2.7121 | 4.0 | 2104 | 2.1302 | 3.3949 | 16.5247 |
| 2.5844 | 5.0 | 2630 | 2.0593 | 3.8161 | 16.4513 |
| 2.4917 | 6.0 | 3156 | 2.0063 | 3.9831 | 16.4272 |
| 2.4067 | 7.0 | 3682 | 1.9733 | 4.0511 | 16.3378 |
| 2.3395 | 8.0 | 4208 | 1.9399 | 4.3067 | 16.4112 |
| 2.2713 | 9.0 | 4734 | 1.9148 | 4.3195 | 16.3618 |
| 2.2217 | 10.0 | 5260 | 1.8961 | 4.3905 | 16.4112 |
| 2.1659 | 11.0 | 5786 | 1.8787 | 4.4548 | 16.3298 |
| 2.1267 | 12.0 | 6312 | 1.8651 | 4.5779 | 16.3618 |
| 2.0793 | 13.0 | 6838 | 1.8540 | 4.4863 | 16.2603 |
| 2.0473 | 14.0 | 7364 | 1.8444 | 4.556 | 16.3044 |
| 2.0082 | 15.0 | 7890 | 1.8353 | 4.5957 | 16.3124 |
| 1.9748 | 16.0 | 8416 | 1.8313 | 4.5593 | 16.3204 |
| 1.9456 | 17.0 | 8942 | 1.8259 | 4.4522 | 16.2764 |
| 1.9177 | 18.0 | 9468 | 1.8231 | 4.3345 | 16.3084 |
| 1.8871 | 19.0 | 9994 | 1.8177 | 4.48 | 16.3458 |
| 1.8422 | 20.0 | 10520 | 1.8123 | 4.5078 | 16.287 |
| 1.8161 | 21.0 | 11046 | 1.8106 | 4.3289 | 16.3405 |
| 1.7972 | 22.0 | 11572 | 1.8106 | 4.5204 | 16.3244 |
| 1.7785 | 23.0 | 12098 | 1.8117 | 4.4651 | 16.3605 |
| 1.7563 | 24.0 | 12624 | 1.8125 | 4.3938 | 16.3538 |
| 1.7444 | 25.0 | 13150 | 1.8089 | 4.5367 | 16.3792 |
| 1.7256 | 26.0 | 13676 | 1.8075 | 4.4212 | 16.3925 |
| 1.7021 | 27.0 | 14202 | 1.8080 | 4.5491 | 16.3992 |
| 1.6969 | 28.0 | 14728 | 1.8061 | 4.6568 | 16.3645 |
| 1.6766 | 29.0 | 15254 | 1.8063 | 4.6297 | 16.3738 |
| 1.6653 | 30.0 | 15780 | 1.8095 | 4.6167 | 16.2977 |
| 1.6543 | 31.0 | 16306 | 1.8085 | 4.5452 | 16.3538 |
| 1.6413 | 32.0 | 16832 | 1.8112 | 4.6667 | 16.3351 |
| 1.6293 | 33.0 | 17358 | 1.8126 | 4.6127 | 16.3351 |
| 1.6204 | 34.0 | 17884 | 1.8115 | 4.7196 | 16.3111 |
| 1.6082 | 35.0 | 18410 | 1.8134 | 4.7011 | 16.3324 |
| 1.6048 | 36.0 | 18936 | 1.8122 | 4.6429 | 16.2964 |
| 1.5911 | 37.0 | 19462 | 1.8143 | 4.6424 | 16.3124 |
| 1.5834 | 38.0 | 19988 | 1.8131 | 4.6254 | 16.3164 |
| 1.5742 | 39.0 | 20514 | 1.8154 | 4.6998 | 16.287 |
| 1.5623 | 40.0 | 21040 | 1.8147 | 4.6469 | 16.3471 |
| 1.5599 | 41.0 | 21566 | 1.8185 | 4.6654 | 16.3231 |
| 1.5516 | 42.0 | 22092 | 1.8173 | 4.6961 | 16.3471 |
| 1.5441 | 43.0 | 22618 | 1.8180 | 4.7176 | 16.3084 |
| 1.545 | 44.0 | 23144 | 1.8177 | 4.5571 | 16.275 |
| 1.5418 | 45.0 | 23670 | 1.8195 | 4.5927 | 16.3097 |
| 1.5329 | 46.0 | 24196 | 1.8187 | 4.7025 | 16.2724 |
| 1.5348 | 47.0 | 24722 | 1.8198 | 4.6575 | 16.3057 |
| 1.5362 | 48.0 | 25248 | 1.8197 | 4.6912 | 16.2991 |
| 1.5231 | 49.0 | 25774 | 1.8202 | 4.6752 | 16.2951 |
| 1.5314 | 50.0 | 26300 | 1.8208 | 4.6114 | 16.2937 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
RayY/pegasus-samsum | 83f9c43329a681e55c5b450f7b827350f55fda5f | 2022-06-06T01:12:40.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | RayY | null | RayY/pegasus-samsum | 2 | null | transformers | 26,236 | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Chetan1997/layoutlmv2-finetuned-funsd-test | f3f17aa6b98f4671135ff60f9270c6b6618288c9 | 2022-06-06T03:20:00.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Chetan1997 | null | Chetan1997/layoutlmv2-finetuned-funsd-test | 2 | null | transformers | 26,237 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-funsd-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 2.2.2
- Tokenizers 0.12.1
|
eunjin/kobart_gyeongsang_translator | bdc3882754239d780e038b002dace309eb6b07f1 | 2022-06-06T13:45:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eunjin | null | eunjin/kobart_gyeongsang_translator | 2 | null | transformers | 26,238 | Korean Dialect Translator: Standard > Gyeongsang
- Used Data: AI Hub 한국어 방언 발화(경상도) (Korean dialect utterances, Gyeongsang-do)
- Used Model: SKT-KoBART
  - https://github.com/SKT-AI/KoBART
- Reference Code
  - https://github.com/seujung/KoBART-translation
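A minimal usage sketch (it assumes the repository hosts both the tokenizer and model files; the input sentence and decoding settings are illustrative):
```python
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained("eunjin/kobart_gyeongsang_translator")
model = BartForConditionalGeneration.from_pretrained("eunjin/kobart_gyeongsang_translator")

text = "안녕하세요, 오늘 날씨가 정말 좋네요."  # standard Korean example sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```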
|
Nawaphong-zax/wangchanberta-base-att-spm-uncased-finetuned-cosme | 3e52ef47666694b85874cc6fe4fedc1e1514b616 | 2022-06-06T08:52:29.000Z | [
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Nawaphong-zax | null | Nawaphong-zax/wangchanberta-base-att-spm-uncased-finetuned-cosme | 2 | null | transformers | 26,239 | ---
tags:
- generated_from_trainer
model-index:
- name: wangchanberta-base-att-spm-uncased-finetuned-cosme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-base-att-spm-uncased-finetuned-cosme
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1386 | 1.0 | 391 | 1.9939 |
| 2.1301 | 2.0 | 782 | 1.9974 |
| 2.1296 | 3.0 | 1173 | 2.0013 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
imamnurby/rob2rand_merged_w_prefix_c_fc_field | 65b09c495b7251c1d1ebbe5a661ba7ad262d838c | 2022-06-06T09:40:39.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | imamnurby | null | imamnurby/rob2rand_merged_w_prefix_c_fc_field | 2 | null | transformers | 26,240 | ---
tags:
- generated_from_trainer
model-index:
- name: rob2rand_merged_w_prefix_c_fc_field
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rob2rand_merged_w_prefix_c_fc_field
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.7.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear | b2e65400fd5846fee27e0c7c821633ae1306200f | 2022-06-06T12:57:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mmillet | null | mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear | 2 | null | transformers | 26,241 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4049
- Accuracy: 0.8779
- F1: 0.8775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3097 | 1.0 | 69 | 1.1369 | 0.6628 | 0.6210 |
| 0.949 | 2.0 | 138 | 0.7114 | 0.8225 | 0.8202 |
| 0.6288 | 3.0 | 207 | 0.5147 | 0.8507 | 0.8494 |
| 0.4724 | 4.0 | 276 | 0.4424 | 0.8643 | 0.8634 |
| 0.3912 | 5.0 | 345 | 0.4149 | 0.8653 | 0.8645 |
| 0.3283 | 6.0 | 414 | 0.3982 | 0.8664 | 0.8656 |
| 0.3015 | 7.0 | 483 | 0.3958 | 0.8685 | 0.8676 |
| 0.269 | 8.0 | 552 | 0.3888 | 0.8716 | 0.8712 |
| 0.2366 | 9.0 | 621 | 0.3909 | 0.8747 | 0.8742 |
| 0.2241 | 10.0 | 690 | 0.3991 | 0.8716 | 0.8707 |
| 0.1972 | 11.0 | 759 | 0.3984 | 0.8727 | 0.8720 |
| 0.1765 | 12.0 | 828 | 0.3940 | 0.8758 | 0.8753 |
| 0.1582 | 13.0 | 897 | 0.4049 | 0.8779 | 0.8775 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
galbraun/distilbert-base-uncased-finetuned-cola | 1bd7b5b7ca7c2a8a2d82917b01a0c2b2dd4c2b39 | 2022-06-06T14:20:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | galbraun | null | galbraun/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 26,242 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5517964161621091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5277
- Matthews Correlation: 0.5518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5370 | 0.4246 |
| 0.3496 | 2.0 | 1070 | 0.5143 | 0.4892 |
| 0.2378 | 3.0 | 1605 | 0.5277 | 0.5518 |
| 0.1761 | 4.0 | 2140 | 0.7462 | 0.5303 |
| 0.1251 | 5.0 | 2675 | 0.7959 | 0.5414 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
asahi417/lmqg-mt5-small-esquad | 48f4291669cb2693b5da369ddd27125e3029fa5e | 2022-06-08T22:41:05.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mt5-small-esquad | 2 | null | transformers | 26,243 | Entry not found |
garutyunov/meme-bert | 263635824a06204278f83ff2ce00a8c6d15e9140 | 2022-06-06T16:26:56.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"meme classification",
"license:mit"
] | text-classification | false | garutyunov | null | garutyunov/meme-bert | 2 | null | pytorch | 26,244 | ---
language:
- en
license: mit
library_name: pytorch
tags:
- meme classification
metrics:
- accuracy
---
# MemeBERT
BERT model fine-tuned with the [Memes dataset](https://github.com/mrsndmn/memes-dataset).
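A minimal usage sketch (it assumes the checkpoint loads with the Transformers text-classification pipeline; the input caption is a hypothetical example, and labels come from the repository's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="garutyunov/meme-bert")
print(classifier("When you finally fix the bug at 3 a.m."))
```
|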
mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis | a868fefc01444c3241950620c0409e204ce33096 | 2022-06-06T20:28:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mmillet | null | mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis | 2 | null | transformers | 26,245 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5820
- Accuracy: 0.7881
- F1: 0.7886
- Precision: 0.7906
- Recall: 0.7881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0996 | 1.0 | 69 | 1.0013 | 0.6879 | 0.6779 | 0.7070 | 0.6879 |
| 0.9524 | 2.0 | 138 | 0.8651 | 0.7265 | 0.7245 | 0.7322 | 0.7265 |
| 0.8345 | 3.0 | 207 | 0.7821 | 0.7422 | 0.7413 | 0.7445 | 0.7422 |
| 0.7573 | 4.0 | 276 | 0.7222 | 0.7484 | 0.7473 | 0.7482 | 0.7484 |
| 0.6923 | 5.0 | 345 | 0.6828 | 0.7568 | 0.7562 | 0.7562 | 0.7568 |
| 0.6412 | 6.0 | 414 | 0.6531 | 0.7568 | 0.7559 | 0.7556 | 0.7568 |
| 0.5982 | 7.0 | 483 | 0.6320 | 0.7610 | 0.7601 | 0.7597 | 0.7610 |
| 0.5593 | 8.0 | 552 | 0.6133 | 0.7651 | 0.7655 | 0.7664 | 0.7651 |
| 0.5183 | 9.0 | 621 | 0.6036 | 0.7714 | 0.7708 | 0.7709 | 0.7714 |
| 0.5042 | 10.0 | 690 | 0.5951 | 0.7756 | 0.7755 | 0.7760 | 0.7756 |
| 0.483 | 11.0 | 759 | 0.5878 | 0.7766 | 0.7768 | 0.7774 | 0.7766 |
| 0.4531 | 12.0 | 828 | 0.5855 | 0.7850 | 0.7841 | 0.7839 | 0.7850 |
| 0.4386 | 13.0 | 897 | 0.5828 | 0.7797 | 0.7790 | 0.7786 | 0.7797 |
| 0.4238 | 14.0 | 966 | 0.5788 | 0.7777 | 0.7780 | 0.7786 | 0.7777 |
| 0.4018 | 15.0 | 1035 | 0.5793 | 0.7839 | 0.7842 | 0.7855 | 0.7839 |
| 0.3998 | 16.0 | 1104 | 0.5801 | 0.7850 | 0.7844 | 0.7841 | 0.7850 |
| 0.3747 | 17.0 | 1173 | 0.5791 | 0.7839 | 0.7836 | 0.7833 | 0.7839 |
| 0.3595 | 18.0 | 1242 | 0.5799 | 0.7891 | 0.7891 | 0.7894 | 0.7891 |
| 0.3575 | 19.0 | 1311 | 0.5820 | 0.7881 | 0.7886 | 0.7906 | 0.7881 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tclong/wav2vec2-base-vios-v4 | 7c68ef810493a8328d6df350381fc60cb934ed80 | 2022-06-18T16:59:17.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:vivos_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tclong | null | tclong/wav2vec2-base-vios-v4 | 2 | null | transformers | 26,246 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- vivos_dataset
model-index:
- name: wav2vec2-base-vios-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-v4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the vivos_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3198
- Wer: 0.2169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 7.8138 | 0.69 | 500 | 3.5011 | 1.0 |
| 3.4372 | 1.37 | 1000 | 3.3447 | 1.0 |
| 1.9519 | 2.06 | 1500 | 0.8356 | 0.5944 |
| 0.8581 | 2.74 | 2000 | 0.5280 | 0.4038 |
| 0.6405 | 3.43 | 2500 | 0.4410 | 0.3410 |
| 0.5417 | 4.12 | 3000 | 0.3990 | 0.3140 |
| 0.4804 | 4.8 | 3500 | 0.3804 | 0.2973 |
| 0.4384 | 5.49 | 4000 | 0.3644 | 0.2808 |
| 0.4162 | 6.17 | 4500 | 0.3542 | 0.2648 |
| 0.3941 | 6.86 | 5000 | 0.3436 | 0.2529 |
| 0.3733 | 7.54 | 5500 | 0.3355 | 0.2520 |
| 0.3564 | 8.23 | 6000 | 0.3294 | 0.2415 |
| 0.3412 | 8.92 | 6500 | 0.3311 | 0.2332 |
| 0.3266 | 9.6 | 7000 | 0.3217 | 0.2325 |
| 0.3226 | 10.29 | 7500 | 0.3317 | 0.2303 |
| 0.3115 | 10.97 | 8000 | 0.3226 | 0.2279 |
| 0.3094 | 11.66 | 8500 | 0.3157 | 0.2236 |
| 0.2967 | 12.35 | 9000 | 0.3109 | 0.2202 |
| 0.2995 | 13.03 | 9500 | 0.3129 | 0.2156 |
| 0.2895 | 13.72 | 10000 | 0.3195 | 0.2146 |
| 0.3089 | 14.4 | 10500 | 0.3198 | 0.2169 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
BigSalmon/InformalToFormalLincoln50 | d4b0d916e766d271f4cfe77502b2c636380e8c6d | 2022-06-07T01:13:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln50 | 2 | null | transformers | 26,247 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln50")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln50")
```
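A minimal generation sketch (the prompt follows the "informal english → Translated into the Style of Abraham Lincoln" format shown below; decoding settings are illustrative, not the author's recommended values):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln50")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln50")

# Prompt format mirrors the examples documented below.
prompt = (
    "informal english: i am very ready to do that just that.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```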
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
``` |
muchad/idt5-base | 87e68f8dbe0a73a853ed3173e26e0268898a8aef | 2022-06-07T06:16:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | muchad | null | muchad/idt5-base | 2 | null | transformers | 26,248 | Entry not found |
spy24/autotrain-expand-parrot-956131825 | c06f5f08550f43bc799a67830d596823c105903f | 2022-06-07T09:11:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autotrain-data-expand-parrot",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | spy24 | null | spy24/autotrain-expand-parrot-956131825 | 2 | null | transformers | 26,249 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- spy24/autotrain-data-expand-parrot
co2_eq_emissions: 0.647019768976749
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 956131825
- CO2 Emissions (in grams): 0.647019768976749
## Validation Metrics
- Loss: 2.330639123916626
- Rouge1: 53.3589
- Rouge2: 40.4273
- RougeL: 48.4928
- RougeLsum: 49.4952
- Gen Len: 18.8741
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/spy24/autotrain-expand-parrot-956131825
``` |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_epoch10 | 7e20e6e54118e0ab8a8b2caac3def7073abd8970 | 2022-06-07T11:09:03.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_epoch10 | 2 | null | transformers | 26,250 | Entry not found |
erickfm/t5-small-finetuned-bias-sweep-b223c64d | 691ba68806783547556e8d01ce5e88d6f232bfde | 2022-06-07T10:57:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-small-finetuned-bias-sweep-b223c64d | 2 | null | transformers | 26,251 | Entry not found |
marieke93/MiniLM-evidence-types | afdd48fd4eb5d666bdd2d9d34e027bb1405d0b46 | 2022-06-11T13:32:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | marieke93 | null | marieke93/MiniLM-evidence-types | 2 | null | transformers | 26,252 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MiniLM-evidence-types
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-evidence-types
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the evidence types dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8672
- Macro f1: 0.3726
- Weighted f1: 0.7030
- Accuracy: 0.7161
- Balanced accuracy: 0.3616
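A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` text-classification pipeline and that the evidence-type label names are stored in the saved config:
```python
from transformers import pipeline
# Load the fine-tuned checkpoint; label names come from the config's id2label mapping.
classifier = pipeline("text-classification", model="marieke93/MiniLM-evidence-types")
# Score a sample sentence; pass return_all_scores=True to see every evidence type.
print(classifier("A 2019 survey of 2,000 students found that grades improved after the change."))
```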
## Training and evaluation data
The dataset, as well as the code that was used to fine-tune this model, can be found in the GitHub repository [BA-Thesis-Information-Science-Persuasion-Strategies](https://github.com/mariekevdh/BA-Thesis-Information-Science-Persuasion-Strategies)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro f1 | Weighted f1 | Accuracy | Balanced accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:-----------------:|
| 1.4106 | 1.0 | 250 | 1.2698 | 0.1966 | 0.6084 | 0.6735 | 0.2195 |
| 1.1437 | 2.0 | 500 | 1.0985 | 0.3484 | 0.6914 | 0.7116 | 0.3536 |
| 0.9714 | 3.0 | 750 | 1.0901 | 0.2606 | 0.6413 | 0.6446 | 0.2932 |
| 0.8382 | 4.0 | 1000 | 1.0197 | 0.2764 | 0.7024 | 0.7237 | 0.2783 |
| 0.7192 | 5.0 | 1250 | 1.0895 | 0.2847 | 0.6824 | 0.6963 | 0.2915 |
| 0.6249 | 6.0 | 1500 | 1.1296 | 0.3487 | 0.6888 | 0.6948 | 0.3377 |
| 0.5336 | 7.0 | 1750 | 1.1515 | 0.3591 | 0.6982 | 0.7024 | 0.3496 |
| 0.4694 | 8.0 | 2000 | 1.1962 | 0.3626 | 0.7185 | 0.7314 | 0.3415 |
| 0.4058 | 9.0 | 2250 | 1.3313 | 0.3121 | 0.6920 | 0.7085 | 0.3033 |
| 0.3746 | 10.0 | 2500 | 1.3993 | 0.3628 | 0.6976 | 0.7047 | 0.3495 |
| 0.3267 | 11.0 | 2750 | 1.5078 | 0.3560 | 0.6958 | 0.7055 | 0.3464 |
| 0.2939 | 12.0 | 3000 | 1.5875 | 0.3685 | 0.6968 | 0.7062 | 0.3514 |
| 0.2677 | 13.0 | 3250 | 1.6470 | 0.3606 | 0.6976 | 0.7070 | 0.3490 |
| 0.2425 | 14.0 | 3500 | 1.7164 | 0.3714 | 0.7069 | 0.7207 | 0.3551 |
| 0.2301 | 15.0 | 3750 | 1.8151 | 0.3597 | 0.6975 | 0.7123 | 0.3466 |
| 0.2268 | 16.0 | 4000 | 1.7838 | 0.3940 | 0.7034 | 0.7123 | 0.3869 |
| 0.201 | 17.0 | 4250 | 1.8328 | 0.3725 | 0.6964 | 0.7062 | 0.3704 |
| 0.1923 | 18.0 | 4500 | 1.8788 | 0.3708 | 0.7019 | 0.7154 | 0.3591 |
| 0.1795 | 19.0 | 4750 | 1.8574 | 0.3752 | 0.7031 | 0.7161 | 0.3619 |
| 0.1713 | 20.0 | 5000 | 1.8672 | 0.3726 | 0.7030 | 0.7161 | 0.3616 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mmillet/rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear | b406e0e83703be73bc4e67ae4c2b41fb5e747a4d | 2022-06-07T15:52:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mmillet | null | mmillet/rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear | 2 | null | transformers | 26,253 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3902
- Accuracy: 0.8727
- F1: 0.8720
- Precision: 0.8718
- Recall: 0.8727
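A minimal usage sketch, assuming the standard sequence-classification head saved by the Trainer and the label mapping stored in the config:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_id = "mmillet/rubert-tiny2_best_finetuned_emotion_experiment_augmented_anger_fear"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
# Tokenize a sample Russian sentence and softmax the emotion logits.
inputs = tokenizer("Мне сегодня очень грустно", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
# id2label reflects whatever label set was used during fine-tuning.
for idx, p in sorted(enumerate(probs), key=lambda x: float(x[1]), reverse=True):
    print(model.config.id2label[idx], round(float(p), 3))
```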
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=0.0001
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.3497 | 1.0 | 69 | 1.2944 | 0.5376 | 0.4665 | 0.6374 | 0.5376 |
| 1.2023 | 2.0 | 138 | 1.0370 | 0.7056 | 0.6745 | 0.7458 | 0.7056 |
| 0.9289 | 3.0 | 207 | 0.7437 | 0.8121 | 0.8082 | 0.8117 | 0.8121 |
| 0.6932 | 4.0 | 276 | 0.5717 | 0.8445 | 0.8428 | 0.8434 | 0.8445 |
| 0.5613 | 5.0 | 345 | 0.4888 | 0.8580 | 0.8572 | 0.8573 | 0.8580 |
| 0.469 | 6.0 | 414 | 0.4401 | 0.8633 | 0.8625 | 0.8623 | 0.8633 |
| 0.4176 | 7.0 | 483 | 0.4156 | 0.8653 | 0.8646 | 0.8644 | 0.8653 |
| 0.3724 | 8.0 | 552 | 0.4001 | 0.8706 | 0.8700 | 0.8699 | 0.8706 |
| 0.3427 | 9.0 | 621 | 0.3972 | 0.8706 | 0.8698 | 0.8701 | 0.8706 |
| 0.3243 | 10.0 | 690 | 0.3898 | 0.8737 | 0.8729 | 0.8736 | 0.8737 |
| 0.3039 | 11.0 | 759 | 0.3887 | 0.8716 | 0.8710 | 0.8717 | 0.8716 |
| 0.2803 | 12.0 | 828 | 0.3841 | 0.8716 | 0.8709 | 0.8709 | 0.8716 |
| 0.264 | 13.0 | 897 | 0.3872 | 0.8758 | 0.8753 | 0.8758 | 0.8758 |
| 0.2607 | 14.0 | 966 | 0.3837 | 0.8747 | 0.8743 | 0.8741 | 0.8747 |
| 0.2437 | 15.0 | 1035 | 0.3893 | 0.8716 | 0.8710 | 0.8712 | 0.8716 |
| 0.2358 | 16.0 | 1104 | 0.3867 | 0.8695 | 0.8691 | 0.8690 | 0.8695 |
| 0.2278 | 17.0 | 1173 | 0.3886 | 0.8737 | 0.8732 | 0.8732 | 0.8737 |
| 0.2143 | 18.0 | 1242 | 0.3902 | 0.8727 | 0.8720 | 0.8718 | 0.8727 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Anjoe/kant-gpt2 | 2d7b7127ab1bae90d6daa89e22f24269fc366cc1 | 2022-06-08T22:08:06.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Anjoe | null | Anjoe/kant-gpt2 | 2 | null | transformers | 26,254 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kant-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kant-gpt2
This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8022
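A short generation sketch, assuming the checkpoint works with the standard text-generation pipeline; the prompt and sampling settings below are only illustrative:
```python
from transformers import pipeline
# German GPT-2 fine-tuned on Kant: complete a philosophical prompt.
generator = pipeline("text-generation", model="Anjoe/kant-gpt2")
out = generator("Die Vernunft ist", max_length=60, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```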
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 22
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3257 | 1.0 | 1825 | 3.2231 |
| 2.9885 | 2.0 | 3650 | 3.0069 |
| 2.7955 | 3.0 | 5475 | 2.8440 |
| 2.5748 | 4.0 | 7300 | 2.7059 |
| 2.3545 | 5.0 | 9125 | 2.5806 |
| 2.1759 | 6.0 | 10950 | 2.4618 |
| 1.9697 | 7.0 | 12775 | 2.3553 |
| 1.7778 | 8.0 | 14600 | 2.2517 |
| 1.6192 | 9.0 | 16425 | 2.1599 |
| 1.4675 | 10.0 | 18250 | 2.0895 |
| 1.3195 | 11.0 | 20075 | 2.0138 |
| 1.2012 | 12.0 | 21900 | 1.9602 |
| 1.0828 | 13.0 | 23725 | 1.9097 |
| 0.9926 | 14.0 | 25550 | 1.8720 |
| 0.9076 | 15.0 | 27375 | 1.8426 |
| 0.8336 | 16.0 | 29200 | 1.8214 |
| 0.7649 | 17.0 | 31025 | 1.8058 |
| 0.7208 | 18.0 | 32850 | 1.7980 |
| 0.6798 | 19.0 | 34675 | 1.7938 |
| 0.647 | 20.0 | 36500 | 1.7969 |
| 0.6226 | 21.0 | 38325 | 1.7975 |
| 0.601 | 22.0 | 40150 | 1.8022 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
simecek/DNADeberta | bffad793529043b72d494b25cac5710bb5ae18e7 | 2022-06-09T20:57:28.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/DNADeberta | 2 | null | transformers | 26,255 | Entry not found |
zdreiosis/ff_analysis_3 | c088c1e467bf9f5ccba6c44625f087982d6486c2 | 2022-06-08T10:48:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"6th",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | zdreiosis | null | zdreiosis/ff_analysis_3 | 2 | null | transformers | 26,256 | ---
license: apache-2.0
tags:
- 6th
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ff_analysis_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ff_analysis_3
This model is a fine-tuned version of [zdreiosis/ff_analysis_2](https://huggingface.co/zdreiosis/ff_analysis_2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0060
- F1: 1.0
- Roc Auc: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.02 | 50 | 0.0138 | 1.0 | 1.0 | 1.0 |
| No log | 2.04 | 100 | 0.0132 | 0.9966 | 0.9966 | 0.9885 |
| No log | 3.06 | 150 | 0.0097 | 1.0 | 1.0 | 1.0 |
| No log | 4.08 | 200 | 0.0095 | 0.9966 | 0.9966 | 0.9885 |
| No log | 5.1 | 250 | 0.0096 | 1.0 | 1.0 | 1.0 |
| No log | 6.12 | 300 | 0.0079 | 1.0 | 1.0 | 1.0 |
| No log | 7.14 | 350 | 0.0070 | 1.0 | 1.0 | 1.0 |
| No log | 8.16 | 400 | 0.0069 | 1.0 | 1.0 | 1.0 |
| No log | 9.18 | 450 | 0.0065 | 1.0 | 1.0 | 1.0 |
| 0.012 | 10.2 | 500 | 0.0060 | 1.0 | 1.0 | 1.0 |
| 0.012 | 11.22 | 550 | 0.0060 | 0.9966 | 0.9966 | 0.9885 |
| 0.012 | 12.24 | 600 | 0.0054 | 1.0 | 1.0 | 1.0 |
| 0.012 | 13.27 | 650 | 0.0049 | 1.0 | 1.0 | 1.0 |
| 0.012 | 14.29 | 700 | 0.0048 | 1.0 | 1.0 | 1.0 |
| 0.012 | 15.31 | 750 | 0.0046 | 1.0 | 1.0 | 1.0 |
| 0.012 | 16.33 | 800 | 0.0042 | 1.0 | 1.0 | 1.0 |
| 0.012 | 17.35 | 850 | 0.0042 | 1.0 | 1.0 | 1.0 |
| 0.012 | 18.37 | 900 | 0.0040 | 1.0 | 1.0 | 1.0 |
| 0.012 | 19.39 | 950 | 0.0040 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 20.41 | 1000 | 0.0038 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 21.43 | 1050 | 0.0037 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 22.45 | 1100 | 0.0039 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 23.47 | 1150 | 0.0038 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 24.49 | 1200 | 0.0035 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 25.51 | 1250 | 0.0037 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 26.53 | 1300 | 0.0034 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 27.55 | 1350 | 0.0035 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 28.57 | 1400 | 0.0034 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 29.59 | 1450 | 0.0035 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3
|
bubblecookie/t5-small-finetuned-cnndm-samsum | 6940ab472dc797a0dbeab48e2e13a3c7632ce003 | 2022-06-09T12:40:46.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | bubblecookie | null | bubblecookie/t5-small-finetuned-cnndm-samsum | 2 | null | transformers | 26,257 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.5996
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm-samsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6422
- Rouge1: 24.5996
- Rouge2: 11.817
- Rougel: 20.3346
- Rougelsum: 23.2155
- Gen Len: 18.9999
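A minimal summarization sketch, assuming the checkpoint is used through the standard `summarization` pipeline; the sample article and length limits are placeholders:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="bubblecookie/t5-small-finetuned-cnndm-samsum")
article = (
    "The city council met on Tuesday to debate the new transit plan. "
    "After three hours of discussion, members voted to fund two new bus lines "
    "and to study congestion pricing for the downtown core."
)
# max_length/min_length constrain the generated summary, not the input size.
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```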
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.8078 | 1.0 | 71779 | 1.6422 | 24.5996 | 11.817 | 20.3346 | 23.2155 | 18.9999 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
khantanveera/TK | 941dacac11c63a24b4b5d77f3ee9dae08833ff6e | 2022-06-08T14:09:45.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | khantanveera | null | khantanveera/TK | 2 | null | transformers | 26,258 | Entry not found |
Sohaibsyed/wav2vec2-large-xls-r-300m-turkish-colab | 20dda811bb0026fb493fe2db8af88f3eb8219748 | 2022-06-08T20:48:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Sohaibsyed | null | Sohaibsyed/wav2vec2-large-xls-r-300m-turkish-colab | 2 | null | transformers | 26,259 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3717
- Wer: 0.2972
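A minimal transcription sketch, assuming the checkpoint is loaded through the automatic-speech-recognition pipeline; the file name is a placeholder and should point to a 16 kHz mono recording that the local audio backend can decode:
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="Sohaibsyed/wav2vec2-large-xls-r-300m-turkish-colab")
# "sample_turkish_clip.wav" is a placeholder path, not a file shipped with the model.
print(asr("sample_turkish_clip.wav")["text"])
```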
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0139 | 3.67 | 400 | 0.7020 | 0.7112 |
| 0.4129 | 7.34 | 800 | 0.4162 | 0.4503 |
| 0.1869 | 11.01 | 1200 | 0.4174 | 0.3959 |
| 0.1273 | 14.68 | 1600 | 0.4020 | 0.3695 |
| 0.0959 | 18.35 | 2000 | 0.4026 | 0.3545 |
| 0.0771 | 22.02 | 2400 | 0.3904 | 0.3361 |
| 0.0614 | 25.69 | 2800 | 0.3736 | 0.3127 |
| 0.0486 | 29.36 | 3200 | 0.3717 | 0.2972 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
victorlee071200/distilroberta-base-finetuned-squad_v2 | 279800257cede933f37b4f8134d44309e391de8f | 2022-06-09T07:55:41.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | victorlee071200 | null | victorlee071200/distilroberta-base-finetuned-squad_v2 | 2 | null | transformers | 26,260 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilroberta-base-finetuned-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-squad_v2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1230
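A minimal question-answering sketch, assuming the standard extractive-QA pipeline; `handle_impossible_answer` matters here because SQuAD v2 includes unanswerable questions:
```python
from transformers import pipeline
qa = pipeline("question-answering", model="victorlee071200/distilroberta-base-finetuned-squad_v2")
result = qa(
    question="How many epochs was the model trained for?",
    context="The distilroberta-base checkpoint was fine-tuned on SQuAD v2 for three epochs.",
    handle_impossible_answer=True,  # allow an empty answer for v2-style questions
)
print(result["answer"], result["score"])
```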
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1061 | 1.0 | 8239 | 1.0501 |
| 0.8862 | 2.0 | 16478 | 1.0564 |
| 0.7547 | 3.0 | 24717 | 1.1230 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.7_topk30_epoch3 | 18a306b0ef9633dea630190ff517341cf03bba0c | 2022-06-08T19:44:00.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.7_topk30_epoch3 | 2 | null | transformers | 26,261 | Entry not found |
PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.7_topk20_epoch3 | c1946a3427c2c1d0ef4785aac87c10ce54117ae1 | 2022-06-08T21:34:56.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/samsum_reverse_train_distilbart_xsum_9-6_min10max2000_topp0.7_topk20_epoch3 | 2 | null | transformers | 26,262 | Entry not found |
DancingIguana/codeparrot-ds | 6e81a126e31bf7d2b6298de066641f13233d1d7d | 2022-06-11T16:58:04.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | DancingIguana | null | DancingIguana/codeparrot-ds | 2 | null | transformers | 26,263 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/beepunz | c6e41ced1bf2990d801e6783293b1b48f3148a02 | 2022-06-08T23:51:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/beepunz | 2 | null | transformers | 26,264 | ---
language: en
thumbnail: http://www.huggingtweets.com/beepunz/1654732293963/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/942050096837005317/u5sbn8VY_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BeePunz</div>
<div style="text-align: center; font-size: 14px;">@beepunz</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BeePunz.
| Data | BeePunz |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 1775 |
| Short tweets | 336 |
| Tweets kept | 1107 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/84kgxhyn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beepunz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2analnwj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2analnwj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/beepunz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
84rry/84rry-xls-r-300M-AR | 2ddff905ed2c18bde01d75034e907b972bf0cb17 | 2022-06-12T20:54:28.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | 84rry | null | 84rry/84rry-xls-r-300M-AR | 2 | null | transformers | 26,265 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: 84rry-xls-r-300M-AR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 84rry-xls-r-300M-AR
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0647
- Wer: 0.5078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1428 | 9.01 | 1000 | 0.9233 | 0.7477 |
| 0.4941 | 18.02 | 2000 | 0.7661 | 0.5633 |
| 0.3609 | 27.03 | 3000 | 0.8757 | 0.5480 |
| 0.2395 | 36.04 | 4000 | 1.0097 | 0.5275 |
| 0.1671 | 45.04 | 5000 | 1.0647 | 0.5078 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Vlasta/humandna_bert_default_beautiful_bench_4197 | 431a254b08ff91921d062b58cac873bab2bb8b23 | 2022-06-09T02:32:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_bert_default_beautiful_bench_4197 | 2 | null | transformers | 26,266 | Entry not found |
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base | 7624294da881976ce0ce2a8002d8d53aa19aa297 | 2022-06-09T11:54:52.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nestoralvaro | null | nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base | 2 | null | transformers | 26,267 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-data_prep_2021_12_26___t404_2980.csv___topic_text_google_mt5_base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.8441
- Rouge2: 0.0894
- Rougel: 0.8428
- Rougelsum: 0.844
- Gen Len: 6.338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 89332 | nan | 0.8441 | 0.0894 | 0.8428 | 0.844 | 6.338 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
inhee/kcbert-large-finetuned-unsmile | ff120306625b239454f7a5f4de9a88d56b991f23 | 2022-06-09T07:37:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | inhee | null | inhee/kcbert-large-finetuned-unsmile | 2 | null | transformers | 26,268 | ---
tags:
- generated_from_trainer
model-index:
- name: kcbert-large-finetuned-unsmile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kcbert-large-finetuned-unsmile
This model is a fine-tuned version of [beomi/kcbert-large](https://huggingface.co/beomi/kcbert-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1240
- Lrap: 0.8816
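A minimal multi-label scoring sketch, assuming the checkpoint stores its label names in `id2label`; a per-label sigmoid is used because the card reports a label-ranking (LRAP) metric rather than single-label accuracy:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_id = "inhee/kcbert-large-finetuned-unsmile"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("예시 문장입니다", return_tensors="pt")  # placeholder Korean sentence
with torch.no_grad():
    scores = torch.sigmoid(model(**inputs).logits).squeeze()
for idx, score in enumerate(scores):
    print(model.config.id2label[idx], round(float(score), 3))
```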
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Lrap |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 58 | 0.2090 | 0.8098 |
| No log | 1.99 | 116 | 0.1386 | 0.8707 |
| No log | 2.99 | 174 | 0.1263 | 0.8795 |
| No log | 3.99 | 232 | 0.1232 | 0.8823 |
| No log | 4.99 | 290 | 0.1240 | 0.8816 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
ghadeermobasher/WLT-BioBERT-BC5CDR-Chemical | 27d7b7ba94a918ec3c5f0558208e4c1043b0a8e3 | 2022-06-09T11:39:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-BioBERT-BC5CDR-Chemical | 2 | null | transformers | 26,269 | Entry not found |
qualitydatalab/autotrain-car-review-project-966432120 | a7c1c578c75edba0be1bd27ae85b5a7d59a8b425 | 2022-06-09T12:36:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:qualitydatalab/autotrain-data-car-review-project",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | qualitydatalab | null | qualitydatalab/autotrain-car-review-project-966432120 | 2 | 1 | transformers | 26,270 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- qualitydatalab/autotrain-data-car-review-project
co2_eq_emissions: 0.061185706621337065
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 966432120
- CO2 Emissions (in grams): 0.061185706621337065
## Validation Metrics
- Loss: 0.6066656112670898
- Accuracy: 0.724822695035461
- Macro F1: 0.7077087000886584
- Micro F1: 0.7248226950354609
- Weighted F1: 0.7077087000886584
- Macro Precision: 0.7143184427227084
- Micro Precision: 0.724822695035461
- Weighted Precision: 0.7143184427227083
- Macro Recall: 0.7248226950354609
- Micro Recall: 0.724822695035461
- Weighted Recall: 0.724822695035461
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/qualitydatalab/autotrain-car-review-project-966432120
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("qualitydatalab/autotrain-car-review-project-966432120", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("qualitydatalab/autotrain-car-review-project-966432120", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
wvangils/CTRL-Beatles-Lyrics-finetuned-newlyrics | bd20714745a4466fca5e3c00ff992686521a5aee | 2022-06-17T11:21:11.000Z | [
"pytorch",
"tensorboard",
"ctrl",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | wvangils | null | wvangils/CTRL-Beatles-Lyrics-finetuned-newlyrics | 2 | null | transformers | 26,271 | ---
tags:
- generated_from_trainer
model-index:
- name: CTRL-Beatles-Lyrics-finetuned-newlyrics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CTRL-Beatles-Lyrics-finetuned-newlyrics
This model is a fine-tuned version of [sshleifer/tiny-ctrl](https://huggingface.co/sshleifer/tiny-ctrl) on the [Cmotions - Beatles lyrics](https://huggingface.co/datasets/cmotions/Beatles_lyrics) dataset. It will complete an input prompt with Beatles-like text.
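A short generation sketch, assuming the checkpoint (built on a tiny test-sized CTRL base) is used through the text-generation pipeline; output quality will reflect the very small base model:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="wvangils/CTRL-Beatles-Lyrics-finetuned-newlyrics")
print(generator("Yesterday, all my", max_length=40, do_sample=True)[0]["generated_text"])
```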
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 12.361 | 1.0 | 35 | 12.3685 |
| 12.3529 | 2.0 | 70 | 12.3583 |
| 12.3374 | 3.0 | 105 | 12.3401 |
| 12.3158 | 4.0 | 140 | 12.3237 |
| 12.301 | 5.0 | 175 | 12.3180 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Vlasta/humandna_DISTILBERT_random | d8161a40ab42d3cca18ddc1b4948f85b97858f7d | 2022-06-12T17:17:50.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_DISTILBERT_random | 2 | null | transformers | 26,272 | Entry not found |
huggingtweets/midudev | 5f6674d5cedf3925d91a3fc6c2b75c70a27c3d7d | 2022-06-09T18:48:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/midudev | 2 | null | transformers | 26,273 | ---
language: en
thumbnail: http://www.huggingtweets.com/midudev/1654800505422/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1526668354609680384/r85fytOs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🔴 EN DIRECTO twitch.tv/midudev</div>
<div style="text-align: center; font-size: 14px;">@midudev</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🔴 EN DIRECTO twitch.tv/midudev.
| Data | 🔴 EN DIRECTO twitch.tv/midudev |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 824 |
| Short tweets | 163 |
| Tweets kept | 2259 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11iwoc6b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @midudev's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s48ktc1m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s48ktc1m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/midudev')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
simecek/HumanRedoneDNADeberta | 4aaa9b03453148d11589fbc83eeb7baab8cc72a0 | 2022-06-10T05:19:32.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/HumanRedoneDNADeberta | 2 | null | transformers | 26,274 | Entry not found |
25khattab/vit_test_1_95 | 2a34af2e6279891a4a9dcf97f21088010d525a87 | 2022-06-10T01:40:54.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | 25khattab | null | 25khattab/vit_test_1_95 | 2 | null | transformers | 26,275 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vit_test_1_95
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9501661062240601
---
# vit_test_1_95
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
ahmeddbahaa/mt5-base-finetuned-wikilingua-ar | f2ed89c0967e3cfdc829c912c42a7907e32106d1 | 2022-06-10T13:00:43.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"ar",
"abstractive summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/mt5-base-finetuned-wikilingua-ar | 2 | null | transformers | 26,276 | ---
license: apache-2.0
tags:
- summarization
- mt5
- ar
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mt5-base-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-wikilingua-ar
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4936
- Rouge-1: 20.79
- Rouge-2: 7.6
- Rouge-l: 18.81
- Gen Len: 18.73
- Bertscore: 70.87
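A minimal summarization sketch, assuming the checkpoint is served through the standard `summarization` pipeline; the Arabic input below is only a placeholder:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="ahmeddbahaa/mt5-base-finetuned-wikilingua-ar")
text = "ضع هنا فقرة عربية طويلة تريد تلخيصها."  # placeholder paragraph to summarize
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```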
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
th4tkh13m/amazon_shoe_reviews | 1c141b971d33c22f7f4887e8242718e31a784b56 | 2022-06-10T08:58:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | th4tkh13m | null | th4tkh13m/amazon_shoe_reviews | 2 | null | transformers | 26,277 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: amazon_shoe_reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_shoe_reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mmillet/distilrubert-2ndfinetune-epru | 6d3d7b8ae780ba769fe496eb581121c9f0042123 | 2022-06-10T10:52:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | mmillet | null | mmillet/distilrubert-2ndfinetune-epru | 2 | null | transformers | 26,278 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-2ndfinetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-2ndfinetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3531
- Accuracy: 0.9054
- F1: 0.9034
- Precision: 0.9074
- Recall: 0.9054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4716 | 1.0 | 11 | 0.2851 | 0.8986 | 0.8945 | 0.9029 | 0.8986 |
| 0.2842 | 2.0 | 22 | 0.3041 | 0.8851 | 0.8796 | 0.8816 | 0.8851 |
| 0.167 | 3.0 | 33 | 0.2996 | 0.8986 | 0.8914 | 0.8997 | 0.8986 |
| 0.1527 | 4.0 | 44 | 0.2443 | 0.9189 | 0.9163 | 0.9222 | 0.9189 |
| 0.0926 | 5.0 | 55 | 0.2777 | 0.9054 | 0.9016 | 0.9059 | 0.9054 |
| 0.0897 | 6.0 | 66 | 0.3081 | 0.9122 | 0.9080 | 0.9147 | 0.9122 |
| 0.0438 | 7.0 | 77 | 0.3332 | 0.8986 | 0.8952 | 0.8993 | 0.8986 |
| 0.0433 | 8.0 | 88 | 0.3480 | 0.8851 | 0.8859 | 0.8896 | 0.8851 |
| 0.0398 | 9.0 | 99 | 0.3531 | 0.9054 | 0.9034 | 0.9074 | 0.9054 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
simecek/DNAPerceiver1_2epochs | 4a9b7019f818a7a12f5d3a00c04c4b459de5ccdf | 2022-06-13T20:40:40.000Z | [
"pytorch",
"tensorboard",
"perceiver",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | simecek | null | simecek/DNAPerceiver1_2epochs | 2 | null | transformers | 26,279 | ---
tags:
- generated_from_trainer
model-index:
- name: DNAPerceiver1_2epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNAPerceiver1_2epochs
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 36000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3597 | 0.3 | 6000 | 1.3565 |
| 1.3566 | 0.6 | 12000 | 1.3557 |
| 1.3514 | 0.89 | 18000 | 1.3474 |
| 1.345 | 1.19 | 24000 | 1.3410 |
| 1.3386 | 1.49 | 30000 | 1.3357 |
| 1.3348 | 1.79 | 36000 | 1.3330 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Vlasta/humandna_DEBERTAsmall_random | 696c6edfee7e7cdc613165f9bdcebf3491c88f9b | 2022-06-12T17:18:25.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/humandna_DEBERTAsmall_random | 2 | null | transformers | 26,280 | Entry not found |
binay1999/distilroberta-base-finetuned-wikitext2 | 8c39baf0f3c04553f2aa98b277d6b48a291f00d7 | 2022-06-10T13:18:33.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | binay1999 | null | binay1999/distilroberta-base-finetuned-wikitext2 | 2 | null | transformers | 26,281 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0842 | 1.0 | 2406 | 1.9219 |
| 1.9913 | 2.0 | 4812 | 1.8822 |
| 1.9596 | 3.0 | 7218 | 1.8215 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
adalbertojunior/clip-rpt | c383fcbd9cae60e085448d1be44b7045991b339f | 2022-06-10T14:35:02.000Z | [
"pytorch",
"tensorboard",
"vision-text-dual-encoder",
"feature-extraction",
"dataset:ydshieh/coco_dataset_script",
"transformers",
"generated_from_trainer",
"model-index"
] | feature-extraction | false | adalbertojunior | null | adalbertojunior/clip-rpt | 2 | null | transformers | 26,282 | ---
tags:
- generated_from_trainer
datasets:
- ydshieh/coco_dataset_script
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of [./models/clip-roberta](https://huggingface.co/./models/clip-roberta) on the ydshieh/coco_dataset_script 2017 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
facebook/roberta-hate-speech-dynabench-r2-target | f6e3ad172a28bd3d85d3c3cc760be080cd929e79 | 2022-06-10T22:36:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2012.15761",
"transformers"
] | text-classification | false | facebook | null | facebook/roberta-hate-speech-dynabench-r2-target | 2 | null | transformers | 26,283 | ---
language: en
---
# LFTW R2 Target
The R2 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
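A minimal classification sketch, assuming the checkpoint loads with the standard text-classification pipeline and that the labels stored in its config separate hateful from non-hateful content:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="facebook/roberta-hate-speech-dynabench-r2-target")
print(classifier("I really cannot stand rainy Mondays."))
```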
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! |
twieland/SCRATCH_ja-en_helsinki | c90aedfd6a3bccfb29fd9fa5c2c846f3b92e2d92 | 2022-06-11T23:01:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | twieland | null | twieland/SCRATCH_ja-en_helsinki | 2 | null | transformers | 26,284 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SCRATCH_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCRATCH_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5583
- Otaku Benchmark VN BLEU: 19.12
- Otaku Benchmark LN BLEU: 11.55
- Otaku Benchmark MANGA BLEU: 12.98
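A minimal translation sketch, assuming the fine-tuned Marian checkpoint is used through the standard translation pipeline:
```python
from transformers import pipeline
translator = pipeline("translation", model="twieland/SCRATCH_ja-en_helsinki")
print(translator("猫が好きです。")[0]["translation_text"])  # expected output along the lines of "I like cats."
```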
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.0252 | 0.02 | 2000 | 2.4140 |
| 2.8406 | 0.03 | 4000 | 2.2819 |
| 2.7505 | 0.05 | 6000 | 2.3018 |
| 2.6948 | 0.06 | 8000 | 2.1931 |
| 2.6408 | 0.08 | 10000 | 2.1724 |
| 2.6004 | 0.09 | 12000 | 2.1583 |
| 2.5685 | 0.11 | 14000 | 2.1203 |
| 2.5432 | 0.12 | 16000 | 2.1593 |
| 2.5153 | 0.14 | 18000 | 2.1009 |
| 2.4906 | 0.15 | 20000 | 2.0899 |
| 2.4709 | 0.17 | 22000 | 2.0512 |
| 2.4471 | 0.18 | 24000 | 2.0208 |
| 2.4295 | 0.2 | 26000 | 2.0773 |
| 2.4154 | 0.21 | 28000 | 2.0441 |
| 2.4008 | 0.23 | 30000 | 2.0235 |
| 2.3834 | 0.24 | 32000 | 2.0190 |
| 2.3709 | 0.26 | 34000 | 1.9831 |
| 2.3537 | 0.27 | 36000 | 1.9870 |
| 2.3486 | 0.29 | 38000 | 1.9692 |
| 2.3346 | 0.3 | 40000 | 1.9517 |
| 2.3195 | 0.32 | 42000 | 1.9800 |
| 2.3104 | 0.33 | 44000 | 1.9676 |
| 2.298 | 0.35 | 46000 | 1.9563 |
| 2.2905 | 0.36 | 48000 | 1.9217 |
| 2.2792 | 0.38 | 50000 | 1.9195 |
| 2.2714 | 0.39 | 52000 | 1.9109 |
| 2.2593 | 0.41 | 54000 | 1.9044 |
| 2.2582 | 0.42 | 56000 | 1.8876 |
| 2.2482 | 0.44 | 58000 | 1.8860 |
| 2.2394 | 0.45 | 60000 | 1.8887 |
| 2.2273 | 0.47 | 62000 | 1.8862 |
| 2.2255 | 0.48 | 64000 | 1.8705 |
| 2.2166 | 0.5 | 66000 | 1.8696 |
| 2.2075 | 0.51 | 68000 | 1.8657 |
| 2.1992 | 0.53 | 70000 | 1.8585 |
| 2.1969 | 0.54 | 72000 | 1.8526 |
| 2.1894 | 0.56 | 74000 | 1.8493 |
| 2.1817 | 0.57 | 76000 | 1.8480 |
| 2.1771 | 0.59 | 78000 | 1.8333 |
| 2.1683 | 0.6 | 80000 | 1.8342 |
| 2.1667 | 0.62 | 82000 | 1.8537 |
| 2.1546 | 0.63 | 84000 | 1.8261 |
| 2.1467 | 0.65 | 86000 | 1.8092 |
| 2.1421 | 0.66 | 88000 | 1.8137 |
| 2.1395 | 0.68 | 90000 | 1.8286 |
| 2.1313 | 0.69 | 92000 | 1.8042 |
| 2.1241 | 0.71 | 94000 | 1.7934 |
| 2.1214 | 0.72 | 96000 | 1.7940 |
| 2.12 | 0.74 | 98000 | 1.8064 |
| 2.1096 | 0.75 | 100000 | 1.7983 |
| 2.1035 | 0.77 | 102000 | 1.8089 |
| 2.0937 | 0.78 | 104000 | 1.7941 |
| 2.0893 | 0.8 | 106000 | 1.7791 |
| 2.0869 | 0.81 | 108000 | 1.7807 |
| 2.0845 | 0.83 | 110000 | 1.7852 |
| 2.0782 | 0.84 | 112000 | 1.7675 |
| 2.0755 | 0.86 | 114000 | 1.7756 |
| 2.0657 | 0.87 | 116000 | 1.7604 |
| 2.0614 | 0.89 | 118000 | 1.7447 |
| 2.0591 | 0.9 | 120000 | 1.7489 |
| 2.0586 | 0.92 | 122000 | 1.7550 |
| 2.0498 | 0.93 | 124000 | 1.7543 |
| 2.0455 | 0.95 | 126000 | 1.7510 |
| 2.04 | 0.96 | 128000 | 1.7439 |
| 2.0385 | 0.98 | 130000 | 1.7407 |
| 2.0267 | 0.99 | 132000 | 1.7467 |
| 2.0088 | 1.01 | 134000 | 1.7455 |
| 1.9826 | 1.02 | 136000 | 1.7210 |
| 1.9785 | 1.04 | 138000 | 1.7524 |
| 1.9777 | 1.05 | 140000 | 1.7272 |
| 1.9763 | 1.07 | 142000 | 1.7283 |
| 1.9736 | 1.08 | 144000 | 1.7210 |
| 1.9704 | 1.1 | 146000 | 1.7001 |
| 1.9625 | 1.11 | 148000 | 1.7112 |
| 1.9665 | 1.13 | 150000 | 1.7236 |
| 1.9592 | 1.14 | 152000 | 1.7169 |
| 1.9606 | 1.16 | 154000 | 1.6962 |
| 1.9571 | 1.17 | 156000 | 1.7064 |
| 1.9532 | 1.19 | 158000 | 1.6898 |
| 1.9465 | 1.2 | 160000 | 1.7004 |
| 1.9438 | 1.22 | 162000 | 1.7092 |
| 1.9435 | 1.23 | 164000 | 1.6927 |
| 1.9361 | 1.25 | 166000 | 1.6838 |
| 1.9369 | 1.26 | 168000 | 1.6784 |
| 1.9287 | 1.28 | 170000 | 1.6709 |
| 1.928 | 1.29 | 172000 | 1.6735 |
| 1.9227 | 1.31 | 174000 | 1.6689 |
| 1.9213 | 1.32 | 176000 | 1.6685 |
| 1.9152 | 1.34 | 178000 | 1.6635 |
| 1.9092 | 1.35 | 180000 | 1.6561 |
| 1.9059 | 1.37 | 182000 | 1.6673 |
| 1.9094 | 1.38 | 184000 | 1.6717 |
| 1.9006 | 1.4 | 186000 | 1.6593 |
| 1.8956 | 1.41 | 188000 | 1.6483 |
| 1.8972 | 1.43 | 190000 | 1.6635 |
| 1.8907 | 1.44 | 192000 | 1.6604 |
| 1.8885 | 1.46 | 194000 | 1.6465 |
| 1.8844 | 1.47 | 196000 | 1.6444 |
| 1.8799 | 1.49 | 198000 | 1.6307 |
| 1.8813 | 1.5 | 200000 | 1.6240 |
| 1.8693 | 1.52 | 202000 | 1.6102 |
| 1.8768 | 1.53 | 204000 | 1.6197 |
| 1.8678 | 1.55 | 206000 | 1.6275 |
| 1.8588 | 1.56 | 208000 | 1.6183 |
| 1.8585 | 1.58 | 210000 | 1.6197 |
| 1.8564 | 1.59 | 212000 | 1.6004 |
| 1.8493 | 1.61 | 214000 | 1.6078 |
| 1.85 | 1.62 | 216000 | 1.6001 |
| 1.8428 | 1.64 | 218000 | 1.6106 |
| 1.8428 | 1.65 | 220000 | 1.5866 |
| 1.8423 | 1.67 | 222000 | 1.5993 |
| 1.8352 | 1.68 | 224000 | 1.6052 |
| 1.8385 | 1.7 | 226000 | 1.5959 |
| 1.8307 | 1.71 | 228000 | 1.6024 |
| 1.8248 | 1.73 | 230000 | 1.5969 |
| 1.82 | 1.74 | 232000 | 1.5878 |
| 1.8254 | 1.76 | 234000 | 1.5934 |
| 1.8188 | 1.77 | 236000 | 1.5827 |
| 1.813 | 1.79 | 238000 | 1.5797 |
| 1.8128 | 1.8 | 240000 | 1.5758 |
| 1.8044 | 1.82 | 242000 | 1.5752 |
| 1.808 | 1.83 | 244000 | 1.5818 |
| 1.8025 | 1.85 | 246000 | 1.5772 |
| 1.7992 | 1.86 | 248000 | 1.5738 |
| 1.8021 | 1.88 | 250000 | 1.5752 |
| 1.7988 | 1.89 | 252000 | 1.5717 |
| 1.7967 | 1.91 | 254000 | 1.5690 |
| 1.7909 | 1.92 | 256000 | 1.5607 |
| 1.7942 | 1.94 | 258000 | 1.5618 |
| 1.7897 | 1.95 | 260000 | 1.5585 |
| 1.7871 | 1.97 | 262000 | 1.5576 |
| 1.7843 | 1.98 | 264000 | 1.5577 |
| 1.7888 | 2.0 | 266000 | 1.5583 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
SallyXue/DialoGPT-small-harrypotter | 2430f7eefd02901950f8927feae6136a357c7b0b | 2022-06-11T06:32:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | SallyXue | null | SallyXue/DialoGPT-small-harrypotter | 2 | null | transformers | 26,285 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
titi7242229/roberta-base-bne-finetuned_personality_multi_3 | 6cc8615b167266a68c57d9be3333f3354ce6c134 | 2022-06-11T13:13:47.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | titi7242229 | null | titi7242229/roberta-base-bne-finetuned_personality_multi_3 | 2 | null | transformers | 26,286 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned_personality_multi_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned_personality_multi_3
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1145
- Accuracy: 0.4847
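As an illustrative sketch only (the Spanish input sentence is an assumption, and the personality label names come from the model's config rather than this card), the full score distribution of the multi-class head can be inspected with the pipeline:

```python
from transformers import pipeline

# Illustrative only; the Spanish input sentence is an assumption.
classifier = pipeline(
    "text-classification",
    model="titi7242229/roberta-base-bne-finetuned_personality_multi_3",
    return_all_scores=True,
)
print(classifier("Me encanta conocer gente nueva y organizar planes con mis amigos."))
```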
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2498 | 1.0 | 63 | 2.2799 | 0.2236 |
| 2.3044 | 2.0 | 126 | 2.1644 | 0.2980 |
| 1.9017 | 3.0 | 189 | 1.9934 | 0.4127 |
| 2.2281 | 4.0 | 252 | 1.8517 | 0.4501 |
| 1.2955 | 5.0 | 315 | 1.7588 | 0.4870 |
| 1.221 | 6.0 | 378 | 1.7269 | 0.4888 |
| 1.1381 | 7.0 | 441 | 1.7617 | 0.4888 |
| 0.8415 | 8.0 | 504 | 1.8101 | 0.4853 |
| 0.6696 | 9.0 | 567 | 1.8325 | 0.4928 |
| 0.6646 | 10.0 | 630 | 1.8707 | 0.4841 |
| 0.3758 | 11.0 | 693 | 1.8766 | 0.4876 |
| 0.3477 | 12.0 | 756 | 1.9171 | 0.4905 |
| 0.2854 | 13.0 | 819 | 1.9203 | 0.4980 |
| 0.2713 | 14.0 | 882 | 2.0089 | 0.4813 |
| 0.3434 | 15.0 | 945 | 2.0130 | 0.4905 |
| 0.0758 | 16.0 | 1008 | 2.0230 | 0.4922 |
| 0.2518 | 17.0 | 1071 | 2.0793 | 0.4824 |
| 0.0783 | 18.0 | 1134 | 2.0920 | 0.4830 |
| 0.0933 | 19.0 | 1197 | 2.1067 | 0.4836 |
| 0.184 | 20.0 | 1260 | 2.1145 | 0.4847 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mmillet/distilrubert-tiny-2nd-finetune-epru | db2b5583f7970b2bc52ad200b56326f0feef3874 | 2022-06-11T09:50:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | mmillet | null | mmillet/distilrubert-tiny-2nd-finetune-epru | 2 | null | transformers | 26,287 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-2nd-finetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-2nd-finetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3546
- Accuracy: 0.9325
- F1: 0.9328
- Precision: 0.9359
- Recall: 0.9325
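For context, the four reported numbers are standard classification metrics; a minimal sketch of how they are typically computed is shown below. The toy label arrays are placeholders, and the averaging strategy is an assumption since the card does not state it.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Toy labels for illustration only; the real evaluation set is not shown here.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted"  # averaging mode is an assumption
)
print(accuracy, precision, recall, f1)
```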
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0686 | 1.0 | 12 | 0.2931 | 0.9141 | 0.9142 | 0.9163 | 0.9141 |
| 0.0269 | 2.0 | 24 | 0.2690 | 0.9448 | 0.9444 | 0.9449 | 0.9448 |
| 0.0282 | 3.0 | 36 | 0.3140 | 0.9141 | 0.9140 | 0.9168 | 0.9141 |
| 0.0185 | 4.0 | 48 | 0.2977 | 0.9571 | 0.9570 | 0.9576 | 0.9571 |
| 0.0103 | 5.0 | 60 | 0.3368 | 0.9264 | 0.9265 | 0.9296 | 0.9264 |
| 0.0088 | 6.0 | 72 | 0.3067 | 0.9387 | 0.9385 | 0.9389 | 0.9387 |
| 0.0152 | 7.0 | 84 | 0.3660 | 0.9264 | 0.9263 | 0.9282 | 0.9264 |
| 0.0315 | 8.0 | 96 | 0.3793 | 0.9325 | 0.9328 | 0.9359 | 0.9325 |
| 0.0258 | 9.0 | 108 | 0.3546 | 0.9325 | 0.9328 | 0.9359 | 0.9325 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
reaprtripr/pretrained_java_bert | d614f8938d8b2533556e09a53b74a6ae57196f34 | 2022-06-11T09:53:45.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | reaprtripr | null | reaprtripr/pretrained_java_bert | 2 | null | transformers | 26,288 | Entry not found |
huggingtweets/dekotale | 76797d728085f5d33cd8fbdd88718e83ee17daa1 | 2022-06-11T12:08:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dekotale | 2 | null | transformers | 26,289 | ---
language: en
thumbnail: http://www.huggingtweets.com/dekotale/1654949168644/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1303333944360869888/DcCZvOOS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dekotale</div>
<div style="text-align: center; font-size: 14px;">@dekotale</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dekotale.
| Data | Dekotale |
| --- | --- |
| Tweets downloaded | 3125 |
| Retweets | 1528 |
| Short tweets | 433 |
| Tweets kept | 1164 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1l1uql9a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dekotale's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fv8rmutq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fv8rmutq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dekotale')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tuni/distilbert-base-uncased-finetuned-cola | a7644cf35ffae3f238ef84e657e8a7b0b7d74bed | 2022-06-11T15:12:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | tuni | null | tuni/distilbert-base-uncased-finetuned-cola | 2 | null | transformers | 26,290 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5324115893962171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7035
- Matthews Correlation: 0.5324
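Matthews correlation is the standard CoLA metric; a minimal sketch of computing it with the GLUE metric script is below. The toy predictions and references are placeholders only.

```python
from datasets import load_metric

# Toy values for illustration only; the real evaluation uses the CoLA validation split.
metric = load_metric("glue", "cola")
print(metric.compute(predictions=[1, 0, 1, 1, 0], references=[1, 0, 0, 1, 0]))
# -> {'matthews_correlation': ...}
```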
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.785228097724678e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5005 | 0.4121 |
| 0.318 | 2.0 | 1070 | 0.5265 | 0.4977 |
| 0.1887 | 3.0 | 1605 | 0.7035 | 0.5324 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
aggtamv/wav2vec_2.0_feat_enc | 562795a2c7e0fd6998c8afd8be04b5cc47b225b8 | 2022-06-12T07:49:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | aggtamv | null | aggtamv/wav2vec_2.0_feat_enc | 2 | null | transformers | 26,291 | Entry not found |
seomh/distilbert-base-uncased-finetuned-squad | 656c39f6450da89e03e8c441f0f54233c44bf6e4 | 2022-06-15T06:49:56.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | seomh | null | seomh/distilbert-base-uncased-finetuned-squad | 2 | null | transformers | 26,292 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0083
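As a hedged usage sketch (the question and context below are assumptions, not drawn from SQuAD), the checkpoint can be queried with the `question-answering` pipeline:

```python
from transformers import pipeline

# Illustrative only; the question and context are assumptions.
qa = pipeline(
    "question-answering",
    model="seomh/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```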
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2258 | 1.0 | 5533 | 0.0560 |
| 0.952 | 2.0 | 11066 | 0.0096 |
| 0.7492 | 3.0 | 16599 | 0.0083 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
erickfm/t5-base-finetuned-bias-sweep-c6a8795b | 9483ecde9bea97efb792dba0847f563a074a60a3 | 2022-06-11T18:18:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/t5-base-finetuned-bias-sweep-c6a8795b | 2 | null | transformers | 26,293 | Entry not found |
MyMild/bert-finetuned-squad | d1556591cdb4a2ee9b50d322dbb7afa30b710e04 | 2022-06-11T21:24:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | MyMild | null | MyMild/bert-finetuned-squad | 2 | null | transformers | 26,294 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ahmeddbahaa/arabert2arabert-finetuned-ar-wikilingua | 689b592375136c258662bac19822f5dc820a65a8 | 2022-06-12T05:51:47.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"ar",
"arabert",
"arabert2arabert",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/arabert2arabert-finetuned-ar-wikilingua | 2 | null | transformers | 26,295 | ---
tags:
- summarization
- ar
- encoder-decoder
- arabert
- arabert2arabert
- Abstractive Summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: arabert2arabert-finetuned-ar-wikilingua
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arabert2arabert-finetuned-ar-wikilingua
This model is a fine-tuned version of [](https://huggingface.co/) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6877
- Rouge-1: 13.2
- Rouge-2: 3.43
- Rouge-l: 12.45
- Gen Len: 20.0
- Bertscore: 64.88
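As a rough usage sketch (the Arabic input text, the assumption that a tokenizer is bundled with the checkpoint, and the generation settings are all not taken from this card), the encoder-decoder checkpoint can be driven as follows:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Illustrative only; input text and generation settings are assumptions.
model_name = "ahmeddbahaa/arabert2arabert-finetuned-ar-wikilingua"
tokenizer = AutoTokenizer.from_pretrained(model_name)  # assumes a tokenizer is bundled
model = EncoderDecoderModel.from_pretrained(model_name)

text = "ضع هنا نص المقال العربي المراد تلخيصه."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=20,   # matches the reported generation length; still an assumption
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```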
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 6.7667 | 1.0 | 156 | 5.3846 | 3.36 | 0.56 | 3.27 | 20.0 | 60.6 |
| 5.257 | 2.0 | 312 | 5.0424 | 5.44 | 0.88 | 5.35 | 20.0 | 60.56 |
| 4.743 | 3.0 | 468 | 4.8294 | 9.21 | 1.8 | 8.93 | 20.0 | 62.91 |
| 4.3832 | 4.0 | 624 | 4.7240 | 9.88 | 2.19 | 9.6 | 20.0 | 62.65 |
| 4.1166 | 5.0 | 780 | 4.6861 | 11.61 | 2.86 | 11.13 | 20.0 | 63.71 |
| 3.91 | 6.0 | 936 | 4.6692 | 12.27 | 3.11 | 11.76 | 20.0 | 64.07 |
| 3.7569 | 7.0 | 1092 | 4.6805 | 12.93 | 3.38 | 12.28 | 20.0 | 64.61 |
| 3.6454 | 8.0 | 1248 | 4.6877 | 13.2 | 3.43 | 12.45 | 20.0 | 64.88 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
hckhck/buda_learning | 31b9ec0d70e752011603b9641445da431b6e4cd1 | 2022-06-12T02:19:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:afl-3.0"
] | text-generation | false | hckhck | null | hckhck/buda_learning | 2 | null | transformers | 26,296 | ---
license: afl-3.0
---
|
donmaclean/dfm_test | 6d60074a4c00c3d507a3d27a727ce467c257460a | 2022-06-20T12:21:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | donmaclean | null | donmaclean/dfm_test | 2 | null | transformers | 26,297 | Entry not found |
abdoutony207/m2m100_418M-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch | c4240d0d0916c58181b5d04c2d825ec35b8aac42 | 2022-06-12T10:05:13.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | abdoutony207 | null | abdoutony207/m2m100_418M-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch | 2 | null | transformers | 26,298 | Entry not found |
abdoutony207/m2m100_418M-evaluated-en-to-ar-1000instancesUNMULTI-leaningRate2e-05-batchSize8 | f3e625e0ce5d904dfdbe0cd7512bccd4ee7290bf | 2022-06-12T13:19:44.000Z | [
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | abdoutony207 | null | abdoutony207/m2m100_418M-evaluated-en-to-ar-1000instancesUNMULTI-leaningRate2e-05-batchSize8 | 2 | null | transformers | 26,299 | Entry not found |