modelId (string, 4-112) | sha (string, 40) | lastModified (string, 24) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38, nullable) | config (null) | id (string, 4-112) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
quantresearch/tst_t1_base_20_1 | 3679868bb3aacb56997b36a02636aa3dad332051 | 2021-09-16T04:53:03.000Z | [
"pytorch",
"transformers"
] | null | false | quantresearch | null | quantresearch/tst_t1_base_20_1 | 0 | null | transformers | 35,900 | Entry not found |
quantresearch/tst_t1_base_20_2 | 9956f16c64a4e4582dd030229d76ffcd0665c860 | 2021-09-16T04:54:28.000Z | [
"pytorch",
"transformers"
] | null | false | quantresearch | null | quantresearch/tst_t1_base_20_2 | 0 | null | transformers | 35,901 | Entry not found |
quantresearch/tst_t2_reweight_10_2 | 76c59ea8cc9bb0c5390fcefebd5e164581006ee1 | 2021-09-16T09:36:55.000Z | [
"pytorch",
"transformers"
] | null | false | quantresearch | null | quantresearch/tst_t2_reweight_10_2 | 0 | null | transformers | 35,902 | Entry not found |
rafagudinov/ru_rent_estate_ads | 86a3803bd2c2094fbe14505edb1af26b412a8556 | 2022-01-14T00:00:40.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | rafagudinov | null | rafagudinov/ru_rent_estate_ads | 0 | null | transformers | 35,903 | Entry not found |
rafanegrette/t5_spa_gua | 097d3937eece017c1c63d0a2b3cfce5dff250f25 | 2021-11-21T17:53:33.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | rafanegrette | null | rafanegrette/t5_spa_gua | 0 | null | transformers | 35,904 | ## Translator of Spanish/Wayuunaiki with T5 model ##
This is a fine-tuned model based on T5, trained on a Spanish-Wayuunaiki corpus.
Wayuunaiki is the native language of the Wayuu, the largest indigenous people in northern Colombia.
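A minimal inference sketch with the Transformers library is shown below. The exact prompt format expected by this checkpoint (for example, whether a task prefix is required) is not documented here, so the plain Spanish input is an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "rafanegrette/t5_spa_gua"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Spanish input sentence (placeholder); the prompt format is an assumption.
inputs = tokenizer("El agua es vida.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```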
|
rafiulrumy/wav2vec2-large-xlsr-53-demo-colab | 9437b304e45ae0f9779188a7fb147c1d607f0579 | 2021-12-16T05:09:16.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | rafiulrumy | null | rafiulrumy/wav2vec2-large-xlsr-53-demo-colab | 0 | null | transformers | 35,905 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7860
- Wer: 1.1067
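For reference, a minimal transcription sketch is shown below; it is not part of the original card and assumes 16 kHz mono audio loaded from a placeholder file path.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "rafiulrumy/wav2vec2-large-xlsr-53-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# "sample.wav" is a placeholder path; XLSR-53 expects 16 kHz input.
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```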
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.2273 | 44.42 | 400 | 3.3544 | 1.0 |
| 0.9228 | 88.84 | 800 | 4.7054 | 1.1601 |
| 0.1423 | 133.32 | 1200 | 5.9489 | 1.1578 |
| 0.0751 | 177.74 | 1600 | 5.5939 | 1.1717 |
| 0.0554 | 222.21 | 2000 | 6.1230 | 1.1717 |
| 0.0356 | 266.63 | 2400 | 6.2845 | 1.1613 |
| 0.0288 | 311.11 | 2800 | 6.6109 | 1.2100 |
| 0.0223 | 355.53 | 3200 | 6.5605 | 1.1299 |
| 0.0197 | 399.95 | 3600 | 7.1242 | 1.1682 |
| 0.0171 | 444.42 | 4000 | 7.2452 | 1.1578 |
| 0.0149 | 488.84 | 4400 | 7.4048 | 1.0684 |
| 0.0118 | 533.32 | 4800 | 6.6227 | 1.1172 |
| 0.011 | 577.74 | 5200 | 6.7909 | 1.1566 |
| 0.0095 | 622.21 | 5600 | 6.8088 | 1.1102 |
| 0.0077 | 666.63 | 6000 | 7.4451 | 1.1311 |
| 0.0062 | 711.11 | 6400 | 6.8486 | 1.0777 |
| 0.0051 | 755.53 | 6800 | 6.8812 | 1.1241 |
| 0.0051 | 799.95 | 7200 | 6.9987 | 1.1450 |
| 0.0041 | 844.42 | 7600 | 7.3048 | 1.1323 |
| 0.0044 | 888.84 | 8000 | 6.6644 | 1.1125 |
| 0.0031 | 933.32 | 8400 | 6.6298 | 1.1148 |
| 0.0027 | 977.74 | 8800 | 6.7860 | 1.1067 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
rafiulrumy/wav2vec2-large-xlsr-hindi-demo-colab | 9632da4365a55d7a913f77da1d5613b15a63fc96 | 2021-12-08T07:47:56.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | rafiulrumy | null | rafiulrumy/wav2vec2-large-xlsr-hindi-demo-colab | 0 | null | transformers | 35,906 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hindi-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindi-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
rahulchakwate/albert-base-finetuned-squad | 374ffc1c2128dc039d4e0aaf566e543b16d48bbc | 2021-12-14T19:03:34.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulchakwate | null | rahulchakwate/albert-base-finetuned-squad | 0 | null | transformers | 35,907 | Entry not found |
rahulchakwate/albert-xxlarge-finetuned-squad | 1186f0c98573d7da2f9793a583e92d9bf9a64758 | 2021-12-13T04:04:51.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulchakwate | null | rahulchakwate/albert-xxlarge-finetuned-squad | 0 | null | transformers | 35,908 | Entry not found |
rahulchakwate/bert-finetuned-squad-cased | 2a1d1c467dcc0a6343079eec8a60e718a362e4d4 | 2021-12-10T01:31:27.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulchakwate | null | rahulchakwate/bert-finetuned-squad-cased | 0 | null | transformers | 35,909 | Entry not found |
rahulchakwate/distilbert-base-finetuned-squad | f135643a04f048abda0c2242e29d4212d534f453 | 2021-12-14T19:07:25.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulchakwate | null | rahulchakwate/distilbert-base-finetuned-squad | 0 | null | transformers | 35,910 | Entry not found |
rahulchakwate/roberta-base-finetuned-squad | cf9c92a4f36b2b441d31e9c49d90d794d594c2ad | 2021-12-13T02:56:58.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulchakwate | null | rahulchakwate/roberta-base-finetuned-squad | 0 | null | transformers | 35,911 | Entry not found |
rahulchakwate/roberta-large-finetuned-squad | 1bf9a766200962d996bb73f236ec7ce8552043d5 | 2021-12-14T21:10:05.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | rahulchakwate | null | rahulchakwate/roberta-large-finetuned-squad | 0 | null | transformers | 35,912 | Entry not found |
rajivratn/gupshup_e2e_mbart | 5f66adb8b3d10d5fd5d8f7f02e4c851d1ba9449d | 2021-11-06T17:42:01.000Z | [
"pytorch"
] | null | false | rajivratn | null | rajivratn/gupshup_e2e_mbart | 0 | null | null | 35,913 | Entry not found |
rajratnpranesh/DCS_sanskrit_albert | 2d6007c9028513123ee421480661a930b84aec7e | 2020-07-25T15:53:47.000Z | [
"pytorch",
"albert",
"feature-extraction",
"transformers"
] | feature-extraction | false | rajratnpranesh | null | rajratnpranesh/DCS_sanskrit_albert | 0 | null | transformers | 35,914 | Entry not found |
ravinyu/codeparrot-small | d894cfc9317f54779b74a0fcd66cdc63f28ebba6 | 2022-01-23T06:53:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ravinyu | null | ravinyu/codeparrot-small | 0 | null | transformers | 35,915 | Entry not found |
ravinyu/codeparrot | 555ce5341da3dad37ee7c58cfa1627a28669274d | 2022-01-20T05:21:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ravinyu | null | ravinyu/codeparrot | 0 | null | transformers | 35,916 | Entry not found |
ravirajoshi/wav2vec2-large-xls-r-300m-hindi-lm-boosted | a257e1bb57514e7456e057cfffe7361a50267c22 | 2022-03-24T11:54:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ravirajoshi | null | ravirajoshi/wav2vec2-large-xls-r-300m-hindi-lm-boosted | 0 | null | transformers | 35,917 | ---
language:
- hi
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: wav2vec2-large-xls-r-300m-hindi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7049
- Wer: 0.3200
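A minimal decoding sketch follows. It assumes the repository ships the n-gram language model files needed by `Wav2Vec2ProcessorWithLM` (the "lm-boosted" name suggests so) and that `pyctcdecode` and `kenlm` are installed; the audio path is a placeholder.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_name = "ravirajoshi/wav2vec2-large-xls-r-300m-hindi-lm-boosted"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)  # needs pyctcdecode + kenlm
model = Wav2Vec2ForCTC.from_pretrained(model_name)

speech, sr = torchaudio.load("hindi_sample.wav")  # placeholder path, 16 kHz expected
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# With the attached LM, decoding runs on the raw logits rather than argmax ids.
print(processor.batch_decode(logits.numpy()).text[0])
```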
|
ravirajoshi/wav2vec2-large-xls-r-300m-marathi-lm-boosted | 54b3f920a7567bd0948b77aec3c141c2cb45fa60 | 2022-03-24T11:58:30.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"transformers",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ravirajoshi | null | ravirajoshi/wav2vec2-large-xls-r-300m-marathi-lm-boosted | 0 | null | transformers | 35,918 | ---
language:
- mr
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
model-index:
- name: wav2vec2-large-xls-r-300m-marathi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-marathi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5656
- Wer: 0.2156
|
ravishs/wav2vec2-large-xls-r-300m-tamil-colab | 8ffae7b779f75b280d6467abbd6e73ab0bfa15df | 2022-02-03T12:06:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ravishs | null | ravishs/wav2vec2-large-xls-r-300m-tamil-colab | 0 | null | transformers | 35,919 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tamil-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
raybin/model_out | 7318e66c5933db166bd6cd112ab262c7d74de134 | 2021-05-20T04:03:40.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | raybin | null | raybin/model_out | 0 | null | transformers | 35,920 | Entry not found |
rays2pix/dummy-model | 5aaf07d6a5666f2857ecd82b900d78bf87d9fbeb | 2021-07-03T01:15:26.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | rays2pix | null | rays2pix/dummy-model | 0 | null | transformers | 35,921 | Entry not found |
reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lt-ft | 8893bef750751a30902bd8450351423172682134 | 2022-02-14T13:39:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | reach-vb | null | reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lt-ft | 0 | 1 | transformers | 35,922 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-1B-common_voice7-lt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1B-common_voice7-lt-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5101
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 36
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 900
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.3491 | 31.24 | 500 | 3.9827 | 1.0 |
| 0.0421 | 62.48 | 1000 | 2.9544 | 1.0 |
| 0.0163 | 93.73 | 1500 | 2.5101 | 1.0 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
rebeccakoganlee/distilbert-base-uncased-finetuned-ner | 1947f907b8cd0b3fba6257a4114df43abd148001 | 2021-11-24T16:17:06.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | rebeccakoganlee | null | rebeccakoganlee/distilbert-base-uncased-finetuned-ner | 0 | null | transformers | 35,923 | Entry not found |
recobo/agri-sentence-transformer | c7c8254c17487fe0f30ff0dc9b651fc1064fea23 | 2022-01-24T17:36:17.000Z | [
"pytorch",
"bert",
"feature-extraction",
"english",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | recobo | null | recobo/agri-sentence-transformer | 0 | 2 | sentence-transformers | 35,924 | ---
pipeline_tag: sentence-similarity
language: english
tags:
- sentence-transformers
- sentence-similarity
- transformers
---
# recobo/agri-sentence-transformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model was built using [recobo/agriculture-bert-uncased](https://huggingface.co/recobo/agriculture-bert-uncased), which is a BERT model trained on 6.5 million passages from the agricultural domain. Hence, this model is expected to perform well on sentence similarity tasks specifically for agricultural text data.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["A man is eating food.", "A man is eating a piece of bread"]
model = SentenceTransformer('recobo/agri-sentence-transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
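Since the model targets semantic search in the agricultural domain, a short similarity check can be added along the same lines (the example sentences below are illustrative, not from the original card):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('recobo/agri-sentence-transformer')
query_emb = model.encode("nitrogen fertilizer application rates", convert_to_tensor=True)
doc_emb = model.encode(
    ["Effects of nitrogen fertilization on wheat yield", "History of the tractor"],
    convert_to_tensor=True,
)
print(util.cos_sim(query_emb, doc_emb))  # higher score = more semantically similar
```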
|
redadmiral/headlines_test_small_example | 6d341a03b1b9064f4fc2ef5af48b10edb063a65a | 2021-12-30T10:07:34.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | redadmiral | null | redadmiral/headlines_test_small_example | 0 | null | transformers | 35,925 | This model is a fine-tuned version of T-Systems' [summarization model v1](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1).
We used 1,000 headline-content pairs from BR24 articles for the fine-tuning process.
Despite the small amount of training data, the tone of the output has changed noticeably: many of the resulting summaries read like headlines.
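A minimal generation sketch is shown below; it uses the `source_prefix` and length limits listed under Training, and the input article is a placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "redadmiral/headlines_test_small_example"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "Der Landtag hat am Mittwoch ein neues Klimaschutzgesetz beschlossen."  # placeholder text
inputs = tokenizer("summarize: " + article, max_length=400, truncation=True, return_tensors="pt")
headline_ids = model.generate(**inputs, max_length=35, num_beams=4)
print(tokenizer.decode(headline_ids[0], skip_special_tokens=True))
```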
## Training
We used the following parameters for training this model:
+ base model: deutsche-telekom/mt5-small-sum-de-en-v1
+ source_prefix: "summarize: "
+ batch size: 4
+ max_source_length: 400
+ max_target_length: 35
+ weight_decay: 0.01
+ number of train epochs: 1
+ learning rate: 5e-5
## License
Since the base model is trained on the MLSUM dataset, this model may not be used for commercial use.
## Stats
| Model | Rouge1 | Rouge2 | RougeL | RougeLSum |
|-----------------------------------------|-----------|----------|-----------|-----------|
| headlines_test_small_example | 13.573500 | 3.694700 | 12.560600 | 12.60000 |
| deutsche-telekom/mt5-small-sum-de-en-v1 | 10.6488 | 2.9313 | 10.0527 | 10.0523 |
|
renBaikau/alphaDelay | b99a0c3da21a024afc07f61696ee2996ada07780 | 2021-11-22T12:21:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | renBaikau | null | renBaikau/alphaDelay | 0 | null | transformers | 35,926 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: alphaDelay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alphaDelay
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6648
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 82.3335 | 5.0 | 25 | 14.0648 | 1.0 |
| 6.1049 | 10.0 | 50 | 3.7145 | 1.0 |
| 3.9873 | 15.0 | 75 | 3.6648 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
reshinthadith/FlashFill-T5 | ca57600cac916819746481d7952449095312fe56 | 2021-11-07T05:01:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | reshinthadith | null | reshinthadith/FlashFill-T5 | 0 | null | transformers | 35,927 | |
rewardsignal/behavior_cloning | 6e2b26939d3095c6ba996833fceab8362a48c469 | 2021-06-03T15:41:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rewardsignal | null | rewardsignal/behavior_cloning | 0 | null | transformers | 35,928 | This model was trained using prompt_responses_full.csv which you can read more about [here](https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts).
All other training parameters and settings are accessible via the config.json and trainer_state.json files of the individual checkpoints. |
rewardsignal/reddit_reward_model | 4b9112d42e66b4e4999a36abd284b93e83e266ac | 2021-06-04T01:35:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | rewardsignal | null | rewardsignal/reddit_reward_model | 0 | null | transformers | 35,929 | This model was trained using comparisons_train.csv which you can read more about [here](https://huggingface.co/datasets/projectaligned/reddit_writingprompts_full).
All other training parameters and settings are accessible via the config.json and trainer_state.json files of the individual checkpoints. |
ricardo-filho/bert_base_faquad | 309ea34ead2d24ab4303a6267a15a22d4edccf25 | 2021-08-31T18:38:51.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ricardo-filho | null | ricardo-filho/bert_base_faquad | 0 | null | transformers | 35,930 | Entry not found |
ricardo-filho/bert_large_faquad | 256a34a3a6908cbbab59d5eb21cd5b964a22f095 | 2021-08-31T18:28:34.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ricardo-filho | null | ricardo-filho/bert_large_faquad | 0 | null | transformers | 35,931 | Entry not found |
ricardo-filho/sbertimbau-base-allnli-mnrl | 7bded7e55732aa4db0c3fd322afbe699d63147ec | 2021-08-10T21:09:32.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ricardo-filho | null | ricardo-filho/sbertimbau-base-allnli-mnrl | 0 | null | sentence-transformers | 35,932 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8066 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 806,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 807,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ricardo-filho/sbertimbau-base-nli-sts | 09f415953f9ad6b5bf81bbf71af052198c8770d2 | 2021-08-11T03:04:08.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ricardo-filho | null | ricardo-filho/sbertimbau-base-nli-sts | 0 | null | sentence-transformers | 35,933 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 356 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 143,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
richiellei/Childe | b169ac84420b3818a086659f1ea57c2a1d4287f2 | 2022-01-18T20:32:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | richiellei | null | richiellei/Childe | 0 | null | transformers | 35,934 | ---
tags:
- conversational
---
# Childe DialoGPT Model |
richiellei/Childe3 | 3b41aa0dc998e6e77272d93d6fc5fcb2fb70a4fb | 2022-01-18T21:38:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | richiellei | null | richiellei/Childe3 | 0 | null | transformers | 35,935 | ---
tags:
- conversational
---
# Childe3 DialoGPT Model |
rifkat/robert_BPE_zinc100k | c84f6ab4938e7b4941c369ff198f985bfffc0ba0 | 2021-07-23T17:04:02.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | rifkat | null | rifkat/robert_BPE_zinc100k | 0 | null | transformers | 35,936 | Entry not found |
rjrohit/wav2vec2-base-rj-try-5 | bcd5e9d8478fdcb6a063aba7a3cfc250fca82a8b | 2022-02-07T09:59:45.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | rjrohit | null | rjrohit/wav2vec2-base-rj-try-5 | 0 | null | transformers | 35,937 | Entry not found |
rlagusrlagus123/XTC20000 | e727e36646205a8d5c74a11d34dc671ddf50e343 | 2021-12-19T11:00:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rlagusrlagus123 | null | rlagusrlagus123/XTC20000 | 0 | null | transformers | 35,938 | ---
tags:
- conversational
---
---
12 epochs, batch size 2, gradient accumulation steps 2, tail 20000 |
rlagusrlagus123/XTC4096 | 82793f6e119deca4b07206b99f3e1e4f46d25ff9 | 2021-12-19T11:19:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rlagusrlagus123 | null | rlagusrlagus123/XTC4096 | 0 | null | transformers | 35,939 | ---
tags:
- conversational
---
---
12 epochs, batch size 4, gradient accumulation steps 1, tail 4096.
This seems to be the optimal setup. |
rmicheal48/DialoGPT-small-steven_universe | f4eb531024e38c05edb8018847db6806ce827be7 | 2022-01-02T12:39:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rmicheal48 | null | rmicheal48/DialoGPT-small-steven_universe | 0 | null | transformers | 35,940 | ---
tags:
- conversational
---
# Steven Universe DialoGPT Model |
rndlr96/Focalbest | 6339d85a8871403ec715b1b625ba6453b37e81dd | 2021-05-20T04:29:24.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/Focalbest | 0 | null | transformers | 35,941 | Entry not found |
rndlr96/Nfocal_label_v2 | 52a5f18644d162520c0c59e58030e1b4ae5479c2 | 2021-05-20T04:29:46.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/Nfocal_label_v2 | 0 | null | transformers | 35,942 | Entry not found |
rndlr96/Nfocal_label_v2_512 | 9815cbe94af104ebbf411de5df763afe28541765 | 2021-05-20T04:30:09.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/Nfocal_label_v2_512 | 0 | null | transformers | 35,943 | Entry not found |
rndlr96/bce_cls_5e_512 | 2e820e34d4e080b732844202283e9f38b5437ccd | 2021-05-20T04:30:32.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/bce_cls_5e_512 | 0 | null | transformers | 35,944 | Entry not found |
rndlr96/cls_256 | 1096926de1cfd3b0ff530186126314eb407abd88 | 2021-05-20T04:30:54.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/cls_256 | 0 | null | transformers | 35,945 | Entry not found |
rndlr96/kobert_cls_ipc | 8d2b761709cc8212db14ae3c408417f435fcf52d | 2021-05-20T04:31:13.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/kobert_cls_ipc | 0 | null | transformers | 35,946 | Entry not found |
rndlr96/kobert_label_ipc | ca58c53a954dc8a85a6776aeb8ba51e95e39a742 | 2021-05-20T04:31:33.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | rndlr96 | null | rndlr96/kobert_label_ipc | 0 | null | transformers | 35,947 | Entry not found |
robot-bengali-2/sahajbert2 | dccdc6429c694eb6d4ba27c641136cbb663ec09d | 2021-09-03T05:49:14.000Z | [
"pytorch",
"albert",
"transformers"
] | null | false | robot-bengali-2 | null | robot-bengali-2/sahajbert2 | 0 | null | transformers | 35,948 | Entry not found |
rodrigodz/DialoGPT-medium-dxd | 94cb0ba287584b70c02f30422fbeb7d0309e1e2e | 2021-09-07T04:50:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rodrigodz | null | rodrigodz/DialoGPT-medium-dxd | 0 | null | transformers | 35,949 | ---
tags:
- conversational
---
# Issei DialoGPT Model |
rohitsroch/hybrid_hbh_bart-base_icsi_sum | 978220e767d7287a5f0671af37d3215eeaf92c58 | 2022-06-12T23:10:15.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:icsi",
"transformers",
"dialogue-summarization",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | rohitsroch | null | rohitsroch/hybrid_hbh_bart-base_icsi_sum | 0 | null | transformers | 35,950 | ---
language:
- en
license: apache-2.0
tags:
- dialogue-summarization
model_index:
- name: hybrid_hbh_bart-base_icsi_sum
results:
- task:
name: Summarization
type: summarization
datasets:
- icsi
---
## Paper
## [Domain Adapted Abstractive Summarization of Dialogue using Transfer Learning](https://dl.acm.org/doi/10.1145/3508546.3508640)
Authors: *Rohit Sroch*
## Abstract
Recently, the abstractive dialogue summarization task has been gaining a lot of attention from researchers. Also, unlike news articles and documents with well-structured text, dialogue differs in the sense that it often comes from two or more interlocutors, exchanging information with each other and having an inherent hierarchical structure based on the sequence of utterances by different speakers. This paper proposes a simple but effective hybrid approach that consists of two modules and uses transfer learning by leveraging pretrained language models (PLMs) to generate an abstractive summary. The first module highlights important utterances, capturing the utterance level relationship by adapting an auto-encoding model like BERT based on the unsupervised or supervised method. And then, the second module generates a concise abstractive summary by adapting encoder-decoder models like T5, BART, and PEGASUS. Experiment results on benchmark datasets show that our approach achieves a state-of-the-art performance by adapting to dialogue scenarios and can also be helpful in low-resource settings for domain adaptation.
*Rohit Sroch. 2021. Domain Adapted Abstractive Summarization of Dialogue using Transfer Learning. In 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence (ACAI'21). Association for Computing Machinery, New York, NY, USA, Article 94, 1–6. https://doi.org/10.1145/3508546.3508640*
# hybrid_hbh_bart-base_icsi_sum
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the ICSI dataset for the dialogue summarization task.
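For illustration, a minimal inference sketch with the Transformers library is given below (the meeting snippet is a placeholder; in the full hybrid pipeline the first module would select the important utterances beforehand):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "rohitsroch/hybrid_hbh_bart-base_icsi_sum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Placeholder meeting transcript; real inputs would be the highlighted utterances.
dialogue = "A: Let's finalize the remote control design. B: I will draft the button layout by Friday."
inputs = tokenizer(dialogue, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```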
## Model description
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100.0
- label_smoothing_factor: 0.1
### Results on Test Set
- predict_gen_len = 480.0
- predict_rouge1 = **46.8707**
- predict_rouge2 = **10.1337**
- predict_rougeL = **19.3386**
- predict_rougeLsum = **43.6989**
- predict_samples = 6
- predict_samples_per_second = 0.54
- predict_steps_per_second = 0.27
### Framework versions
- Transformers>=4.8.0
- Pytorch>=1.6.0
- Datasets>=1.10.2
- Tokenizers>=0.10.3
If you use this model, please cite the following paper:
```
@inproceedings{10.1145/3508546.3508640,
author = {Sroch, Rohit},
title = {Domain Adapted Abstractive Summarization of Dialogue Using Transfer Learning},
year = {2021},
isbn = {9781450385053},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3508546.3508640},
doi = {10.1145/3508546.3508640},
articleno = {94},
numpages = {6},
keywords = {encoder-decoder, T5, abstractive summary, PEGASUS, BART, dialogue summarization, PLMs, BERT},
location = {Sanya, China},
series = {ACAI'21}
}
```
|
rohitsroch/hybrid_hbh_t5-small_ami_sum | 7e5d23b69adbee43090733a412ed96d8a5129604 | 2022-06-12T23:23:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:ami",
"transformers",
"dialogue-summarization",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | rohitsroch | null | rohitsroch/hybrid_hbh_t5-small_ami_sum | 0 | null | transformers | 35,951 | ---
language:
- en
license: apache-2.0
tags:
- dialogue-summarization
model_index:
- name: hybrid_hbh_t5-small_ami_sum
results:
- task:
name: Summarization
type: summarization
datasets:
- ami
---
## Paper
## [Domain Adapted Abstractive Summarization of Dialogue using Transfer Learning](https://dl.acm.org/doi/10.1145/3508546.3508640)
Authors: *Rohit Sroch*
## Abstract
Recently, the abstractive dialogue summarization task has been gaining a lot of attention from researchers. Also, unlike news articles and documents with well-structured text, dialogue differs in the sense that it often comes from two or more interlocutors, exchanging information with each other and having an inherent hierarchical structure based on the sequence of utterances by different speakers. This paper proposes a simple but effective hybrid approach that consists of two modules and uses transfer learning by leveraging pretrained language models (PLMs) to generate an abstractive summary. The first module highlights important utterances, capturing the utterance level relationship by adapting an auto-encoding model like BERT based on the unsupervised or supervised method. And then, the second module generates a concise abstractive summary by adapting encoder-decoder models like T5, BART, and PEGASUS. Experiment results on benchmark datasets show that our approach achieves a state-of-the-art performance by adapting to dialogue scenarios and can also be helpful in low-resource settings for domain adaptation.
*Rohit Sroch. 2021. Domain Adapted Abstractive Summarization of Dialogue using Transfer Learning. In 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence (ACAI'21). Association for Computing Machinery, New York, NY, USA, Article 94, 1–6. https://doi.org/10.1145/3508546.3508640*
# hybrid_hbh_t5-small_ami_sum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the AMI dataset for the dialogue summarization task.
## Model description
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-4
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50.0
- label_smoothing_factor: 0.1
### Results on Test Set
- predict_gen_len = 329.2
- predict_rouge1 = **48.7673**
- predict_rouge2 = **18.1832**
- predict_rougeL = **26.1713**
- predict_rougeLsum = **46.8434**
- predict_samples = 20
- predict_samples_per_second = 1.098
- predict_steps_per_second = 0.274
### Framework versions
- Transformers>=4.8.0
- Pytorch>=1.6.0
- Datasets>=1.10.2
- Tokenizers>=0.10.3
If you use this model, please cite the following paper:
```
@inproceedings{10.1145/3508546.3508640,
author = {Sroch, Rohit},
title = {Domain Adapted Abstractive Summarization of Dialogue Using Transfer Learning},
year = {2021},
isbn = {9781450385053},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3508546.3508640},
doi = {10.1145/3508546.3508640},
articleno = {94},
numpages = {6},
keywords = {encoder-decoder, T5, abstractive summary, PEGASUS, BART, dialogue summarization, PLMs, BERT},
location = {Sanya, China},
series = {ACAI'21}
}
``` |
romuNoob/Mine | cf21154f5f446d69a44342d21fc24edf6f9efcf4 | 2021-12-16T11:39:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | romuNoob | null | romuNoob/Mine | 0 | null | transformers | 35,952 | ---
tags:
- conversational
---
# mine
|
romuNoob/test | 2877c246abbc65f388e1fab203d39991abe105f8 | 2021-12-16T16:15:03.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | romuNoob | null | romuNoob/test | 0 | null | transformers | 35,953 | ---
tags:
- conversational
---
# mine
|
ronanki/ml_mpnet_768_MNR_10 | 4b1c04c9d4408450df0095ad02e84a0d0f8e0a67 | 2022-02-22T18:14:36.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ronanki | null | ronanki/ml_mpnet_768_MNR_10 | 0 | null | sentence-transformers | 35,954 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/ml_mpnet_768_MNR_10
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/ml_mpnet_768_MNR_10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/ml_mpnet_768_MNR_10')
model = AutoModel.from_pretrained('ronanki/ml_mpnet_768_MNR_10')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/ml_mpnet_768_MNR_10)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 29 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ronanki/ml_use_512_MNR_10 | d2aab0286074eb9c70ed02b94459607d7d2a22ec | 2022-02-22T18:12:25.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | ronanki | null | ronanki/ml_use_512_MNR_10 | 0 | null | sentence-transformers | 35,955 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ronanki/ml_use_512_MNR_10
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/ml_use_512_MNR_10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/ml_use_512_MNR_10)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 29 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ronanki/xlmr_02-02-2022 | 57a80b46629a8fd98462dda8780ecd261d43d5a6 | 2022-01-03T13:48:37.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ronanki | null | ronanki/xlmr_02-02-2022 | 0 | null | sentence-transformers | 35,956 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/xlmr_02-02-2022
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/xlmr_02-02-2022')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/xlmr_02-02-2022')
model = AutoModel.from_pretrained('ronanki/xlmr_02-02-2022')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/xlmr_02-02-2022)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 160 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
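For readers reproducing a comparable run, these parameters map roughly onto the following sentence-transformers training sketch. The base checkpoint and the triplet examples below are placeholders, not taken from this card.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("xlm-roberta-base")  # assumed base; the card does not name it

# Placeholder (anchor, positive, negative) triplets.
train_examples = [InputExample(texts=["anchor text", "related text", "unrelated text"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(model=model, triplet_margin=5)  # Euclidean distance by default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    warmup_steps=16,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```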
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ronanki/xlmr_17-01-2022_v3 | f01d3a3afd44c78049b418bdef158a4ec1ded508 | 2022-01-17T20:34:20.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ronanki | null | ronanki/xlmr_17-01-2022_v3 | 0 | null | sentence-transformers | 35,957 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/xlmr_17-01-2022_v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/xlmr_17-01-2022_v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/xlmr_17-01-2022_v3')
model = AutoModel.from_pretrained('ronanki/xlmr_17-01-2022_v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/xlmr_17-01-2022_v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
rossanez/t5-small-finetuned-de-en-nofp16 | 47dbe88ac8e7db6b177cd910ec5669e84a1dd2b4 | 2021-12-04T13:59:26.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt14",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rossanez | null | rossanez/t5-small-finetuned-de-en-nofp16 | 0 | null | transformers | 35,958 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt14
metrics:
- bleu
model-index:
- name: t5-small-finetuned-de-en-nofp16
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt14
type: wmt14
args: de-en
metrics:
- name: Bleu
type: bleu
value: 9.5801
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-nofp16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1460
- Bleu: 9.5801
- Gen Len: 17.333
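Since the card does not yet include a usage example, here is a minimal inference sketch. The T5 task prefix and the example sentence are assumptions, as the exact preprocessing used during fine-tuning is not documented here.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "rossanez/t5-small-finetuned-de-en-nofp16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# T5 checkpoints are usually prompted with a task prefix; this one is assumed.
text = "translate German to English: Das Haus ist wunderbar."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```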
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1899 | 9.4821 | 17.312 |
| No log | 2.0 | 376 | 2.1986 | 9.5705 | 17.3853 |
| 1.2118 | 3.0 | 564 | 2.1933 | 9.448 | 17.3293 |
| 1.2118 | 4.0 | 752 | 2.1607 | 9.563 | 17.336 |
| 1.2118 | 5.0 | 940 | 2.1460 | 9.5801 | 17.333 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
royeis/T5-FlowNLG-Planner | fc45f8c2f98bdb5690f5f5f40280f0306a04db8f | 2021-12-26T17:35:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | royeis | null | royeis/T5-FlowNLG-Planner | 0 | null | transformers | 35,959 | Entry not found |
royeis/T5-FlowNLG-Realizer | 00618656582f50e53442b6400a128bf536e91116 | 2021-12-26T17:28:10.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | royeis | null | royeis/T5-FlowNLG-Realizer | 0 | null | transformers | 35,960 | Entry not found |
rpeng35/DialoGPT-small-erenyeager | 4ef76067eb8f7311da5fa9d5e8743262b48bcec0 | 2021-08-27T22:40:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rpeng35 | null | rpeng35/DialoGPT-small-erenyeager | 0 | null | transformers | 35,961 | ---
tags:
- conversational
---
# Eren Yeager DialoGPT Model |
rpv/distilbert-base-uncased-finetuned-squad | c0ffad5c963ca4bfb1f05a2b9e81e4c1da2f0b3d | 2022-01-29T15:44:17.000Z | [
"pytorch",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | rpv | null | rpv/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 35,962 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
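As a starting point, the fine-tuned checkpoint can be used for extractive question answering; a minimal sketch with the `question-answering` pipeline is shown below (the question and context are illustrative only).
```python
from transformers import pipeline

qa = pipeline("question-answering", model="rpv/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```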
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
rtoguchi/t5-small-finetuned-en-to-ro-fp16_off | ef08c61810ae7e4128c89cfbc3dcebb992cd06f1 | 2021-12-03T13:18:24.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | rtoguchi | null | rtoguchi/t5-small-finetuned-en-to-ro-fp16_off | 0 | null | transformers | 35,963 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: t5-small-finetuned-en-to-ro-fp16_off
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3056
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-fp16_off
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4078
- Bleu: 7.3056
- Gen Len: 18.2556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6037 | 1.0 | 7629 | 1.4078 | 7.3056 | 18.2556 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ruselkomp/distilbert-base-multilingual-cased-finetuned-squad | d321023866d51fdb272ec8dff35caa25c747d7f2 | 2021-12-20T16:14:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/distilbert-base-multilingual-cased-finetuned-squad | 0 | null | transformers | 35,964 | Entry not found |
ruselkomp/sbert_large_nlu_ru-finetuned-squad-full | 2701b2bd2f1212b469213484984b644275fc0226 | 2021-12-22T18:06:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/sbert_large_nlu_ru-finetuned-squad-full | 0 | null | transformers | 35,965 | Entry not found |
ruselkomp/sbert_large_nlu_ru-finetuned-squad | ff0438f601cdc24a2cb47496eb2bb57a10fb84da | 2021-12-22T12:15:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ruselkomp | null | ruselkomp/sbert_large_nlu_ru-finetuned-squad | 0 | null | transformers | 35,966 | Entry not found |
rwang97/wav2vec2-base-timit-demo-colab | 3009c10c9287cc4f94becfacb4332ac7b059c86d | 2021-12-08T22:12:30.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | rwang97 | null | rwang97/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 35,967 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4473
- Wer: 0.3380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1048 | 4.0 | 500 | 0.4370 | 0.3475 |
| 0.0871 | 8.0 | 1000 | 0.4489 | 0.3405 |
| 0.0651 | 12.0 | 1500 | 0.4473 | 0.3380 |
| 0.0703 | 16.0 | 2000 | 0.4473 | 0.3380 |
| 0.0676 | 20.0 | 2500 | 0.4473 | 0.3380 |
| 0.0714 | 24.0 | 3000 | 0.4473 | 0.3380 |
| 0.0742 | 28.0 | 3500 | 0.4473 | 0.3380 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
s3h/mt5-small-finetuned-gec | 14d6cfd85ef3bf088ec1decaf6f747b74b6153aa | 2021-12-17T21:47:04.000Z | [
"pytorch",
"mt5",
"feature-extraction",
"transformers"
] | feature-extraction | false | s3h | null | s3h/mt5-small-finetuned-gec | 0 | null | transformers | 35,968 | Entry not found |
s3h/mt5-small-finetuned-src-to-trg | 087e1b5f31b9907f71a98adcaabb11ac58b367c1 | 2021-12-18T20:34:32.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | s3h | null | s3h/mt5-small-finetuned-src-to-trg | 0 | null | transformers | 35,969 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-src-to-trg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-src-to-trg
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 40 | nan | 0.1737 | 3.1818 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.6.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
saattrupdan/xlmr-base-texas-squad-is | 097e69dc7b77b84d5f37b4ac4d41e3d5d26d42a2 | 2022-01-31T21:28:56.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | saattrupdan | null | saattrupdan/xlmr-base-texas-squad-is | 0 | null | transformers | 35,970 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr-base-texas-squad-is
results: []
widget:
- text: "Hvenær var Halldór Laxness í menntaskóla ?"
context: "Halldór Laxness ( Halldór Kiljan ) fæddist í Reykjavík 23. apríl árið 1902 og átti í fyrstu heima við Laugaveg en árið 1905 settist fjölskyldan að í Laxnesi í Mosfellssveit . Þar ólst Halldór upp en sótti skóla í Reykjavík á unglingsárum . Ungur hélt hann síðan utan og var langdvölum erlendis um árabil – í ýmsum Evrópulöndum og síðar í Ameríku . Þegar hann var heima bjó hann í Reykjavík þar til hann og kona hans , Auður Sveinsdóttir , byggðu sér húsið Gljúfrastein í Mosfellssveit og fluttu þangað árið 1945 . Þar var heimili þeirra alla tíð síðan og þar er nú safn til minningar um þau . Halldór lést 8. febrúar 1998 . Skólaganga Halldórs varð ekki löng . Árið 1918 hóf hann nám við Menntaskólann í Reykjavík en hafði lítinn tíma til að læra , enda var hann að skrifa skáldsögu , Barn náttúrunnar , sem kom út haustið 1919 – þá þegar var höfundurinn ungi farinn af landi brott . Sagan vakti þó nokkra athygli og í Alþýðublaðinu sagði m.a. : „ Og hver veit nema að Halldór frá Laxnesi eigi eftir að verða óskabarn íslensku þjóðarinnar . “ Upp frá þessu sendi Halldór frá sér bók nánast á hverju ári , stundum fleiri en eina , í yfir sex áratugi . Afköst hans voru með eindæmum ; hann skrifaði fjölda skáldsagna , sumar í nokkrum hlutum , leikrit , kvæði , smásagnasöfn og endurminningabækur og gaf auk þess út mörg greinasöfn og ritgerðir . Bækurnar eru fjölbreyttar en eiga það sameiginlegt að vera skrifaðar af einstakri stílgáfu , djúpum mannskilningi og víðtækri þekkingu á sögu og samfélagi . Þar birtast oft afgerandi skoðanir á þjóðfélagsmálum og sögupersónur eru margar einkar eftirminnilegar ; tilsvör þeirra og lunderni hafa orðið samofin þjóðarsálinni . Þekktustu verk Halldórs eru eflaust skáldsögurnar stóru og rismiklu , s.s. Salka Valka , Sjálfstætt fólk , Heimsljós , Íslandsklukkan og Gerpla , og raunar mætti telja upp mun fleiri ; Kvæðabók hans er í uppáhaldi hjá mörgum sem og minningabækurnar sem hann skrifaði á efri árum um æskuár sín ; af þekktum greinasöfnum og ritgerðum má nefna Alþýðubókina og Skáldatíma . Mikið hefur verið skrifað um verk og ævi skáldsins , en hér skal aðeins bent á ítarlega frásögn og greiningu Halldórs Guðmundssonar í bókinni Halldór Laxness – ævisaga ."
---
# TExAS-SQuAD-is
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-is dataset.
It achieves the following results on the evaluation set:
- Exact match: 56.91%
- F1-score: 59.93%
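A minimal extractive-QA sketch, decoding the answer span manually, is shown below; the question/context pair is a short illustrative example, not part of the evaluation data.
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "saattrupdan/xlmr-base-texas-squad-is"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Hvenær fæddist Halldór Laxness?"
context = "Halldór Laxness fæddist í Reykjavík 23. apríl árið 1902."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: take the most likely start and end positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```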
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1458 | 1.0 | 4219 | 1.8892 |
| 1.9202 | 2.0 | 8438 | 1.8566 |
| 1.7377 | 3.0 | 12657 | 1.8688 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
saburbutt/albert_xxlarge_tweetqa | 7150327493a59e9385599ba52d5c5e718f3bc925 | 2021-04-13T22:33:28.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saburbutt | null | saburbutt/albert_xxlarge_tweetqa | 0 | null | transformers | 35,971 | |
saburbutt/roberta_large_tweetqa | 5271225a49afb4db53c4eb3f62660bf93a80fc4a | 2021-05-20T20:01:21.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saburbutt | null | saburbutt/roberta_large_tweetqa | 0 | null | transformers | 35,972 | Entry not found |
saburbutt/xlmroberta_large_tweetqa | 6bf8fdbed87bc11df8f0540a6152c4648bc72ed6 | 2020-11-16T01:21:38.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saburbutt | null | saburbutt/xlmroberta_large_tweetqa | 0 | null | transformers | 35,973 | Entry not found |
safik/dummy-model | b0fa95425973cbb4c56b4a80b054b948c736f75a | 2022-02-12T15:48:48.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | safik | null | safik/dummy-model | 0 | null | transformers | 35,974 | Entry not found |
saibo/blank_bert_uncased_L-2_H-128_A-2 | 70fcee235e0ca2aa79649a534e9c04b7f3b8e948 | 2021-12-13T09:26:11.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | saibo | null | saibo/blank_bert_uncased_L-2_H-128_A-2 | 0 | null | transformers | 35,975 | The weights of this model are randomly initialized, which can be particularly useful when we aim to train a language model from scratch or to benchmark the effect of pretraining.
Note that the tokenizer of this random model is the same as the original pretrained model's: producing a random tokenizer is not a trivial task, and it would be less meaningful than randomizing the weights.
A debatable advantage of pulling this model from Hugging Face is that it avoids having to fix a random seed in order to obtain the same random initialization every time.
The code to obtain such a random model:
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# @Filename : random_model
# @Date : 2021-12-13-10-08
import os
from transformers import AutoModel, AutoTokenizer
def auto_name_blank_model(model_url:str):
original_model_name:str = os.path.basename(model_url)
return "blank_"+original_model_name
pretrained_model_url="google/bert_uncased_L-2_H-128_A-2"
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_url)
model = AutoModel.from_pretrained(pretrained_model_url)
model.init_weights()
new_repo_name:str = auto_name_blank_model(pretrained_model_url)
model.push_to_hub(new_repo_name)
tokenizer.push_to_hub(new_repo_name)
# uploading files to an existing repo will overwrite the files without prompt.
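# Illustrative follow-up (an assumption, not part of the original script): once uploaded,
# the blank checkpoint can be pulled back like any other model for benchmarking.
blank_model = AutoModel.from_pretrained("saibo/blank_bert_uncased_L-2_H-128_A-2")
blank_tokenizer = AutoTokenizer.from_pretrained("saibo/blank_bert_uncased_L-2_H-128_A-2")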
``` |
sail/poolformer_s24 | ff108f65a6511d2bacead81d14792d4a954b1999 | 2022-04-08T07:48:50.000Z | [
"pytorch",
"poolformer",
"image-classification",
"dataset:imagenet",
"arxiv:2111.11418",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | sail | null | sail/poolformer_s24 | 0 | null | transformers | 35,976 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# PoolFormer (S24 model)
PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
## Model description
PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator, pooling.
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
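To make the idea concrete, here is a schematic PyTorch sketch of the pooling token mixer (simplified from the official implementation; layer norms, channel MLPs and residual scaling are omitted).
```python
import torch.nn as nn

class PoolingTokenMixer(nn.Module):
    """Replaces self-attention: average pooling minus the identity."""

    def __init__(self, pool_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(
            pool_size, stride=1, padding=pool_size // 2, count_include_pad=False
        )

    def forward(self, x):  # x: (batch, channels, height, width)
        # Subtracting x keeps only the "mixing" contribution, as described in the paper.
        return self.pool(x) - x
```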
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_s24')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_s24')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The poolformer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
### Pretraining
The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| PoolFormer-S12 | 77.2 | 12M | https://huggingface.co/sail/poolformer_s12 |
| **PoolFormer-S24** | **80.3** | **21M** | **https://huggingface.co/sail/poolformer_s24** |
| PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 |
| PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 |
| PoolFormer-M48 | 82.5 | 73M | https://huggingface.co/sail/poolformer_m48 |
### BibTeX entry and citation info
```bibtex
@article{yu2021metaformer,
title={MetaFormer is Actually What You Need for Vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
journal={arXiv preprint arXiv:2111.11418},
year={2021}
}
``` |
sakai026/Chizuru | ef4de0e394bd0eac4ffd7a46adaa3578b1cf7169 | 2022-02-07T21:40:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sakai026 | null | sakai026/Chizuru | 0 | null | transformers | 35,977 | ---
tags:
- conversational
---
# Chizuru Ichinose GPT-Model |
salti/wav2vec2-large-xlsr-arabic-common_voice-10_epochs | fb12feb890239acd0e048473966221f6cc26be88 | 2021-05-19T13:38:15.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"model-index"
] | automatic-speech-recognition | false | salti | null | salti/wav2vec2-large-xlsr-arabic-common_voice-10_epochs | 0 | null | transformers | 35,978 | ---
model-index:
- name: wav2vec2-large-xlsr-arabic-common_voice-10_epochs
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-arabic-common_voice-10_epochs
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3581
- Wer: 0.4555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1701 | 0.9 | 400 | 3.1599 | 1.0 |
| 0.8933 | 1.8 | 800 | 0.7198 | 0.7877 |
| 0.5849 | 2.7 | 1200 | 0.5046 | 0.6253 |
| 0.3858 | 3.6 | 1600 | 0.4247 | 0.5561 |
| 0.3083 | 4.49 | 2000 | 0.4026 | 0.5251 |
| 0.2556 | 5.39 | 2400 | 0.4010 | 0.5051 |
| 0.2221 | 6.29 | 2800 | 0.3765 | 0.4861 |
| 0.2026 | 7.19 | 3200 | 0.3652 | 0.4794 |
| 0.1996 | 8.09 | 3600 | 0.3627 | 0.4660 |
| 0.1755 | 8.99 | 4000 | 0.3582 | 0.4619 |
| 0.1697 | 9.89 | 4400 | 0.3581 | 0.4555 |
### Framework versions
- Transformers 4.6.0
- Pytorch 1.8.1+cu102
- Datasets 1.6.2
- Tokenizers 0.10.2
|
samantharhay/wav2vec2-base-libir-zenodo | 6a7698b36780aa901ad510f0790ee45d71cbc8ba | 2021-11-22T19:29:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | samantharhay | null | samantharhay/wav2vec2-base-libir-zenodo | 0 | null | transformers | 35,979 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-libir-zenodo
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-libir-zenodo
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4238
- Wer: 0.4336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.053 | 1.0 | 31 | 3.1494 | 0.7345 |
| 2.9742 | 2.0 | 62 | 3.0527 | 0.7257 |
| 2.9139 | 3.0 | 93 | 2.8808 | 0.7257 |
| 2.6586 | 4.0 | 124 | 2.6648 | 0.6726 |
| 2.7117 | 5.0 | 155 | 2.4695 | 0.6372 |
| 2.5173 | 6.0 | 186 | 2.3087 | 0.6195 |
| 2.3665 | 7.0 | 217 | 2.2745 | 0.6018 |
| 2.1276 | 8.0 | 248 | 2.2180 | 0.5752 |
| 2.1624 | 9.0 | 279 | 2.1311 | 0.5752 |
| 2.0312 | 10.0 | 310 | 2.0358 | 0.5575 |
| 2.0652 | 11.0 | 341 | 1.9146 | 0.5310 |
| 1.7963 | 12.0 | 372 | 1.8346 | 0.5221 |
| 1.6811 | 13.0 | 403 | 1.8351 | 0.5398 |
| 1.5929 | 14.0 | 434 | 1.8256 | 0.4779 |
| 1.6644 | 15.0 | 465 | 1.7572 | 0.4779 |
| 1.5411 | 16.0 | 496 | 1.8740 | 0.4779 |
| 1.4027 | 17.0 | 527 | 1.5143 | 0.4779 |
| 1.2634 | 18.0 | 558 | 1.3864 | 0.4867 |
| 1.1053 | 19.0 | 589 | 1.3192 | 0.4425 |
| 1.0517 | 20.0 | 620 | 1.4705 | 0.4602 |
| 1.1033 | 21.0 | 651 | 1.6006 | 0.4956 |
| 0.9992 | 22.0 | 682 | 1.4748 | 0.5044 |
| 0.8987 | 23.0 | 713 | 1.3544 | 0.4867 |
| 0.9656 | 24.0 | 744 | 1.2673 | 0.4336 |
| 0.952 | 25.0 | 775 | 1.3955 | 0.4071 |
| 0.8507 | 26.0 | 806 | 1.3520 | 0.4425 |
| 0.8269 | 27.0 | 837 | 1.8992 | 0.4336 |
| 0.7255 | 28.0 | 868 | 1.9850 | 0.4425 |
| 0.8269 | 29.0 | 899 | 3.0089 | 0.4425 |
| 0.6178 | 30.0 | 930 | 1.4238 | 0.4336 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
samantharhay/wav2vec2-base-timit-demo-colab | 4dee1538d2c0dbf7fd0ad4a5c174eba1e93e3b4c | 2021-11-18T03:30:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | samantharhay | null | samantharhay/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 35,980 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2368
- Wer: 0.8655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.7111 | 4.0 | 500 | 6.1429 | 1.0 |
| 4.8905 | 8.0 | 1000 | 6.0597 | 1.0 |
| 3.4516 | 12.0 | 1500 | 3.0125 | 1.0 |
| 2.9895 | 16.0 | 2000 | 2.9629 | 1.0 |
| 2.9155 | 20.0 | 2500 | 2.4479 | 1.0 |
| 2.3186 | 24.0 | 3000 | 1.5888 | 0.9565 |
| 1.8469 | 28.0 | 3500 | 1.2368 | 0.8655 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
samantharhay/wav2vec2-base-zenodo-test | 09c7dd10bedffc38e9c5eb8bade8d165190f7982 | 2021-11-22T19:11:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | samantharhay | null | samantharhay/wav2vec2-base-zenodo-test | 0 | null | transformers | 35,981 | Entry not found |
samitizerxu/wav2vec2-xls-r-300m-es | 5eaf5c719dc8735e416ff55d972bb002c8ef00ae | 2022-03-24T11:56:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | samitizerxu | null | samitizerxu/wav2vec2-xls-r-300m-es | 0 | null | transformers | 35,982 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- es
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-cls-r-300m-es
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: es
metrics:
- name: Test WER
type: wer
value: 37.37
- name: Test CER
type: cer
value: 7.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 55.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 57.28
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-cls-r-300m-es
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5160
- Wer: 0.4016
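In addition to the evaluation commands further below, a minimal transcription sketch looks like the following; the audio path is a placeholder and the input is assumed to be 16 kHz mono speech.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "samitizerxu/wav2vec2-xls-r-300m-es"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = torchaudio.load("example_es.wav")  # placeholder file
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```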
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1277 | 1.14 | 500 | 2.0259 | 0.9999 |
| 1.4111 | 2.28 | 1000 | 1.1251 | 0.8894 |
| 0.8461 | 3.42 | 1500 | 0.8205 | 0.7244 |
| 0.5042 | 4.57 | 2000 | 0.6116 | 0.5463 |
| 0.3072 | 5.71 | 2500 | 0.5507 | 0.4506 |
| 0.2181 | 6.85 | 3000 | 0.5213 | 0.4177 |
| 0.1608 | 7.99 | 3500 | 0.5161 | 0.4019 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-es --dataset mozilla-foundation/common_voice_7_0 --config es --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-es --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` |
sammy786/wav2vec2-large-xlsr-mongolian | 80c0a7cc621ef4879ef6c39c8177cb61f92988d0 | 2021-04-02T11:36:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mn",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-large-xlsr-mongolian | 0 | null | transformers | 35,983 | ---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Salim Shaikh
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
      args: mn
metrics:
- name: Test WER
type: wer
value: 38.14
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "sammy786/wav2vec2-large-xlsr-mongolian"
device = "cuda"
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]'
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "mn", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Test Result**: 38.14 %
|
sammy786/wav2vec2-xlsr-bashkir | 9503ad11ced5cc0d71db9238a0b94886cb0d63bb | 2022-03-23T18:35:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ba",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-bashkir | 0 | null | transformers | 35,984 | ---
language:
- ba
license: apache-2.0
tags:
- automatic-speech-recognition
- ba
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-bashkir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ba
metrics:
- name: Test WER
type: wer
value: 11.32
- name: Test CER
type: cer
value: 2.34
---
# sammy786/wav2vec2-xlsr-bashkir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ba dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data merged with the other and dev datasets):
- Loss:
- Wer:
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Bashkir (ba) train.tsv, dev.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
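A sketch of how such a split can be built with the 🤗 Datasets library is shown below; it is illustrative only, since the exact preprocessing is not published here (the dev set of Common Voice corresponds to the `validation` split).
```python
from datasets import load_dataset, concatenate_datasets

splits = [
    load_dataset("mozilla-foundation/common_voice_8_0", "ba", split=s, use_auth_token=True)
    for s in ("train", "validation", "other")
]
full = concatenate_datasets(splits)

# 90-10 split; the seed matches the training configuration below.
split = full.train_test_split(test_size=0.1, seed=13)
train_ds, eval_ds = split["train"], split["test"]
```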
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 5.387100 | 1.982867 | 1.000000 |
| 400 | 1.269800 | 0.369958 | 0.545755 |
| 600 | 0.903600 | 0.287705 | 0.465594 |
| 800 | 0.787300 | 0.235142 | 0.417091 |
| 1000 | 0.816300 | 0.206325 | 0.390534 |
| 1200 | 0.700500 | 0.197106 | 0.383987 |
| 1400 | 0.707100 | 0.179855 | 0.381368 |
| 1600 | 0.657800 | 0.181605 | 0.370593 |
| 1800 | 0.647800 | 0.168626 | 0.358767 |
| 2000 | 0.650700 | 0.164833 | 0.351483 |
| 2200 | 0.490900 | 0.168133 | 0.363309 |
| 2400 | 0.431000 | 0.161201 | 0.344350 |
| 2600 | 0.372100 | 0.160254 | 0.338280 |
| 2800 | 0.367500 | 0.150885 | 0.329687 |
| 3000 | 0.351300 | 0.154112 | 0.331392 |
| 3200 | 0.314800 | 0.147147 | 0.326700 |
| 3400 | 0.316800 | 0.142681 | 0.325090 |
| 3600 | 0.313000 | 0.138736 | 0.319553 |
| 3800 | 0.291800 | 0.138166 | 0.315570 |
| 4000 | 0.311300 | 0.135977 | 0.322894 |
| 4200 | 0.304900 | 0.128820 | 0.308627 |
| 4400 | 0.301600 | 0.129475 | 0.307440 |
| 4600 | 0.281800 | 0.131863 | 0.305967 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-bashkir --dataset mozilla-foundation/common_voice_8_0 --config ba --split test
``` |
sammy786/wav2vec2-xlsr-finnish | f692779778d4321cf95846c57be11e03b4c126aa | 2022-03-23T18:34:11.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-finnish | 0 | null | transformers | 35,985 | ---
language:
- fi
license: apache-2.0
tags:
- automatic-speech-recognition
- fi
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-finnish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: fi
metrics:
- name: Test WER
type: wer
value: 13.72
- name: Test CER
type: cer
value: 2.35
---
# sammy786/wav2vec2-xlsr-finnish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - fi dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data merged with the other and dev datasets):
- Loss: 8.7555
- Wer: 23.0231
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common voice Finnish train.tsv, dev.tsv, invalidated.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 4.253700 | 0.881733 | 0.967007 |
| 400 | 0.864800 | 0.226977 | 0.420836 |
| 600 | 0.607000 | 0.157473 | 0.343375 |
| 800 | 0.380200 | 0.145640 | 0.302672 |
| 1000 | 0.318400 | 0.128028 | 0.293886 |
| 1200 | 0.261100 | 0.121414 | 0.289941 |
| 1400 | 0.232300 | 0.113451 | 0.279182 |
| 1600 | 0.216600 | 0.113649 | 0.282948 |
| 1800 | 0.202500 | 0.112375 | 0.276134 |
| 2000 | 0.190000 | 0.105725 | 0.273803 |
| 2200 | 0.171000 | 0.109715 | 0.270755 |
| 2400 | 0.156500 | 0.105042 | 0.264300 |
| 2600 | 0.155600 | 0.108337 | 0.260714 |
| 2800 | 0.149100 | 0.112435 | 0.263583 |
| 3000 | 0.145100 | 0.106193 | 0.261969 |
| 3200 | 0.131700 | 0.102860 | 0.251210 |
| 3400 | 0.129100 | 0.096058 | 0.246907 |
| 3600 | 0.121600 | 0.099932 | 0.246369 |
| 3800 | 0.112000 | 0.099041 | 0.244397 |
| 4000 | 0.114100 | 0.101566 | 0.242604 |
| 4200 | 0.111500 | 0.089498 | 0.239197 |
| 4400 | 0.099800 | 0.092835 | 0.240990 |
| 4600 | 0.095300 | 0.093518 | 0.238121 |
| 4800 | 0.094300 | 0.090783 | 0.240631 |
| 5000 | 0.089000 | 0.094046 | 0.238479 |
| 5200 | 0.088000 | 0.089342 | 0.235252 |
| 5400 | 0.083600 | 0.087770 | 0.234535 |
| 5600 | 0.083600 | 0.088804 | 0.234355 |
| 5800 | 0.080300 | 0.090168 | 0.231307 |
| 6000 | 0.078100 | 0.090163 | 0.230949 |
| 6200 | 0.075600 | 0.088876 | 0.232383 |
| 6400 | 0.078700 | 0.087235 | 0.232024 |
| 6600 | 0.074800 | 0.086825 | 0.231486 |
| 6800 | 0.076400 | 0.087308 | 0.231845 |
| 7000 | 0.070700 | 0.087695 | 0.230769 |
| 7200 | 0.075500 | 0.087555 | 0.230231 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-finnish --dataset mozilla-foundation/common_voice_8_0 --config fi --split test
``` |
sammy786/wav2vec2-xlsr-mongolian | b8d154569f158febe9e878743fb4016d6805f569 | 2022-03-23T18:30:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mn",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-mongolian | 0 | null | transformers | 35,986 | ---
language:
- mn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mn
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-mongolian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mn
metrics:
- name: Test WER
type: wer
value: 32.63
- name: Test CER
type: cer
value: 9.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mn
metrics:
- name: Test WER
type: wer
value: 91.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: mn
metrics:
- name: Test WER
type: wer
value: 91.37
---
# sammy786/wav2vec2-xlsr-mongolian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - mn dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data merged with the other and dev datasets):
- Loss: 31.52
- Wer: 34.1522
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Mongolian (mn) train.tsv, dev.tsv and other.tsv
## Training procedure
For creating the train dataset, all possible datasets were appended and 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 4.906200 | 3.012986 | 1.000000 |
| 400 | 1.734600 | 0.704821 | 0.750497 |
| 600 | 1.132100 | 0.496223 | 0.531241 |
| 800 | 0.929300 | 0.468937 | 0.469043 |
| 1000 | 0.772300 | 0.425313 | 0.448168 |
| 1200 | 0.623900 | 0.394633 | 0.414229 |
| 1400 | 0.512400 | 0.369225 | 0.397614 |
| 1600 | 0.439900 | 0.346033 | 0.391650 |
| 1800 | 0.391300 | 0.358454 | 0.379296 |
| 2000 | 0.377000 | 0.346822 | 0.359415 |
| 2200 | 0.347500 | 0.325205 | 0.348481 |
| 2400 | 0.343600 | 0.315233 | 0.344078 |
| 2600 | 0.328000 | 0.308826 | 0.341522 |
| 2800 | 0.358200 | 0.331786 | 0.343084 |
| 3000 | 0.417200 | 0.370051 | 0.356433 |
| 3200 | 0.685300 | 0.595438 | 0.407413 |
| 3400 | 0.764100 | 0.643449 | 0.359983 |
| 3600 | 0.717100 | 0.505033 | 0.371911 |
| 3800 | 0.620900 | 0.464138 | 0.369071 |
| 4000 | 0.590700 | 0.445417 | 0.363249 |
| 4200 | 0.561000 | 0.440727 | 0.360267 |
| 4400 | 0.550600 | 0.447122 | 0.360267 |
| 4600 | 0.562100 | 0.457020 | 0.359841 |
| 4800 | 0.578800 | 0.470477 | 0.360551 |
| 5000 | 0.580400 | 0.481413 | 0.362539 |
| 5200 | 0.605500 | 0.485240 | 0.362823 |
| 5400 | 0.582900 | 0.486654 | 0.362965 |
| 5600 | 0.593900 | 0.486715 | 0.363107 |
| 5800 | 0.590900 | 0.486716 | 0.363107 |
| 6000 | 0.587200 | 0.486716 | 0.363107 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-mongolian --dataset mozilla-foundation/common_voice_8_0 --config mn --split test
``` |
samuelssonm/DialoGPT-small-rick | 9d59ae2d6842bfd8665252d2002728ea37491518 | 2021-11-29T23:30:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | samuelssonm | null | samuelssonm/DialoGPT-small-rick | 0 | null | transformers | 35,987 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
sangrimlee/mt5-small-ans-ext | a9f4593a709157e972e0507b9675e57de9b1b13d | 2021-03-03T12:14:59.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | sangrimlee | null | sangrimlee/mt5-small-ans-ext | 0 | null | transformers | 35,988 | Entry not found |
sanjanareddy226/JakeBot | 0fb02eb7114fa64cd4fce8dc8fd2ffe76eaffe03 | 2021-10-23T06:25:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sanjanareddy226 | null | sanjanareddy226/JakeBot | 0 | null | transformers | 35,989 | ---
tags:
- conversational
---
# Jake Peralta bot |
sankalpjha1/mr.bot_haary | 9222616ce458968c6de7907f6b667d89d7ae16cf | 2021-10-21T07:21:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | sankalpjha1 | null | sankalpjha1/mr.bot_haary | 0 | null | transformers | 35,990 | ---
tags:
- conversational
---
# Mr.bot_haary |
santhoshkolloju/ques_gen | 80da3324ff407638124b2a87885da597a8339a13 | 2020-07-07T10:36:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | santhoshkolloju | null | santhoshkolloju/ques_gen | 0 | null | transformers | 35,991 | Entry not found |
saraks/cuad-distil-document_name-cased-08-31-v1 | 93ad9639e25e7856474702744bd01c3b9bda7fe0 | 2021-08-31T16:19:23.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saraks | null | saraks/cuad-distil-document_name-cased-08-31-v1 | 0 | null | transformers | 35,992 | Entry not found |
sarim/myModel | 47edacd4d65b1d76c6375e626b74533b035f2cc9 | 2021-03-20T12:53:37.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | sarim | null | sarim/myModel | 0 | null | transformers | 35,993 | first commit |
sarnikowski/convbert-small-da-cased | 6086e125654b7bba33019045a4a0a5d919e1101e | 2021-03-01T22:15:15.000Z | [
"pytorch",
"tf",
"convbert",
"da",
"arxiv:2008.02496",
"transformers",
"license:cc-by-4.0"
] | null | false | sarnikowski | null | sarnikowski/convbert-small-da-cased | 0 | null | transformers | 35,994 | ---
language: da
license: cc-by-4.0
---
# Danish ConvBERT small (cased)
[ConvBERT](https://arxiv.org/abs/2008.02496) model pretrained on a custom Danish corpus (~17.5gb).
For details regarding data sources and training procedure, along with benchmarks on downstream tasks, go to: https://github.com/sarnikowski/danish_transformers
## Usage
```python
from transformers import ConvBertTokenizer, ConvBertModel
tokenizer = ConvBertTokenizer.from_pretrained("sarnikowski/convbert-small-da-cased")
model = ConvBertModel.from_pretrained("sarnikowski/convbert-small-da-cased")
```
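Continuing from the snippet above, sentence-level features can then be extracted like this (the Danish example sentence is illustrative):
```python
import torch

inputs = tokenizer("Der bor mange mennesker i København.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

token_embeddings = outputs.last_hidden_state       # (1, seq_len, hidden_size)
sentence_embedding = token_embeddings.mean(dim=1)  # simple mean pooling
```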
## Questions?
If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to [email protected]
|
sberbank-ai/RuDOLPH-350M | 568d1ea5af6e110507d9a38bac8c6979755d4727 | 2022-02-04T16:54:03.000Z | [
"pytorch"
] | null | false | sberbank-ai | null | sberbank-ai/RuDOLPH-350M | 0 | 9 | null | 35,995 | # RuDOLPH-350M (Medium)
RuDOLPH: One Hyper-Modal Transformer can be creative as DALL-E and smart as CLIP
<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/rudolph-generated.png" height="60" border="2"/>
Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
* Task: `text2image generation`; `self reranking`; `text ranking`; `image ranking`; `image2text generation`; `zero-shot image classification`, `text2text generation`;
* Language: `Russian`
* Type: `encoder-decoder`
* Num Parameters: `350M`
* Training Data Volume: `156 million text-image pairs`
# Model Description
**Ru**ssian **D**iffusion **O**n **L**anguage **P**icture **H**yper-modality (RuDOLPH) 350M is a fast and light text-image-text transformer (350M, GPT-3-like) designed for quick and easy fine-tuning on a wide range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of hyper-modality transformers.
*(!!!) Hyper-modality means generalized multi-modal: for example, a model that combines two multi-modal parts, text-to-image and image-to-text, becomes a text-and-image hyper-modality model.*
# Sparse Attention Mask
The primary proposed method is to modify the sparse transformer's attention mask to better control multiple modalities and take them to the next level with "hyper-modality". It allows the model to compute transitions between modalities in both directions, unlike the similar DALL-E Transformer, which used only one direction, "text to image". The proposed "image to right text" direction is achieved by extending the sparse attention mask to the right for auto-regressive text generation conditioned on the image, without attention to the left text.
<img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/attention_masks.png" height="40" border="2"/>
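To make the idea concrete, here is a minimal, illustrative sketch of how such a combined mask could be assembled: a causal mask over `[left text | image | right text]`, where right-text tokens may attend to the image and to previously generated right-text tokens but not to the left text. The sequence lengths, block order, and helper name are our assumptions; this is not the released RuDOLPH implementation.
```python
import torch

def rudolph_style_mask(n_left_text: int, n_image: int, n_right_text: int) -> torch.Tensor:
    """Illustrative attention mask for [left text | image | right text].

    1 = attention allowed, 0 = blocked. Simplified dense sketch of the idea
    described above, not the actual sparse implementation.
    """
    n = n_left_text + n_image + n_right_text
    mask = torch.tril(torch.ones(n, n))   # start from a causal (lower-triangular) mask

    # "image to right text": right-text tokens attend to image tokens and to
    # previously generated right-text tokens, but NOT to the left text.
    right_start = n_left_text + n_image
    mask[right_start:, :n_left_text] = 0  # block attention from right text to left text
    return mask

mask = rudolph_style_mask(n_left_text=4, n_image=8, n_right_text=4)
print(mask)
```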
# Authors
+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Michael Konstantinov: [Mishin Learning](https://t.me/mishin_learning), [Transformer Community](https://transformer.community/)
|
sberbank-ai/rudalle-Emojich | 0b1701f27f3b0f9f7912a75b4b45eea9c6e92afe | 2021-12-02T11:06:48.000Z | [
"pytorch"
] | null | false | sberbank-ai | null | sberbank-ai/rudalle-Emojich | 0 | 7 | null | 35,996 | # Emojich

### generate emojis from text
Model was trained by [Sber AI](https://github.com/sberbank-ai)
* Task: `text2image generation`
* Num Parameters: `1.3 B`
* Training Data Volume: `120 million text-image pairs` & [`2749 text-emoji pairs`](https://www.kaggle.com/shonenkov/russian-emoji)
[](https://telegram.me/addstickers/SberAI_ruDALLE)
### Model Description
😋 Emojich is a 1.3-billion-parameter model from the GPT-3-like family; it generates emoji-style images with the brain of ◾ Malevich.
### Fine-tuning stage:
The main goal of fine-tuning is to keep the generalization of the [ruDALL-E Malevich (XL)](https://huggingface.co/sberbank-ai/rudalle-Malevich)
model while adapting it to text-to-emoji tasks. ruDALL-E Malevich is a big pretrained multi-modal transformer that uses images and texts.
Freezing the feedforward and self-attention layers of a pretrained transformer has been shown to perform well when switching between modalities.
At the same time, the model can easily over-fit the text modality and lose generalization.
To deal with this problem, the coefficient for the image codebook part of the weighted cross-entropy loss is increased to 10^3.
Full version of training code is available on Kaggle: [](https://www.kaggle.com/shonenkov/emojich-rudall-e)
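As an illustration of this recipe only, the sketch below freezes attention and feedforward parameters and up-weights the image-token part of the cross-entropy loss. The parameter-name matching, sequence split, and placement of the 10^3 weight are assumptions for the sketch, not the exact Kaggle training code.
```python
import torch
import torch.nn.functional as F

IMAGE_LOSS_WEIGHT = 1e3  # coefficient described above

def freeze_transformer_blocks(model):
    # Freeze self-attention and feedforward weights; leave embeddings/heads trainable.
    # The "attn"/"mlp" substrings are an assumed naming convention.
    for name, param in model.named_parameters():
        if "attn" in name or "mlp" in name:
            param.requires_grad = False

def weighted_lm_loss(logits, targets, n_text_tokens):
    # logits: (batch, seq_len, vocab), targets: (batch, seq_len) of token ids
    loss = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none"
    )                                                   # per-token loss, shape (batch, seq_len)
    weights = torch.ones_like(loss)
    weights[:, n_text_tokens:] = IMAGE_LOSS_WEIGHT      # up-weight image codebook positions
    return (loss * weights).mean()
```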
### Examples of generated emojis
All examples are generated automatically (without manual cherry-picking) with hyper-parameters:
seed 42, batch size 16, top-k 2048, top-p 0.995, temperature 1.0, GPU A100.
To get better generated emojis, one should use more attempts (~512) and select the best one manually.
*Remember, the great art makers became "great" after creating just one masterpiece.*
 |
sberbank-ai/rudalle-Malevich | 87ef2bc10f02da71b61003aeed15cc6bbc0557cf | 2022-01-11T02:20:10.000Z | [
"pytorch",
"ru",
"en",
"PyTorch",
"Transformers",
"text-to-image"
] | text-to-image | false | sberbank-ai | null | sberbank-ai/rudalle-Malevich | 0 | 24 | null | 35,997 | ---
language:
- ru
- en
pipeline_tag: text-to-image
tags:
- PyTorch
- Transformers
thumbnail: "https://github.com/sberbank-ai/ru-dalle"
---
# ruDALL-E Malevich (XL)
## Generate images from text
<img style="text-align:center; display:block;" src="https://huggingface.co/sberbank-ai/rudalle-Malevich/resolve/main/dalle-malevich.jpg" width="200">
"Avocado painting in the style of Malevich"
* [Technical Report (Russian)](https://habr.com/ru/company/sberbank/blog/586926)
* [Demo](https://rudalle.ru)
Model was trained by [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.
* Task: `text2image generation`
* Type: `encoder-decoder`
* Num Parameters: `1.3 B`
* Training Data Volume: `120 million text-image pairs`
### Model Description
This is a 1.3 billion parameter model for Russian, recreating OpenAI's [DALL·E](https://openai.com/blog/dall-e/), a model capable of generating arbitrary images from a text prompt that describes the desired result.
The generation pipeline includes ruDALL-E, ruCLIP for ranking results, and a super-resolution model.
You can use automatic translation into Russian to create desired images with ruDALL-E.
### How to Use
The easiest way to get familiar with the code and the models is to follow the inference notebook we provide in our [github repo](https://github.com/sberbank-ai/ru-dalle).
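For orientation, a condensed sketch of the notebook's flow using the `rudalle` Python package is shown below. Exact function names, arguments, and return values may differ between package versions, so treat this as an assumption and refer to the inference notebook for the authoritative API.
```python
# pip install rudalle   (sketch only; see the inference notebook for the up-to-date API)
from rudalle import get_rudalle_model, get_tokenizer, get_vae
from rudalle.pipelines import generate_images, show

device = "cuda"
dalle = get_rudalle_model("Malevich", pretrained=True, fp16=True, device=device)
tokenizer = get_tokenizer()
vae = get_vae().to(device)

text = "авокадо в стиле Малевича"  # "avocado in the style of Malevich"
# generate_images is assumed to return (PIL images, scores); check your rudalle version
images, _ = generate_images(text, tokenizer, dalle, vae,
                            top_k=2048, top_p=0.995, images_num=4)
show(images, 4)
```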
## Motivation
One might say that “investigate, master, and train” is our engineering motto. Well, we caught the scent, and today we can say that we created from scratch a complete pipeline for generating images from descriptive textual input written in Russian.
Teams at SberAI, SberDevices, Samara University, AIRI and SberCloud all actively contributed.
We trained two versions of the model, each a different size, and named them after Russia’s great abstractionists: Vasily Kandinsky and Kazimir Malevich.
* ruDALL-E Kandinsky (XXL), with 12 billion parameters
* ruDALL-E Malevich (XL), having 1.3 billion parameters
Some of our models are already freely available:
* ruDALL-E Malevich (XL) [[GitHub](https://github.com/sberbank-ai/ru-dalle), [HuggingFace](https://huggingface.co/sberbank-ai/rudalle-Malevich)]
* Sber VQ-GAN [[GitHub](https://github.com/sberbank-ai/sber-vq-gan), [HuggingFace](https://huggingface.co/sberbank-ai/Sber-VQGAN)]
* ruCLIP Small [[GitHub](https://github.com/sberbank-ai/ru-clip), [HuggingFace](https://huggingface.co/sberbank-ai/ru-clip)]
* Super Resolution (Real ESRGAN) [[GitHub](https://github.com/sberbank-ai/Real-ESRGAN), [HuggingFace](https://huggingface.co/sberbank-ai/Real-ESRGAN)]
The latter two models are included in the pipeline for generating images from text (as you’ll see later on).
The models ruDALL-E Malevich (XL), ruDALL-E Kandinsky (XXL), ruCLIP Small, ruCLIP Large, and Super Resolution (Real ESRGAN) will also soon be available on [DataHub](https://mlspace.aicloud.sbercloud.ru/mlspace/datahub).
Training the ruDALL-E neural networks on the Christofari cluster has become the largest calculation task in Russia:
* ruDALL-E Kandinsky (XXL) was trained for 37 days on the 512 GPU TESLA V100, and then also for 11 more days on the 128 GPU TESLA V100, for a total of 20,352 GPU-days;
* ruDALL-E Malevich (XL) was trained for 8 days on the 128 GPU TESLA V100, and then also for 15 more days on the 192 GPU TESLA V100, for a total of 3,904 GPU-days.
Accordingly, training for both models totalled 24,256 GPU-days.
## Model capabilities
The long-term goal of this research is the creation of multimodal neural networks. They will be able to draw on concepts from a variety of media (from text and visuals at first) in order to better understand the world as a whole.
Image generation might seem like the wrong rabbit hole in our century of big data and search engines. But it actually addresses two important requirements that search is currently unable to cope with:
1. Being able to describe in writing exactly what you’re looking for and getting a completely new image created personally for you.
2. Being able to create, at any time, as many license-free illustrations as you could possibly want.
"Grand Canyon"
<img style="text-align:center; display:block;" src="https://habrastorage.org/webt/kb/sv/ih/kbsvihfsmz3fx5mvitii0seimi0.jpeg" width="800">
"Salvador Dali picture"
<img style="text-align:center; display:block;" src="https://habrastorage.org/webt/r8/nl/oi/r8nloiq-l8j2ckg6pzh2pufsklm.jpeg" width="800">
"An eagle sits in a tree, looking to the side"
<img style="text-align:center; display:block;" src="https://habrastorage.org/r/w1560/getpro/habr/upload_files/10a/19c/fa2/10a19cfa2cc84aa7c8b99820890e908d.png" width="800">
"Elegant living room with green stuffed chairs"
<img style="text-align:center; display:block;" src="https://habrastorage.org/r/w1560/getpro/habr/upload_files/6fe/e69/d7c/6fee69d7c392239d587725799e0e41e4.png" width="800">
“Raccoon with a gun”
<img style="text-align:center; display:block;" src="https://habrastorage.org/r/w1560/getpro/habr/upload_files/3bb/1b8/7c4/3bb1b87c45bf9305cd342ae9900ac245.png" width="800">
“Pretty lake at sunset”
<img style="text-align:center; display:block;" src="https://habrastorage.org/r/w1560/getpro/habr/upload_files/241/781/fe9/241781fe99da510d4d5fea03af635e88.png" width="800">
|
seduerr/pai_comma | 3ba41e36bfbbbce0c1acf9c25b683cbd07f16ac0 | 2021-05-05T14:38:46.000Z | [
"pytorch"
] | null | false | seduerr | null | seduerr/pai_comma | 0 | null | null | 35,998 | Entry not found |
seduerr/pai_con | a76d539afcf9da08138fd7de4354d6e2b5ad7114 | 2021-06-23T14:12:35.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | seduerr | null | seduerr/pai_con | 0 | null | transformers | 35,999 | ‘contrast: ‘ |