modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_10 | 2d93e2d26d7f14d9817c11f74606246a9e934ec7 | 2022-01-05T10:16:39.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_10 | 2 | null | transformers | 22,900 | Entry not found |
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0 | 88e879a2ec8e5602e7f5601410978ff872560e68 | 2022-01-18T03:52:44.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0 | 2 | null | transformers | 22,901 | Entry not found |
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | d20c39830c2363e094f52c54f464683906bbd4fb | 2022-01-05T10:19:27.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | 2 | null | transformers | 22,902 | Entry not found |
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1 | 261cc9bba50dc4aa9c42ba227f696952e7c313d2 | 2022-01-10T21:07:58.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1 | 2 | null | transformers | 22,903 | Entry not found |
AnonymousSub/specter-bert-model | 210d51a593982803cc10a9a3d78a519b4cb6adc4 | 2021-11-05T10:29:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/specter-bert-model | 2 | null | transformers | 22,904 | Entry not found |
AnonymousSub/unsup-consert-base | 48cba7d663ae810c6a810d3b405636102985f4de | 2021-09-04T17:44:20.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/unsup-consert-base | 2 | null | transformers | 22,905 | Entry not found |
AnonymousSub/unsup-consert-base_copy | e148848aec7136eb92fcbed849054f3bbe63757a | 2022-01-23T04:50:37.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/unsup-consert-base_copy | 2 | null | transformers | 22,906 | Entry not found |
AnonymousSub/unsup-consert-base_squad2.0 | 8f7dd7ce2d00166d063001f5c8aa363a367e261b | 2022-01-17T17:35:27.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/unsup-consert-base_squad2.0 | 2 | null | transformers | 22,907 | Entry not found |
AnonymousSub/unsup-consert-papers | abc8a65a542e05dbdc730cf69dca08c3d441ef11 | 2021-10-25T00:23:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/unsup-consert-papers | 2 | null | transformers | 22,908 | Entry not found |
Anorak/nirvana | 430dc9dcd47990c1c96d38f25895e2f7198035bf | 2021-10-17T15:48:15.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:Anorak/autonlp-data-Niravana-test2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | Anorak | null | Anorak/nirvana | 2 | null | transformers | 22,909 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Anorak/autonlp-data-Niravana-test2
co2_eq_emissions: 4.214012748213151
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 20384195
- CO2 Emissions (in grams): 4.214012748213151
## Validation Metrics
- Loss: 1.0120062828063965
- Rouge1: 41.1808
- Rouge2: 26.2564
- RougeL: 31.3106
- RougeLsum: 38.9991
- Gen Len: 58.45
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Anorak/autonlp-Niravana-test2-20384195
``` |
AnthonyNelson/DialoGPT-small-ricksanchez | f6757b250cab82699ada672cff0a3572ad366152 | 2021-09-05T22:27:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AnthonyNelson | null | AnthonyNelson/DialoGPT-small-ricksanchez | 2 | null | transformers | 22,910 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
Apisate/DialoGPT-small-jordan | cbb91cb83b5a48e3d4aa8f96c7eb3b5058821418 | 2021-12-05T18:13:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Apisate | null | Apisate/DialoGPT-small-jordan | 2 | null | transformers | 22,911 | ---
tags:
- conversational
---
# Jordan DialoGPT Model |
ArBert/albert-base-v2-finetuned-ner-gmm-twitter | bdda2ec79b3fd4f0fc70b89814e36645b1caece0 | 2022-02-12T09:26:34.000Z | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/albert-base-v2-finetuned-ner-gmm-twitter | 2 | null | transformers | 22,912 | Entry not found |
ArBert/roberta-base-finetuned-ner-kmeans-twitter | eede78cacf892879f008da686402d81d549d1669 | 2022-02-12T12:53:00.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/roberta-base-finetuned-ner-kmeans-twitter | 2 | null | transformers | 22,913 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-kmeans-twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-kmeans-twitter
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Precision: 0.6885
- Recall: 0.7665
- F1: 0.7254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
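As a rough, hedged sketch (not part of the original card), the hyperparameters above map onto `transformers` `TrainingArguments` roughly as follows; the output directory is a hypothetical placeholder:
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-ner-kmeans-twitter",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)
```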
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 |
| No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 |
| 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 |
| 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 |
| 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 |
| 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 |
| 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 |
| 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 |
| 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 |
| 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 |
| 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 |
| 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 |
| 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 |
| 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 |
| 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 |
| 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 |
| 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 |
| 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 |
| 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 |
| 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ArBert/roberta-base-finetuned-ner | 06ce81bfc506df27fd20ddb7fa8561a2ce34402f | 2022-02-03T16:42:50.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ArBert | null | ArBert/roberta-base-finetuned-ner | 2 | null | transformers | 22,914 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0738
- Precision: 0.9232
- Recall: 0.9437
- F1: 0.9333
- Accuracy: 0.9825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1397 | 1.0 | 1368 | 0.0957 | 0.9141 | 0.9048 | 0.9094 | 0.9753 |
| 0.0793 | 2.0 | 2736 | 0.0728 | 0.9274 | 0.9324 | 0.9299 | 0.9811 |
| 0.0499 | 3.0 | 4104 | 0.0738 | 0.9232 | 0.9437 | 0.9333 | 0.9825 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Aran/DialoGPT-medium-harrypotter | fca320589a088c982735b8e0d93521a9ee9320a0 | 2021-11-21T19:35:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Aran | null | Aran/DialoGPT-medium-harrypotter | 2 | null | transformers | 22,915 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Arcktosh/DialoGPT-small-rick | 16228b8755ccfcbd93591fae74ae59764d182867 | 2021-09-03T19:05:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Arcktosh | null | Arcktosh/DialoGPT-small-rick | 2 | null | transformers | 22,916 | ---
tags:
- conversational
---
# Rick DialoGPT Model
|
Ateeb/QA | 00974c5ac9f8ba56d1368747c21d64ba5084b545 | 2021-05-03T11:41:12.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Ateeb | null | Ateeb/QA | 2 | null | transformers | 22,917 | Entry not found |
Augustvember/wokka5 | 9172900f98de94103f79dd3df8050b48d3bf5bbe | 2021-08-08T16:47:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Augustvember | null | Augustvember/wokka5 | 2 | null | transformers | 22,918 | ---
tags:
- conversational
---
# MyAwesomeModel |
AvatarXD/DialoGPT-medium-Blitzo | 5d3d5f6a3f90a06bd468e0eb03dbf042a4bd55f1 | 2021-09-23T23:59:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AvatarXD | null | AvatarXD/DialoGPT-medium-Blitzo | 2 | null | transformers | 22,919 | ---
tags:
- conversational
---
# Blitzo DialoGPT Model |
Awsaf/large-eren | 43eca0d3e80d39480dd9b4983d676627de15a986 | 2021-09-21T14:38:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Awsaf | null | Awsaf/large-eren | 2 | null | transformers | 22,920 | ---
tags:
- conversational
---
# Eren Yeager Model |
Aybars/ModelOnTquad | 363b905632e8c3c07d0e1b684dcfd9cca80679e6 | 2022-02-17T06:52:42.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Aybars | null | Aybars/ModelOnTquad | 2 | null | transformers | 22,921 | Entry not found |
AyushPJ/ai-club-inductions-21-nlp-distilBERT | 467847f143b2680f92edcc9650f40e4de750de0e | 2021-10-20T23:38:45.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | AyushPJ | null | AyushPJ/ai-club-inductions-21-nlp-distilBERT | 2 | null | transformers | 22,922 | ---
tags:
- generated_from_trainer
model-index:
- name: ai-club-inductions-21-nlp-distilBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-club-inductions-21-nlp-distilBERT
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cu110
- Datasets 1.14.0
- Tokenizers 0.10.3
|
AyushPJ/ai-club-inductions-21-nlp-roBERTa | 813707accc930a7b5b9422e39a0c2f6e789fde0e | 2021-10-20T22:33:57.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | AyushPJ | null | AyushPJ/ai-club-inductions-21-nlp-roBERTa | 2 | null | transformers | 22,923 | ---
tags:
- generated_from_trainer
model-index:
- name: ai-club-inductions-21-nlp-roBERTa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-club-inductions-21-nlp-roBERTa
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
|
BSen/wav2vec2-base-timit-demo-colab | 7b9e60d0b3c8c9dc0c7b6d6fd447f2987691e7d3 | 2021-12-02T07:51:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | BSen | null | BSen/wav2vec2-base-timit-demo-colab | 2 | null | transformers | 22,924 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4877
- Wer: 0.4895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6615 | 4.0 | 500 | 1.7423 | 1.0723 |
| 0.8519 | 8.0 | 1000 | 0.4877 | 0.4895 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Baybars/wav2vec2-xls-r-300m-cv8-turkish | 2362365a60811ed6740ec7702b2a28aca8914715 | 2022-03-23T18:34:22.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Baybars | null | Baybars/wav2vec2-xls-r-300m-cv8-turkish | 2 | 0 | transformers | 22,925 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- tr
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4164
- Wer: 0.3098
- Cer: 0.0764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Language Model
An n-gram language model was trained by [mpoyraz](https://huggingface.co/mpoyraz/wav2vec2-xls-r-300m-cv7-turkish) on Turkish Wikipedia articles using KenLM, and the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the ARPA LM and convert it into binary format.
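As a rough sketch (not part of the original card), such a KenLM binary could be combined with the acoustic model for CTC decoding via `pyctcdecode`; the local path `lm.binary` below is an assumption, not a file documented in this repository:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from pyctcdecode import build_ctcdecoder

# Load the fine-tuned acoustic model and its processor.
processor = Wav2Vec2Processor.from_pretrained("Baybars/wav2vec2-xls-r-300m-cv8-turkish")
model = Wav2Vec2ForCTC.from_pretrained("Baybars/wav2vec2-xls-r-300m-cv8-turkish")

# Order the tokenizer vocabulary by token id and build an LM-rescored CTC decoder.
vocab_dict = processor.tokenizer.get_vocab()
labels = sorted(vocab_dict, key=vocab_dict.get)
decoder = build_ctcdecoder(labels, kenlm_model_path="lm.binary")  # hypothetical local LM path
```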
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.6356 | 9.09 | 500 | 0.5055 | 0.5536 | 0.1381 |
| 0.3847 | 18.18 | 1000 | 0.4002 | 0.4247 | 0.1065 |
| 0.3377 | 27.27 | 1500 | 0.4193 | 0.4167 | 0.1078 |
| 0.2175 | 36.36 | 2000 | 0.4351 | 0.3861 | 0.0974 |
| 0.2074 | 45.45 | 2500 | 0.3962 | 0.3622 | 0.0916 |
| 0.159 | 54.55 | 3000 | 0.4062 | 0.3526 | 0.0888 |
| 0.1882 | 63.64 | 3500 | 0.3991 | 0.3445 | 0.0850 |
| 0.1766 | 72.73 | 4000 | 0.4214 | 0.3396 | 0.0847 |
| 0.116 | 81.82 | 4500 | 0.4182 | 0.3265 | 0.0812 |
| 0.0718 | 90.91 | 5000 | 0.4259 | 0.3191 | 0.0781 |
| 0.019 | 100.0 | 5500 | 0.4164 | 0.3098 | 0.0764 |
## Evaluation Commands
Please install the [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation; it is used for Turkish text processing.
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id Baybars/wav2vec2-xls-r-300m-cv8-turkish --dataset mozilla-foundation/common_voice_8_0 --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id Baybars/wav2vec2-xls-r-300m-cv8-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Benicio/t5-small-finetuned-en-to-ru | 6b0c1f6d967cbd4066a7c5edb93619c1bad5234d | 2021-11-28T14:16:01.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Benicio | null | Benicio/t5-small-finetuned-en-to-ru | 2 | null | transformers | 22,926 | Entry not found |
Biasface/DDDC2 | 883c55830d52448e6a1ccb95cc5e9276a378d2b6 | 2021-11-30T17:29:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Biasface | null | Biasface/DDDC2 | 2 | null | transformers | 22,927 | ---
tags:
- conversational
---
# hi |
BigSalmon/GPT2HardandEasy | cf3caa58029b484cb6c3226b2bbf550b107c068d | 2021-09-25T22:25:08.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPT2HardandEasy | 2 | null | transformers | 22,928 | Entry not found |
BigSalmon/GPTNeo350MInformalToFormalLincoln | b9c0b9d9ae46cc6cef37cd63cc8d552702895041 | 2022-02-17T21:37:07.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPTNeo350MInformalToFormalLincoln | 2 | null | transformers | 22,929 | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
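A minimal loading sketch, mirroring the snippet given on the sibling Lincoln3 card (not stated on this card itself):
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln")
```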
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
``` |
BigSalmon/GPTNeo350MInformalToFormalLincoln2 | 5013946a3ff05515ca853fea4530a8a45e0dc769 | 2022-02-21T00:14:01.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPTNeo350MInformalToFormalLincoln2 | 2 | null | transformers | 22,930 | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
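A minimal loading sketch, mirroring the snippet given on the sibling Lincoln3 card (not stated on this card itself):
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln2")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln2")
```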
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
``` |
BigSalmon/GPTNeo350MInformalToFormalLincoln3 | 74158705fed97ead772f1adb884b212009a844dd | 2022-02-25T05:04:02.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPTNeo350MInformalToFormalLincoln3 | 2 | null | transformers | 22,931 | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
``` |
BigSalmon/InformalToFormalLincoln14 | 4e632b5da9f23eec30130dfd3e428c0fb5f1d055 | 2021-12-22T22:40:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln14 | 2 | null | transformers | 22,932 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln14")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln14")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english:
``` |
BigSalmon/InformalToFormalLincoln19 | 588734714e5a49fefe30fc5b806e6661fd60e303 | 2022-02-01T04:56:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln19 | 2 | null | transformers | 22,933 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln19")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln19")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future.
###
-
``` |
BigSalmon/MrLincoln125MNeo | 2eefd20ccb5197104ccdf2e5a7a524fc17462958 | 2021-12-11T20:16:52.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MrLincoln125MNeo | 2 | null | transformers | 22,934 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MrLincoln125MNeo")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/MrLincoln125MNeo")
```
```
https://huggingface.co/spaces/BigSalmon/InformalToFormal
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: meteors are much harder to see, because they are only there for a fraction of a second.
Translated into the Style of Abraham Lincoln: meteors are not ( easily / readily ) detectable, lasting for mere fractions of a second.
informal english:
``` |
BigSalmon/Neo | 1159a3e719eca673dac55005b8b92e9796b4ee7b | 2021-04-07T15:05:25.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/Neo | 2 | null | transformers | 22,935 | Entry not found |
BigSalmon/Rowerta | 57c78fe4182ef165ba1abb36fc71c55abeb53acf | 2021-06-11T01:07:05.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/Rowerta | 2 | null | transformers | 22,936 | Entry not found |
BigSalmon/T5Salmon2 | bbaae7505425d116a8bea0992a6e48118a64e747 | 2021-06-23T02:20:46.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BigSalmon | null | BigSalmon/T5Salmon2 | 2 | null | transformers | 22,937 | Entry not found |
BigTooth/DialoGPT-small-tohru | 6d8c0877a2ccc92eb1aca42c833e828a4c5402ae | 2021-08-29T17:01:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BigTooth | null | BigTooth/DialoGPT-small-tohru | 2 | null | transformers | 22,938 | ---
tags:
- conversational
---
# Tohru DialoGPT model |
BigeS/DialoGPT-small-Rick | d2e046e61b06518f6d0d569e4786fe2f07e96ce1 | 2021-08-27T07:51:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BigeS | null | BigeS/DialoGPT-small-Rick | 2 | null | transformers | 22,939 | ---
tags:
- conversational
---
# Rick Sanchez DialoGPT Model |
BinksSachary/DialoGPT-small-shaxx | b760764bfc79f53d4393056dceeb6c98c2fa8840 | 2021-06-03T04:48:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BinksSachary | null | BinksSachary/DialoGPT-small-shaxx | 2 | null | transformers | 22,940 | ---
tags:
- conversational
---
# My Awesome Model |
BinksSachary/ShaxxBot2 | f2eccf5f2eda68f0b2c0b68e8fb898064af06db0 | 2021-06-03T04:37:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BinksSachary | null | BinksSachary/ShaxxBot2 | 2 | null | transformers | 22,941 | ---
tags:
- conversational
---
# My Awesome Model
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
BogdanKuloren/checkpoint-10500-finetuned-ner | 3f7376ac9906554f5c13b3ea123ee6cebdd69804 | 2021-11-30T11:25:34.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | BogdanKuloren | null | BogdanKuloren/checkpoint-10500-finetuned-ner | 2 | null | transformers | 22,942 | Entry not found |
Brokette/projetCS | 2d6035b8d7aaa277781577a79fdb73211f1651c8 | 2022-02-17T10:20:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Brokette | null | Brokette/projetCS | 2 | null | transformers | 22,943 | Entry not found |
BumBelDumBel/ZORK_AI_SCIFI | e1a10f710ea3d31ad926655aa9d64af7ad8ab4e6 | 2021-07-19T14:51:33.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | false | BumBelDumBel | null | BumBelDumBel/ZORK_AI_SCIFI | 2 | null | transformers | 22,944 | ---
tags:
- generated_from_trainer
model_index:
- name: ZORK_AI_SCIFI
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK_AI_SCIFI
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
Capreolus/birch-bert-large-car_mb | fb51ca50ffcdb74e0d682789f3f9f2562d05046b | 2021-05-18T17:38:06.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
] | null | false | Capreolus | null | Capreolus/birch-bert-large-car_mb | 2 | null | transformers | 22,945 | Entry not found |
CenIA/albert-base-spanish-finetuned-qa-mlqa | 2191ccea3a9543d7a076064dabe87afa09a8bfc6 | 2022-01-18T03:15:12.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-base-spanish-finetuned-qa-mlqa | 2 | null | transformers | 22,946 | Entry not found |
CenIA/albert-large-spanish-finetuned-pos | 78429a612573d3fcaadc658ae701f4da2883cc0e | 2021-12-17T22:03:38.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | CenIA | null | CenIA/albert-large-spanish-finetuned-pos | 2 | null | transformers | 22,947 | Entry not found |
CenIA/albert-xlarge-spanish-finetuned-qa-mlqa | c51722e39ca7d4cdacb66a77a6a7ecba2abdb5d0 | 2022-01-19T20:57:30.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/albert-xlarge-spanish-finetuned-qa-mlqa | 2 | null | transformers | 22,948 | Entry not found |
CenIA/albert-xlarge-spanish | c6a1f7869636684554dc3e5fc92219287c74aaa4 | 2022-04-28T19:55:48.000Z | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | false | CenIA | null | CenIA/albert-xlarge-spanish | 2 | null | transformers | 22,949 | ---
language:
- es
tags:
- albert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
---
# ALBERT XLarge Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
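A minimal fill-mask usage sketch (the example sentence is illustrative and not taken from the card):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="CenIA/albert-xlarge-spanish")
print(unmasker("Santiago es la capital de [MASK]."))
```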
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 6250
- Goal steps: 8000000
- Total steps: 2775000
- Total training time (approx.): 64.2 days.
## Training loss
 |
CennetOguz/distilbert-base-uncased-finetuned-imdb-accelerate | e44aae75d8a7f48a7c531cf8389ba0fb21eaeccd | 2022-02-17T17:26:44.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | CennetOguz | null | CennetOguz/distilbert-base-uncased-finetuned-imdb-accelerate | 2 | null | transformers | 22,950 | Entry not found |
Chakita/Kalbert | c6a235e92a43bb97fea5fa07158ae1153304ce11 | 2022-01-07T12:34:09.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Chakita | null | Chakita/Kalbert | 2 | null | transformers | 22,951 | Entry not found |
CheonggyeMountain-Sherpa/kogpt-trinity-poem | adaa0d5b9f3f42443aa4226a16e4812ea10c802c | 2021-12-14T09:47:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | CheonggyeMountain-Sherpa | null | CheonggyeMountain-Sherpa/kogpt-trinity-poem | 2 | 1 | transformers | 22,952 | Entry not found |
Chun/w-zh2en-mtm | 860cc2827f2321346164771263bfdad86cea74ee | 2021-08-24T14:36:46.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Chun | null | Chun/w-zh2en-mtm | 2 | null | transformers | 22,953 | Entry not found |
CianB/DialoGPT-small-JohnnySilverhand2 | 3510533095127e40334a6574b76395b1156cc2fc | 2021-08-26T19:15:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CianB | null | CianB/DialoGPT-small-JohnnySilverhand2 | 2 | null | transformers | 22,954 | ---
tags:
- conversational
---
# Johnny Silverhand DialoGPT model |
Ciruzzo/DialoGPT-small-harrypotter | 11d481e9e74bb23b89f32969124c70bcab5de601 | 2021-09-07T10:47:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ciruzzo | null | Ciruzzo/DialoGPT-small-harrypotter | 2 | null | transformers | 22,955 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
ClaudeCOULOMBE/RickBot | d1b4c625e2dcef62f15f272179f4e7aa54d1bdd0 | 2021-08-12T05:56:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ClaudeCOULOMBE | null | ClaudeCOULOMBE/RickBot | 2 | null | transformers | 22,956 | ---
tags:
- conversational
---
# RickBot built for [Chai](https://chai.ml/)
Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
|
CodeNinja1126/bert-p-encoder | c4fa4a4c50fb9ba9d7f2b41a848453cde831632a | 2021-05-12T01:26:46.000Z | [
"pytorch"
] | null | false | CodeNinja1126 | null | CodeNinja1126/bert-p-encoder | 2 | null | null | 22,957 | Entry not found |
CodeNinja1126/bert-q-encoder | d22103cf3e5513969aca5e57b4dd35cfe021970f | 2021-05-12T01:31:17.000Z | [
"pytorch"
] | null | false | CodeNinja1126 | null | CodeNinja1126/bert-q-encoder | 2 | null | null | 22,958 | Entry not found |
CoffeeAddict93/gpt2-call-of-the-wild | 1f5d19c69c23ccb6404174a1790d8c24175b1258 | 2021-12-02T03:08:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | CoffeeAddict93 | null | CoffeeAddict93/gpt2-call-of-the-wild | 2 | null | transformers | 22,959 | Entry not found |
CoffeeAddict93/gpt2-medium-modest-proposal | 47ad3cbe8e3f943bd727d892d2fe0f177788929b | 2021-12-02T03:58:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | CoffeeAddict93 | null | CoffeeAddict93/gpt2-medium-modest-proposal | 2 | null | transformers | 22,960 | Entry not found |
ComCom/gpt2 | 34a2b31c5f1832e471a1faaf964c29cb6949917d | 2021-11-15T04:58:28.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | false | ComCom | null | ComCom/gpt2 | 2 | null | transformers | 22,961 | This model was taken from [this site](https://huggingface.co/gpt2).
This model is used by the [Teachable NLP](https://ainize.ai/teachable-nlp) service. |
Connor/DialoGPT-small-rick | 8358391c0ec7f5d8359afff632f0dea92e4eceb6 | 2021-09-21T11:25:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Connor | null | Connor/DialoGPT-small-rick | 2 | null | transformers | 22,962 | ---
tags:
- conversational
---
# Rick DialoGPT Model |
Contrastive-Tension/BERT-Large-CT | 9c812bcfcb9a897b89893f1a241c2ed43a5d200a | 2021-05-18T18:00:51.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Large-CT | 2 | null | transformers | 22,963 | Entry not found |
CurtisBowser/DialoGPT-small-sora | c8b57dd7ea2f7d76e156101588a5f21697be32bf | 2021-10-20T20:36:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CurtisBowser | null | CurtisBowser/DialoGPT-small-sora | 2 | null | transformers | 22,964 | ---
tags:
- conversational
---
# Sora DialoGPT Model |
CyberMuffin/DialoGPT-small-ChandlerBot | 3a1b0139b9c8ea4a1934f6fc4db26847e1e7c40a | 2021-09-19T13:04:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CyberMuffin | null | CyberMuffin/DialoGPT-small-ChandlerBot | 2 | null | transformers | 22,965 | ---
tags:
- conversational
---
# Chandler Bot DialoGPT model |
DSI/ar_emotion_6 | 6337fe4984c4baffae85aa35614e1e8c2fad64d2 | 2021-11-13T18:48:18.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | DSI | null | DSI/ar_emotion_6 | 2 | null | transformers | 22,966 | Entry not found |
DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken | 7b3ca1cd2192bd36326f483b49e3e85e9d59d4eb | 2022-02-04T17:07:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | DaisyMak | null | DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken | 2 | null | transformers | 22,967 | Entry not found |
Davlan/bert-base-multilingual-cased-finetuned-luganda | 4e2cc003ba87776151db384d44e0c171ea978f3f | 2021-06-17T17:43:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"lg",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/bert-base-multilingual-cased-finetuned-luganda | 2 | null | transformers | 22,968 |
---
language: lg
datasets:
---
# bert-base-multilingual-cased-finetuned-luganda
## Model description
**bert-base-multilingual-cased-finetuned-luganda** is a **Luganda BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Luganda language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Luganda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-luganda')
>>> unmasker("Ffe tulwanyisa abo abaagala okutabangula [MASK], Kimuli bwe yategeezezza.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BUKKEDDE](https://github.com/masakhane-io/masakhane-ner/tree/main/text_by_language/luganda) +[Luganda CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | lg_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 80.36 | 84.70
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/m2m100_418M-yor-eng-mt | 72ec4bc326b504b34e4719df054d091803af319b | 2022-03-29T09:21:03.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Davlan | null | Davlan/m2m100_418M-yor-eng-mt | 2 | null | transformers | 22,969 |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-eng-yor-mt
## Model description
**m2m100_418M-yor-eng-mt** is a **machine translation** model from the Yorùbá language to English, based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
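A minimal translation sketch (assuming the standard M2M100 interface in `transformers`; the input text and language codes below are illustrative assumptions):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("Davlan/m2m100_418M-yor-eng-mt", src_lang="yo")
model = M2M100ForConditionalGeneration.from_pretrained("Davlan/m2m100_418M-yor-eng-mt")

yor_text = "..."  # replace with a Yorùbá sentence
inputs = tokenizer(yor_text, return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```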
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **16.76 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 15.57
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/mbart50-large-yor-eng-mt | 9ea3bc7991382d5017881311e1f36224174b6021 | 2021-09-26T12:40:29.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Davlan | null | Davlan/mbart50-large-yor-eng-mt | 2 | null | transformers | 22,970 |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# mbart50-large-yor-eng-mt
## Model description
**mbart50-large-yor-eng-mt** is a **machine translation** model from the Yorùbá language to English, based on a fine-tuned facebook/mbart-large-50 model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is an *mbart-large-50* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt). The model was trained using Swahili (sw_KE) as the language code, since the pre-trained model does not initially support Yorùbá; you therefore need to use the sw_KE language code when evaluating the model.
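A minimal sketch of what that looks like in practice (assuming the standard mBART-50 interface in `transformers`; the input text is a placeholder):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Yorùbá input is passed under the Swahili code "sw_KE", as explained above.
tokenizer = MBart50TokenizerFast.from_pretrained("Davlan/mbart50-large-yor-eng-mt", src_lang="sw_KE")
model = MBartForConditionalGeneration.from_pretrained("Davlan/mbart50-large-yor-eng-mt")

yor_text = "..."  # replace with a Yorùbá sentence
inputs = tokenizer(yor_text, return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```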
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning mbart50-large achieves **15.88 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 15.57
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/mt5-small-pcm-en | 1a2f4937603f832eaad5c268f3b562a654f43a10 | 2022-01-22T19:20:51.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Davlan | null | Davlan/mt5-small-pcm-en | 2 | null | transformers | 22,971 | Entry not found |
Declan/Breitbart_model_v6 | 1169a840d19a72881c61ce58115dd82b3462160d | 2021-12-15T07:36:12.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Breitbart_model_v6 | 2 | null | transformers | 22,972 | Entry not found |
Declan/Breitbart_model_v7 | c3bffa886fe6d6f55176d39e3aa4b0f3efe733ad | 2021-12-19T09:11:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Breitbart_model_v7 | 2 | null | transformers | 22,973 | Entry not found |
Declan/CNN_model_v1 | 897930acf0aa3f38bdecbebf231b0bd3841bff85 | 2021-12-12T08:29:34.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/CNN_model_v1 | 2 | null | transformers | 22,974 | Entry not found |
Declan/ChicagoTribune_model_v1 | 53467b29e20943facee13530164554495aa9a8e1 | 2021-12-12T01:43:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/ChicagoTribune_model_v1 | 2 | null | transformers | 22,975 | Entry not found |
Declan/ChicagoTribune_model_v3 | e4a4ec1c56c51618a0339cf81680a7ef19451a6a | 2021-12-15T08:34:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/ChicagoTribune_model_v3 | 2 | null | transformers | 22,976 | Entry not found |
Declan/ChicagoTribune_model_v5 | 4fac4c0e01f7c3f7126a1098442eb397f0c7a1aa | 2021-12-15T09:41:50.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/ChicagoTribune_model_v5 | 2 | null | transformers | 22,977 | Entry not found |
Declan/ChicagoTribune_model_v7 | 7ed15eb67932bfb6c207435b36fa6698e4fa5788 | 2021-12-19T10:09:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/ChicagoTribune_model_v7 | 2 | null | transformers | 22,978 | Entry not found |
Declan/FoxNews_model_v2 | 369aea614bafe7e31a99fa33b143928fc4120934 | 2021-12-15T14:10:16.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/FoxNews_model_v2 | 2 | null | transformers | 22,979 | Entry not found |
Declan/HuffPost_model_v1 | e633dcd053bce23a204f4c280e65a20cface11f6 | 2021-12-13T01:18:28.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/HuffPost_model_v1 | 2 | null | transformers | 22,980 | Entry not found |
Declan/HuffPost_model_v2 | 7212e6f4ca8e8c0951400a3147601389793dce35 | 2021-12-15T16:57:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/HuffPost_model_v2 | 2 | null | transformers | 22,981 | Entry not found |
Declan/HuffPost_model_v3 | 32aaeffb480454df5a6bb5959868add95528c477 | 2021-12-15T17:25:09.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/HuffPost_model_v3 | 2 | null | transformers | 22,982 | Entry not found |
Declan/HuffPost_model_v5 | 81f49c97cc9b7dde38492f9fea4e06c40108285a | 2021-12-15T19:58:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/HuffPost_model_v5 | 2 | null | transformers | 22,983 | Entry not found |
Declan/HuffPost_model_v6 | df897433d63199e54900a51d0dccd13a564beef3 | 2021-12-19T13:01:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/HuffPost_model_v6 | 2 | null | transformers | 22,984 | Entry not found |
Declan/NPR_model_v1 | b3cc0265f7090896be97402bbdf6691929d08fce | 2021-12-14T03:35:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/NPR_model_v1 | 2 | null | transformers | 22,985 | Entry not found |
Declan/NPR_model_v2 | 4fef8096b611b24c8a2c8b1bf369a87f853ebfcf | 2021-12-15T21:29:02.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/NPR_model_v2 | 2 | null | transformers | 22,986 | Entry not found |
Declan/NPR_model_v3 | 45ad2fa7341c7126adb5ee8c6c9e1ac9e6a8195a | 2021-12-16T01:21:21.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/NPR_model_v3 | 2 | null | transformers | 22,987 | Entry not found |
Declan/NPR_model_v5 | 8b046ac6ae4408482a24c12833a4fd6974e9ce18 | 2021-12-16T03:34:48.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/NPR_model_v5 | 2 | null | transformers | 22,988 | Entry not found |
Declan/NewYorkTimes_model_v2 | 6a0f46f2a9f4b045a43690dace646619ef213a39 | 2021-12-19T03:15:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/NewYorkTimes_model_v2 | 2 | null | transformers | 22,989 | Entry not found |
Declan/Politico_model_v3 | 51c9de4dff4f2efd63369a5b6397b02bbe1618cf | 2021-12-16T05:53:56.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Politico_model_v3 | 2 | null | transformers | 22,990 | Entry not found |
Declan/Politico_model_v4 | 4188f0a108f046bb4cb00a0cd0d9ef68736f8d36 | 2021-12-16T07:01:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Politico_model_v4 | 2 | null | transformers | 22,991 | Entry not found |
Declan/Politico_model_v5 | 8f1c67c7bc298395c826ec03c77949692c5a5c0b | 2021-12-16T08:16:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Politico_model_v5 | 2 | null | transformers | 22,992 | Entry not found |
Declan/Politico_model_v6 | 9294c9b9a0fc776d23fcdb75e51d4b0b023c72fa | 2021-12-19T15:53:57.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Politico_model_v6 | 2 | null | transformers | 22,993 | Entry not found |
Declan/Reuters_model_v2 | 83bbfa04db4919a9a3827e4eddcf7896e7b04ea9 | 2021-12-16T09:59:10.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Reuters_model_v2 | 2 | null | transformers | 22,994 | Entry not found |
Declan/Reuters_model_v5 | 97bd92d3927f4398a3fe5a1f33e83ee42ec915df | 2021-12-16T19:34:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Reuters_model_v5 | 2 | null | transformers | 22,995 | Entry not found |
Declan/WallStreetJournal_model_v1 | ffe533b1d9b1211554f81c2ad9491822de38433e | 2021-12-14T20:58:50.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/WallStreetJournal_model_v1 | 2 | null | transformers | 22,996 | Entry not found |
Declan/WallStreetJournal_model_v5 | f6aec8c7d3b0711adc819e3af5c84de73847f111 | 2021-12-18T02:02:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/WallStreetJournal_model_v5 | 2 | null | transformers | 22,997 | Entry not found |
DeividasM/wav2vec2-large-xlsr-53-lithuanian | 14c7b2828be30a7b703797dc1acb4ac3f75db31a | 2021-07-05T14:19:00.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DeividasM | null | DeividasM/wav2vec2-large-xlsr-53-lithuanian | 2 | null | transformers | 22,998 | ---
language: lt
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Lithuanian by Deividas Mataciunas
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lt
type: common_voice
args: lt
metrics:
- name: Test WER
type: wer
value: 56.55
---
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("DeividasM/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.55 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
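For reference, here is a minimal sketch of loading those splits with the same `datasets` API used in the snippets above. Whether the two splits were merged into one fine-tuning set or kept separate is an assumption; the card does not say.
```python
from datasets import load_dataset, concatenate_datasets

# Load the Lithuanian Common Voice splits mentioned above.
train_dataset = load_dataset("common_voice", "lt", split="train")
eval_dataset = load_dataset("common_voice", "lt", split="validation")

# One possible setup: merge both splits into a single fine-tuning set
# (an assumption -- the card does not state whether they were merged).
combined = concatenate_datasets([train_dataset, eval_dataset])
print(len(train_dataset), len(eval_dataset), len(combined))
```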
|
DeltaHub/adapter_t5-3b_mrpc | 7e70e74088b52f0d2430341d7b2872d26269a216 | 2022-02-11T09:08:52.000Z | [
"pytorch",
"transformers"
] | null | false | DeltaHub | null | DeltaHub/adapter_t5-3b_mrpc | 2 | null | transformers | 22,999 | Entry not found |