modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ydl233/t5_small_model | 6aa550cd537ea06366ddabc0baa95e3c8d5c3cfc | 2021-12-03T04:47:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ydl233 | null | ydl233/t5_small_model | 0 | null | transformers | 36,300 | Entry not found |
ying-tina/temp | d2a1b707767ccdf35d16c911c07009a574cc1eb0 | 2022-01-22T03:43:36.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ying-tina | null | ying-tina/temp | 0 | null | transformers | 36,301 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: temp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# temp
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4645
- Wer: 0.3527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4324 | 0.4 | 50 | 0.5800 | 0.4458 |
| 0.4027 | 0.8 | 100 | 0.5374 | 0.4109 |
| 0.3163 | 1.2 | 150 | 0.5285 | 0.3881 |
| 0.3064 | 1.6 | 200 | 0.5161 | 0.3815 |
| 0.3235 | 2.0 | 250 | 0.4645 | 0.3527 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ying-tina/wav2vec2-base-timit-demo-colab-32-epochs30 | 9913ce3470d5619481f2b662558f65dbea94ff4b | 2022-01-09T09:21:52.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ying-tina | null | ying-tina/wav2vec2-base-timit-demo-colab-32-epochs30 | 0 | null | transformers | 36,302 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-32-epochs30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-32-epochs30
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4615
- Wer: 0.3434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5243 | 4.0 | 500 | 1.4532 | 0.9540 |
| 0.6178 | 8.0 | 1000 | 0.5490 | 0.4627 |
| 0.223 | 12.0 | 1500 | 0.4513 | 0.3881 |
| 0.1299 | 16.0 | 2000 | 0.4573 | 0.3698 |
| 0.0875 | 20.0 | 2500 | 0.4950 | 0.3637 |
| 0.0613 | 24.0 | 3000 | 0.4327 | 0.3479 |
| 0.0478 | 28.0 | 3500 | 0.4615 | 0.3434 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ying-tina/wav2vec2-base-timit-demo-colab-32-epochs50-earlystop | cb6080ea36f69319bfbd2aecfe537e1cc29ae49d | 2022-01-09T12:13:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ying-tina | null | ying-tina/wav2vec2-base-timit-demo-colab-32-epochs50-earlystop | 0 | null | transformers | 36,303 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-32-epochs50-earlystop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-32-epochs50-earlystop
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5208
- Wer: 0.3561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4294 | 4.0 | 500 | 1.3397 | 0.8966 |
| 0.5848 | 8.0 | 1000 | 0.4931 | 0.4585 |
| 0.2323 | 12.0 | 1500 | 0.4781 | 0.4008 |
| 0.14 | 16.0 | 2000 | 0.4294 | 0.3806 |
| 0.1026 | 20.0 | 2500 | 0.5098 | 0.3663 |
| 0.0725 | 24.0 | 3000 | 0.4527 | 0.3568 |
| 0.058 | 28.0 | 3500 | 0.5208 | 0.3561 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ying-tina/wav2vec2-base-timit-demo-colab-test | b690d97b77bf02b1c482d314e44602dcb62d696c | 2021-12-05T14:55:36.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ying-tina | null | ying-tina/wav2vec2-base-timit-demo-colab-test | 0 | null | transformers | 36,304 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-test
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4283
- Wer: 0.3356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7386 | 4.0 | 500 | 2.2419 | 1.0 |
| 0.9366 | 8.0 | 1000 | 0.4789 | 0.4807 |
| 0.3118 | 12.0 | 1500 | 0.4197 | 0.3973 |
| 0.1784 | 16.0 | 2000 | 0.4216 | 0.3614 |
| 0.1297 | 20.0 | 2500 | 0.4298 | 0.3507 |
| 0.1091 | 24.0 | 3000 | 0.4365 | 0.3437 |
| 0.0819 | 28.0 | 3500 | 0.4283 | 0.3356 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ykliu1892/opus-mt-zh-de-tuned-Tatoeba-small | a9a3017a8ba23bda34f6f62279377bee17542ef5 | 2022-01-02T04:09:53.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ykliu1892 | null | ykliu1892/opus-mt-zh-de-tuned-Tatoeba-small | 0 | null | transformers | 36,305 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-zh-de-tuned-Tatoeba-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-zh-de-tuned-Tatoeba-small
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-de](https://huggingface.co/Helsinki-NLP/opus-mt-zh-de) on a refined dataset drawn from the Tatoeba German-Chinese corpus (https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/data/README.md).
It achieves the following results on the evaluation set:
- Loss: 2.2703
- Bleu: 16.504
- Gen Len: 16.6531
## Model description
More information needed
## Intended uses & limitations
Prefix used during fine-tuning: "将中文翻译成德语" ("translate Chinese into German"). Using the same prefix is also recommended at prediction time.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 2.7229 | 0.24 | 16000 | 2.5605 | 14.1956 | 16.2206 |
| 2.5988 | 0.49 | 32000 | 2.4447 | 14.8619 | 16.2726 |
| 2.515 | 0.73 | 48000 | 2.3817 | 15.3212 | 16.2823 |
| 2.4683 | 0.97 | 64000 | 2.3367 | 15.9043 | 16.7138 |
| 2.3873 | 1.22 | 80000 | 2.3115 | 16.1037 | 16.6369 |
| 2.3792 | 1.46 | 96000 | 2.2919 | 16.2957 | 16.6304 |
| 2.3626 | 1.7 | 112000 | 2.2790 | 16.2995 | 16.6235 |
| 2.3353 | 1.95 | 128000 | 2.2703 | 16.504 | 16.6531 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ykliu1892/translation-en-pt-t5-finetuned-Duolingo-Subtitles-finetuned-Duolingo-Subtitles | 56b97d94a094e8128c72c5abe93264ce2966dfb2 | 2021-11-30T13:22:24.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ykliu1892 | null | ykliu1892/translation-en-pt-t5-finetuned-Duolingo-Subtitles-finetuned-Duolingo-Subtitles | 0 | null | transformers | 36,306 | ---
tags:
- generated_from_trainer
model-index:
- name: translation-en-pt-t5-finetuned-Duolingo-Subtitles-finetuned-Duolingo-Subtitles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation-en-pt-t5-finetuned-Duolingo-Subtitles-finetuned-Duolingo-Subtitles
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ykliu1892/translation-en-pt-t5-finetuned-Duolingo-Subtitles | e07a2cb8d52d60da480e037770f9a4290fe3f653 | 2021-12-13T17:37:09.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ykliu1892 | null | ykliu1892/translation-en-pt-t5-finetuned-Duolingo-Subtitles | 0 | 1 | transformers | 36,307 | ---
tags:
- generated_from_trainer
model-index:
- name: translation-en-pt-t5-finetuned-Duolingo-Subtitles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation-en-pt-t5-finetuned-Duolingo-Subtitles
This model is a fine-tuned version of [ykliu1892/translation-en-pt-t5-finetuned-Duolingo-Subtitles](https://huggingface.co/ykliu1892/translation-en-pt-t5-finetuned-Duolingo-Subtitles) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0932
- eval_bleu: 28.4269
- eval_gen_len: 8.816
- eval_runtime: 1404.5946
- eval_samples_per_second: 106.792
- eval_steps_per_second: 3.338
- epoch: 0.52
- step: 28000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ykliu1892/translation-en-pt-t5-finetuned-Duolingo | a97e566ab8248b8571308e19e176102c4d02a0a6 | 2021-12-01T04:58:54.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ykliu1892 | null | ykliu1892/translation-en-pt-t5-finetuned-Duolingo | 0 | null | transformers | 36,308 | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: translation-en-pt-t5-finetuned-Duolingo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation-en-pt-t5-finetuned-Duolingo
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7362
- Bleu: 39.4725
- Gen Len: 9.002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.5429 | 0.24 | 9000 | 0.7461 | 39.4744 | 9.0 |
| 0.5302 | 0.48 | 18000 | 0.7431 | 39.7559 | 8.97 |
| 0.5309 | 0.72 | 27000 | 0.7388 | 39.6751 | 8.998 |
| 0.5336 | 0.96 | 36000 | 0.7362 | 39.4725 | 9.002 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ylh1013/ja_chatbot | 8e19ffd505c79f8576618e27b3aeecdeb8997db6 | 2022-01-23T02:24:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"finetuned_from",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | ylh1013 | null | ylh1013/ja_chatbot | 0 | null | transformers | 36,309 | ---
language:
- finetuned_from
license: mit
tags:
- generated_from_trainer
model-index:
- name: ja_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ja_chatbot
This model is a fine-tuned version of [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Tokenizers 0.10.3
|
yliu337/t5_mask_cnn_dailymail | 70be5e2acf7a0e2b95f3fd7fc20e188a3c19e7f0 | 2022-06-04T21:38:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yliu337 | null | yliu337/t5_mask_cnn_dailymail | 0 | null | transformers | 36,310 | Entry not found |
yliu337/t5_token_nonfilter_bothcontext | 41feba322090a7451c4c7ec8e17f80d5717a53a0 | 2021-09-05T01:20:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yliu337 | null | yliu337/t5_token_nonfilter_bothcontext | 0 | null | transformers | 36,311 | Entry not found |
yliu337/t5_token_nonfilter_bothcontext_padded_ctx | 6bcb300a8bf8f6e6378194909af1a5829510eb25 | 2021-09-15T01:14:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yliu337 | null | yliu337/t5_token_nonfilter_bothcontext_padded_ctx | 0 | null | transformers | 36,312 | Entry not found |
young/BertForFinance | c818d7abf7630585153f24ea946987f818ad1589 | 2021-03-17T05:13:04.000Z | [
"pytorch",
"transformers"
] | null | false | young | null | young/BertForFinance | 0 | null | transformers | 36,313 | Entry not found |
ytlin/1riatc43 | f8c6535e20483763f2fc5d57cf6be48f14dd850b | 2020-10-05T21:26:03.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ytlin | null | ytlin/1riatc43 | 0 | null | transformers | 36,314 | Entry not found |
ytlin/2jgyqp5g | e03e6637c42a8e88ac14155e4dd1b3feb94d7b64 | 2020-10-06T06:54:48.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ytlin | null | ytlin/2jgyqp5g | 0 | null | transformers | 36,315 | Entry not found |
ytlin/46695u38_3 | 403eef05120ee96f0812eab7aa46363248aab25f | 2021-05-23T13:51:39.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ytlin | null | ytlin/46695u38_3 | 0 | null | transformers | 36,316 | Entry not found |
yusufmorsi/georgebot | 5ec19a640a9015980c3c5445f7e611e1df4ecb2c | 2021-11-21T21:54:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | yusufmorsi | null | yusufmorsi/georgebot | 0 | null | transformers | 36,317 | ---
tags:
- conversational
---
# George Costanza Model
|
yxchar/tlm-ag-large-scale | e85d8e664ab4dc59a7e288fd9e4fa6a8a983dfb8 | 2021-11-04T11:08:19.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-ag-large-scale | 0 | null | transformers | 36,318 | Entry not found |
yxchar/tlm-ag-small-scale | d3aa55a8fe2f1e9c88777b89610eb2b58b4cec0f | 2021-11-04T09:45:58.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-ag-small-scale | 0 | null | transformers | 36,319 | Entry not found |
yxchar/tlm-amazon-large-scale | 425213afb8e3cf6d980a4cc95796a650f14ad792 | 2021-11-04T13:45:00.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-amazon-large-scale | 0 | null | transformers | 36,320 | Entry not found |
yxchar/tlm-amazon-small-scale | b5e9e950b40d0eeaf92715252d611f330d3fa921 | 2021-11-04T13:26:20.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-amazon-small-scale | 0 | null | transformers | 36,321 | Entry not found |
yxchar/tlm-chemprot-large-scale | ec524900ada76dd3ce633bffc91d014cd64ca9b9 | 2021-11-04T14:25:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-chemprot-large-scale | 0 | null | transformers | 36,322 | Entry not found |
yxchar/tlm-chemprot-small-scale | 7f9915f7a0e3123bc99fd0202aa4ec9117dae60f | 2021-11-04T14:09:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-chemprot-small-scale | 0 | null | transformers | 36,323 | Entry not found |
yxchar/tlm-citation_intent-large-scale | 63ec60d4b24f09930ec8ba0aceab23ef2666b349 | 2021-11-04T15:03:41.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-citation_intent-large-scale | 0 | null | transformers | 36,324 | Entry not found |
yxchar/tlm-citation_intent-medium-scale | 6502485b23dbc3f3fae9cd93afd8bbce46654bc6 | 2021-11-04T14:55:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-citation_intent-medium-scale | 0 | null | transformers | 36,325 | Entry not found |
yxchar/tlm-citation_intent-small-scale | d3b9059002e08af3dc63ac9cbd72825b31cc5b49 | 2021-11-04T14:47:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-citation_intent-small-scale | 0 | null | transformers | 36,326 | Entry not found |
yxchar/tlm-hyp-large-scale | 5aed89ef25dc4b1034e4292ff553855d96973ea4 | 2021-11-04T15:42:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-hyp-large-scale | 0 | null | transformers | 36,327 | Entry not found |
yxchar/tlm-imdb-medium-scale | 87b82975e34b61010160008500b19731f073ef82 | 2021-11-04T09:43:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-imdb-medium-scale | 0 | null | transformers | 36,328 | Entry not found |
yxchar/tlm-rct-20k-large-scale | 22c465caf13bb019b7f347debae40385ef27b231 | 2021-11-04T16:02:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-rct-20k-large-scale | 0 | null | transformers | 36,329 | Entry not found |
yxchar/tlm-rct-20k-medium-scale | baf67c6765879c4c324e56fbf4a5ab0a8f775050 | 2021-11-04T17:20:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-rct-20k-medium-scale | 0 | null | transformers | 36,330 | Entry not found |
yysung53/dpr | d33c2c0b18ed241147daa8561f8f80f84d244fc9 | 2021-10-30T22:18:04.000Z | [
"pytorch",
"text_similarity",
"transformers"
] | null | false | yysung53 | null | yysung53/dpr | 0 | null | transformers | 36,331 | Entry not found |
yzhou992/NetMind-20211103-448 | a78f2d6e0bacac3cc17a466744e2fce9b45282c9 | 2021-11-03T05:47:20.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yzhou992 | null | yzhou992/NetMind-20211103-448 | 0 | null | transformers | 36,332 | Entry not found |
yzhou992/test_model2 | 66febcd1d6734a22b868915c1d0bfce3c08c7d64 | 2021-11-03T05:50:54.000Z | [
"pytorch",
"albert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yzhou992 | null | yzhou992/test_model2 | 0 | null | transformers | 36,333 | Entry not found |
zachzhang/t5_hcm | 1947584a85313c86463339539c45717a706b7470 | 2021-10-08T22:50:26.000Z | [
"pytorch"
] | null | false | zachzhang | null | zachzhang/t5_hcm | 0 | null | null | 36,334 | Entry not found |
zari/my-awesome-model | c019cff5a02bc0a02780d74e0052a2d36ce17226 | 2021-06-22T21:29:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | zari | null | zari/my-awesome-model | 0 | null | transformers | 36,335 | ---
license: apache-2.0
datasets:
- null
model_index:
- name: my-awesome-model
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 3.4934 |
| No log | 2.0 | 182 | 3.4451 |
| No log | 3.0 | 273 | 3.4356 |
### Framework versions
- Transformers 4.7.0
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
zbmain/test | 1ba8a708cacf37af6ac60a21a87986afe61bfa7f | 2020-11-24T12:12:29.000Z | [
"pytorch"
] | null | false | zbmain | null | zbmain/test | 0 | null | null | 36,336 | 123
|
zen-satvik/BotGPT-medium-HP | 1a335d2e651151a6aebe745afac08dbf994adebc | 2021-08-28T07:09:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | zen-satvik | null | zen-satvik/BotGPT-medium-HP | 0 | null | transformers | 36,337 | ---
tags:
- conversational
---
# Harry Potter Bot GPT Model |
zeping/codeparrot | a094184fde7e3c78122fc8e16e31bd28436ec3fd | 2022-01-19T10:03:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | zeping | null | zeping/codeparrot | 0 | null | transformers | 36,338 | Entry not found |
zgotter/gpt2-test | 60a92ccd47c1c10a57fbde99442fc8db3cab0d39 | 2021-10-11T07:13:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | zgotter | null | zgotter/gpt2-test | 0 | null | transformers | 36,339 | Entry not found |
zhangxy-2019/cu_dstc9_dialoGPT | 15b85cd201a34b6c52ac1cb1ed5a126cd84162a2 | 2021-05-23T14:05:15.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | zhangxy-2019 | null | zhangxy-2019/cu_dstc9_dialoGPT | 0 | null | transformers | 36,340 | Entry not found |
zharry29/intent_fb-th_id | 60e747d83d8c2a79a527965b835648eed92300dd | 2020-09-16T20:16:29.000Z | [
"pytorch",
"xlm-roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_fb-th_id | 0 | null | transformers | 36,341 | Entry not found |
zharry29/intent_fb-th_wh_id | 55e35ceda0abe1a13031ccbe59f7a13a8b92c7fa | 2020-09-16T20:17:00.000Z | [
"pytorch",
"xlm-roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_fb-th_wh_id | 0 | null | transformers | 36,342 | Entry not found |
zharry29/intent_snips_id | 4cefcb746824eb1b5ddb997878d2308e9a6cf371 | 2021-05-20T23:47:11.000Z | [
"pytorch",
"jax",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_snips_id | 0 | null | transformers | 36,343 | Entry not found |
zhenghuabin/dummy_model | 650ecd60a48463ff392cb24c61ba0f2c4d44d628 | 2021-11-06T09:59:44.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhenghuabin | null | zhenghuabin/dummy_model | 0 | null | transformers | 36,344 | Entry not found |
zhuqing/bert-base-uncased-exp2-feminist | 1f64c99dfb16602f3adbd32e97fe80ffe5d5879f | 2021-08-28T13:07:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-exp2-feminist | 0 | null | transformers | 36,345 | Entry not found |
zhuqing/bert-base-uncased-mumsnet-all-0 | eefd92d01121b055c77d4b3c7a2ca22f63d07218 | 2021-08-08T09:08:07.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-mumsnet-all-0 | 0 | null | transformers | 36,346 | Entry not found |
zhuqing/bert-base-uncased-mumsnet-all-1 | 7a531cdfaa4f35f21e9891c3e728b7c745ad576c | 2021-08-08T10:06:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-mumsnet-all-1 | 0 | null | transformers | 36,347 | Entry not found |
zhuqing/bert-base-uncased-netmums-feminist-v2 | a9936660af357cc407d1046278da8b8b8e2200b8 | 2021-08-15T11:41:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-netmums-feminist-v2 | 0 | null | transformers | 36,348 | Entry not found |
zhuqing/bert-base-uncased-netmums-parent | 5ff1779fe99f13bbd1593e7a929018756f5dbaf0 | 2021-08-14T07:43:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-netmums-parent | 0 | null | transformers | 36,349 | Entry not found |
zhuqing/bert-base-uncased-reddit-lib | 92db2000a5c86c298d27ee07535a3626dd2c93bb | 2021-08-01T16:27:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-reddit-lib | 0 | null | transformers | 36,350 | Entry not found |
zhuqing/bert-base-uncased-theme2 | 6a955641db575c9e832a497827b43c8e3361e641 | 2021-07-17T07:40:12.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-theme2 | 0 | null | transformers | 36,351 | Entry not found |
zhuqing/comparison-bert-base-uncased-netmums-feminist | ce633f7fa72a837541d078ddd3286c7a370a1552 | 2021-08-19T19:32:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/comparison-bert-base-uncased-netmums-feminist | 0 | null | transformers | 36,352 | Entry not found |
zhuqing/comparison-distilbert-base-uncased-netmums-feminist | f39c7ee5bcc20fea038095bdd51d0c155845e04e | 2021-08-20T07:21:29.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/comparison-distilbert-base-uncased-netmums-feminist | 0 | null | transformers | 36,353 | Entry not found |
zhuqing/distilroberta-base-theme2-6000 | a53ffa9bb3f4531e1d519a67f321a135f6fffb84 | 2021-07-31T16:27:51.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/distilroberta-base-theme2-6000 | 0 | null | transformers | 36,354 | Entry not found |
zhuqing/roberta-base-uncased-netmums-all | 179ff25e0099f2b454ec31f8cef1657c05e57bda | 2021-08-20T09:23:54.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/roberta-base-uncased-netmums-all | 0 | null | transformers | 36,355 | Entry not found |
zinary/DialoGPT-small-rick-new | f40df2511796a5d7e90e82a990acf856e4a15e4b | 2021-09-19T07:26:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | zinary | null | zinary/DialoGPT-small-rick-new | 0 | null | transformers | 36,356 | ---
tags:
- conversational
---
# Rick and Morty DialoGPT
|
zmingshi/roberta_L-12_H-768_A-12 | a6f51347b5f5ed7f931c9b00059abc6bfc7f20ef | 2021-11-29T06:47:26.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zmingshi | null | zmingshi/roberta_L-12_H-768_A-12 | 0 | null | transformers | 36,357 | Entry not found |
zqf03118/bert_finetuning_test | 88544fa69b07e75e6add24bb9c8d08a66b543448 | 2021-05-20T09:56:44.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zqf03118 | null | zqf03118/bert_finetuning_test | 0 | null | transformers | 36,358 | Entry not found |
zuto37/DialoGPT-small-sadao | 0221f7e03c60cb1daa16913449d731be90850c57 | 2021-09-23T09:07:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | zuto37 | null | zuto37/DialoGPT-small-sadao | 0 | null | transformers | 36,359 | ---
tags:
- conversational
---
# DialoGPT Model |
zyayoung/cv-full-paper | b652f46baea9282cf60cca75dbf27a0413686f29 | 2021-11-23T06:27:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | zyayoung | null | zyayoung/cv-full-paper | 0 | 1 | transformers | 36,360 | Entry not found |
nielsr/enformer-preview-v2 | 03a454c666702bfdfbf37560bcbd9e4fca9deb9b | 2022-02-24T07:09:45.000Z | [
"pytorch"
] | null | false | nielsr | null | nielsr/enformer-preview-v2 | 0 | null | null | 36,361 | Entry not found |
wietsedv/xlm-roberta-base-ft-udpos28-da | 2c1c1ec9ac7b42bea7788a3896158d8a7b92d4cb | 2022-02-25T09:58:14.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"da",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-da | 0 | null | transformers | 36,362 |
---
language:
- da
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-da
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 89.9
- type: accuracy
name: Dutch Test accuracy
value: 90.0
- type: accuracy
name: German Test accuracy
value: 88.8
- type: accuracy
name: Italian Test accuracy
value: 89.4
- type: accuracy
name: French Test accuracy
value: 89.0
- type: accuracy
name: Spanish Test accuracy
value: 91.6
- type: accuracy
name: Russian Test accuracy
value: 90.3
- type: accuracy
name: Swedish Test accuracy
value: 92.4
- type: accuracy
name: Norwegian Test accuracy
value: 87.3
- type: accuracy
name: Danish Test accuracy
value: 98.3
- type: accuracy
name: Low Saxon Test accuracy
value: 42.2
- type: accuracy
name: Akkadian Test accuracy
value: 24.0
- type: accuracy
name: Armenian Test accuracy
value: 89.1
- type: accuracy
name: Welsh Test accuracy
value: 69.2
- type: accuracy
name: Old East Slavic Test accuracy
value: 71.8
- type: accuracy
name: Albanian Test accuracy
value: 79.7
- type: accuracy
name: Slovenian Test accuracy
value: 78.9
- type: accuracy
name: Guajajara Test accuracy
value: 19.2
- type: accuracy
name: Kurmanji Test accuracy
value: 78.1
- type: accuracy
name: Turkish Test accuracy
value: 78.9
- type: accuracy
name: Finnish Test accuracy
value: 88.2
- type: accuracy
name: Indonesian Test accuracy
value: 84.8
- type: accuracy
name: Ukrainian Test accuracy
value: 88.6
- type: accuracy
name: Polish Test accuracy
value: 86.2
- type: accuracy
name: Portuguese Test accuracy
value: 91.0
- type: accuracy
name: Kazakh Test accuracy
value: 83.9
- type: accuracy
name: Latin Test accuracy
value: 79.8
- type: accuracy
name: Old French Test accuracy
value: 51.8
- type: accuracy
name: Buryat Test accuracy
value: 57.8
- type: accuracy
name: Kaapor Test accuracy
value: 12.5
- type: accuracy
name: Korean Test accuracy
value: 65.7
- type: accuracy
name: Estonian Test accuracy
value: 88.4
- type: accuracy
name: Croatian Test accuracy
value: 89.8
- type: accuracy
name: Gothic Test accuracy
value: 12.7
- type: accuracy
name: Swiss German Test accuracy
value: 44.8
- type: accuracy
name: Assyrian Test accuracy
value: 15.7
- type: accuracy
name: North Sami Test accuracy
value: 29.9
- type: accuracy
name: Naija Test accuracy
value: 38.0
- type: accuracy
name: Latvian Test accuracy
value: 88.4
- type: accuracy
name: Chinese Test accuracy
value: 43.2
- type: accuracy
name: Tagalog Test accuracy
value: 73.1
- type: accuracy
name: Bambara Test accuracy
value: 25.0
- type: accuracy
name: Lithuanian Test accuracy
value: 86.4
- type: accuracy
name: Galician Test accuracy
value: 88.1
- type: accuracy
name: Vietnamese Test accuracy
value: 65.2
- type: accuracy
name: Greek Test accuracy
value: 87.1
- type: accuracy
name: Catalan Test accuracy
value: 89.7
- type: accuracy
name: Czech Test accuracy
value: 89.0
- type: accuracy
name: Erzya Test accuracy
value: 40.8
- type: accuracy
name: Bhojpuri Test accuracy
value: 49.9
- type: accuracy
name: Thai Test accuracy
value: 59.9
- type: accuracy
name: Marathi Test accuracy
value: 85.9
- type: accuracy
name: Basque Test accuracy
value: 77.2
- type: accuracy
name: Slovak Test accuracy
value: 90.2
- type: accuracy
name: Kiche Test accuracy
value: 26.0
- type: accuracy
name: Yoruba Test accuracy
value: 18.1
- type: accuracy
name: Warlpiri Test accuracy
value: 38.5
- type: accuracy
name: Tamil Test accuracy
value: 84.0
- type: accuracy
name: Maltese Test accuracy
value: 17.5
- type: accuracy
name: Ancient Greek Test accuracy
value: 63.8
- type: accuracy
name: Icelandic Test accuracy
value: 85.0
- type: accuracy
name: Mbya Guarani Test accuracy
value: 23.4
- type: accuracy
name: Urdu Test accuracy
value: 70.1
- type: accuracy
name: Romanian Test accuracy
value: 85.4
- type: accuracy
name: Persian Test accuracy
value: 77.9
- type: accuracy
name: Apurina Test accuracy
value: 26.0
- type: accuracy
name: Japanese Test accuracy
value: 28.6
- type: accuracy
name: Hungarian Test accuracy
value: 85.1
- type: accuracy
name: Hindi Test accuracy
value: 74.6
- type: accuracy
name: Classical Chinese Test accuracy
value: 28.2
- type: accuracy
name: Komi Permyak Test accuracy
value: 39.0
- type: accuracy
name: Faroese Test accuracy
value: 79.3
- type: accuracy
name: Sanskrit Test accuracy
value: 26.8
- type: accuracy
name: Livvi Test accuracy
value: 62.8
- type: accuracy
name: Arabic Test accuracy
value: 80.8
- type: accuracy
name: Wolof Test accuracy
value: 24.3
- type: accuracy
name: Bulgarian Test accuracy
value: 91.0
- type: accuracy
name: Akuntsu Test accuracy
value: 18.5
- type: accuracy
name: Makurap Test accuracy
value: 10.3
- type: accuracy
name: Kangri Test accuracy
value: 44.7
- type: accuracy
name: Breton Test accuracy
value: 66.1
- type: accuracy
name: Telugu Test accuracy
value: 85.4
- type: accuracy
name: Cantonese Test accuracy
value: 45.0
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 43.0
- type: accuracy
name: Karelian Test accuracy
value: 69.1
- type: accuracy
name: Upper Sorbian Test accuracy
value: 71.1
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 66.5
- type: accuracy
name: Komi Zyrian Test accuracy
value: 33.2
- type: accuracy
name: Irish Test accuracy
value: 69.1
- type: accuracy
name: Nayini Test accuracy
value: 39.7
- type: accuracy
name: Munduruku Test accuracy
value: 11.6
- type: accuracy
name: Manx Test accuracy
value: 23.9
- type: accuracy
name: Skolt Sami Test accuracy
value: 27.0
- type: accuracy
name: Afrikaans Test accuracy
value: 90.0
- type: accuracy
name: Old Turkish Test accuracy
value: 38.5
- type: accuracy
name: Tupinamba Test accuracy
value: 24.0
- type: accuracy
name: Belarusian Test accuracy
value: 91.0
- type: accuracy
name: Serbian Test accuracy
value: 90.4
- type: accuracy
name: Moksha Test accuracy
value: 41.2
- type: accuracy
name: Western Armenian Test accuracy
value: 82.0
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 60.3
- type: accuracy
name: Khunsari Test accuracy
value: 41.9
- type: accuracy
name: Hebrew Test accuracy
value: 94.8
- type: accuracy
name: Uyghur Test accuracy
value: 76.5
- type: accuracy
name: Chukchi Test accuracy
value: 33.2
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Danish
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-da")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-da")
```
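For quick inference, the same checkpoint can be wrapped in a `token-classification` pipeline. This is a sketch, not part of the original card; the Danish sentence is an arbitrary example, and each token should come back with a UPOS tag in the `entity` field:

```python
from transformers import pipeline

# Wrap the fine-tuned Danish POS checkpoint in a token-classification pipeline
tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-da")

# Tag an arbitrary Danish sentence; each (sub)token gets a UPOS label
for token in tagger("Dette er en god sætning."):
    print(token["word"], token["entity"])
```

Note that XLM-RoBERTa tokenizes into subwords, so words may be split across several output entries.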
|
wietsedv/xlm-roberta-base-ft-udpos28-el | 7258c1aac060a6dfc235ddeec1b3e069a3a52461 | 2022-02-25T09:58:17.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"el",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-el | 0 | null | transformers | 36,363 |
---
language:
- el
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-el
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 83.6
- type: accuracy
name: Dutch Test accuracy
value: 82.2
- type: accuracy
name: German Test accuracy
value: 82.6
- type: accuracy
name: Italian Test accuracy
value: 82.0
- type: accuracy
name: French Test accuracy
value: 78.7
- type: accuracy
name: Spanish Test accuracy
value: 82.2
- type: accuracy
name: Russian Test accuracy
value: 88.4
- type: accuracy
name: Swedish Test accuracy
value: 87.4
- type: accuracy
name: Norwegian Test accuracy
value: 82.1
- type: accuracy
name: Danish Test accuracy
value: 85.9
- type: accuracy
name: Low Saxon Test accuracy
value: 49.8
- type: accuracy
name: Akkadian Test accuracy
value: 24.4
- type: accuracy
name: Armenian Test accuracy
value: 84.0
- type: accuracy
name: Welsh Test accuracy
value: 68.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 75.0
- type: accuracy
name: Albanian Test accuracy
value: 87.7
- type: accuracy
name: Slovenian Test accuracy
value: 77.2
- type: accuracy
name: Guajajara Test accuracy
value: 25.8
- type: accuracy
name: Kurmanji Test accuracy
value: 74.3
- type: accuracy
name: Turkish Test accuracy
value: 75.3
- type: accuracy
name: Finnish Test accuracy
value: 83.4
- type: accuracy
name: Indonesian Test accuracy
value: 75.4
- type: accuracy
name: Ukrainian Test accuracy
value: 88.6
- type: accuracy
name: Polish Test accuracy
value: 84.0
- type: accuracy
name: Portuguese Test accuracy
value: 82.4
- type: accuracy
name: Kazakh Test accuracy
value: 80.5
- type: accuracy
name: Latin Test accuracy
value: 77.3
- type: accuracy
name: Old French Test accuracy
value: 52.5
- type: accuracy
name: Buryat Test accuracy
value: 56.0
- type: accuracy
name: Kaapor Test accuracy
value: 11.2
- type: accuracy
name: Korean Test accuracy
value: 59.9
- type: accuracy
name: Estonian Test accuracy
value: 83.6
- type: accuracy
name: Croatian Test accuracy
value: 84.9
- type: accuracy
name: Gothic Test accuracy
value: 20.2
- type: accuracy
name: Swiss German Test accuracy
value: 43.6
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 33.5
- type: accuracy
name: Naija Test accuracy
value: 42.7
- type: accuracy
name: Latvian Test accuracy
value: 84.9
- type: accuracy
name: Chinese Test accuracy
value: 42.1
- type: accuracy
name: Tagalog Test accuracy
value: 66.7
- type: accuracy
name: Bambara Test accuracy
value: 28.2
- type: accuracy
name: Lithuanian Test accuracy
value: 85.3
- type: accuracy
name: Galician Test accuracy
value: 82.1
- type: accuracy
name: Vietnamese Test accuracy
value: 62.8
- type: accuracy
name: Greek Test accuracy
value: 98.0
- type: accuracy
name: Catalan Test accuracy
value: 80.4
- type: accuracy
name: Czech Test accuracy
value: 85.0
- type: accuracy
name: Erzya Test accuracy
value: 43.9
- type: accuracy
name: Bhojpuri Test accuracy
value: 45.0
- type: accuracy
name: Thai Test accuracy
value: 58.6
- type: accuracy
name: Marathi Test accuracy
value: 85.3
- type: accuracy
name: Basque Test accuracy
value: 72.4
- type: accuracy
name: Slovak Test accuracy
value: 82.8
- type: accuracy
name: Kiche Test accuracy
value: 36.2
- type: accuracy
name: Yoruba Test accuracy
value: 28.9
- type: accuracy
name: Warlpiri Test accuracy
value: 38.9
- type: accuracy
name: Tamil Test accuracy
value: 83.0
- type: accuracy
name: Maltese Test accuracy
value: 22.3
- type: accuracy
name: Ancient Greek Test accuracy
value: 64.2
- type: accuracy
name: Icelandic Test accuracy
value: 80.7
- type: accuracy
name: Mbya Guarani Test accuracy
value: 32.4
- type: accuracy
name: Urdu Test accuracy
value: 53.0
- type: accuracy
name: Romanian Test accuracy
value: 83.7
- type: accuracy
name: Persian Test accuracy
value: 74.4
- type: accuracy
name: Apurina Test accuracy
value: 41.3
- type: accuracy
name: Japanese Test accuracy
value: 30.0
- type: accuracy
name: Hungarian Test accuracy
value: 80.2
- type: accuracy
name: Hindi Test accuracy
value: 60.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 30.1
- type: accuracy
name: Komi Permyak Test accuracy
value: 44.2
- type: accuracy
name: Faroese Test accuracy
value: 72.9
- type: accuracy
name: Sanskrit Test accuracy
value: 40.4
- type: accuracy
name: Livvi Test accuracy
value: 65.2
- type: accuracy
name: Arabic Test accuracy
value: 76.6
- type: accuracy
name: Wolof Test accuracy
value: 28.0
- type: accuracy
name: Bulgarian Test accuracy
value: 89.6
- type: accuracy
name: Akuntsu Test accuracy
value: 26.7
- type: accuracy
name: Makurap Test accuracy
value: 18.5
- type: accuracy
name: Kangri Test accuracy
value: 43.1
- type: accuracy
name: Breton Test accuracy
value: 63.5
- type: accuracy
name: Telugu Test accuracy
value: 85.3
- type: accuracy
name: Cantonese Test accuracy
value: 48.3
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 51.6
- type: accuracy
name: Karelian Test accuracy
value: 71.0
- type: accuracy
name: Upper Sorbian Test accuracy
value: 69.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 69.2
- type: accuracy
name: Komi Zyrian Test accuracy
value: 36.5
- type: accuracy
name: Irish Test accuracy
value: 61.3
- type: accuracy
name: Nayini Test accuracy
value: 43.6
- type: accuracy
name: Munduruku Test accuracy
value: 29.4
- type: accuracy
name: Manx Test accuracy
value: 33.8
- type: accuracy
name: Skolt Sami Test accuracy
value: 31.5
- type: accuracy
name: Afrikaans Test accuracy
value: 85.0
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 29.2
- type: accuracy
name: Belarusian Test accuracy
value: 89.1
- type: accuracy
name: Serbian Test accuracy
value: 85.2
- type: accuracy
name: Moksha Test accuracy
value: 43.8
- type: accuracy
name: Western Armenian Test accuracy
value: 76.9
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 54.8
- type: accuracy
name: Khunsari Test accuracy
value: 45.9
- type: accuracy
name: Hebrew Test accuracy
value: 88.5
- type: accuracy
name: Uyghur Test accuracy
value: 75.7
- type: accuracy
name: Chukchi Test accuracy
value: 34.8
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Greek
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-el")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-el")
```
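As a sketch (not part of the original card), the checkpoint can also be used through a `token-classification` pipeline; the Greek sentence below is an arbitrary example:

```python
from transformers import pipeline

# Wrap the fine-tuned Greek POS checkpoint in a token-classification pipeline
tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-el")

# Tag an arbitrary Greek sentence; each (sub)token gets a UPOS label
for token in tagger("Αυτό είναι ένα καλό μοντέλο."):
    print(token["word"], token["entity"])
```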
|
wietsedv/xlm-roberta-base-ft-udpos28-et | dfc3d406501190ec0945726c67032a39e87fd0ed | 2022-02-25T09:58:22.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"et",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-et | 0 | null | transformers | 36,364 |
---
language:
- et
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-et
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 82.3
- type: accuracy
name: Dutch Test accuracy
value: 80.9
- type: accuracy
name: German Test accuracy
value: 80.4
- type: accuracy
name: Italian Test accuracy
value: 78.0
- type: accuracy
name: French Test accuracy
value: 75.6
- type: accuracy
name: Spanish Test accuracy
value: 75.4
- type: accuracy
name: Russian Test accuracy
value: 88.2
- type: accuracy
name: Swedish Test accuracy
value: 89.1
- type: accuracy
name: Norwegian Test accuracy
value: 83.2
- type: accuracy
name: Danish Test accuracy
value: 87.0
- type: accuracy
name: Low Saxon Test accuracy
value: 52.2
- type: accuracy
name: Akkadian Test accuracy
value: 37.9
- type: accuracy
name: Armenian Test accuracy
value: 87.7
- type: accuracy
name: Welsh Test accuracy
value: 61.5
- type: accuracy
name: Old East Slavic Test accuracy
value: 74.6
- type: accuracy
name: Albanian Test accuracy
value: 74.0
- type: accuracy
name: Slovenian Test accuracy
value: 77.3
- type: accuracy
name: Guajajara Test accuracy
value: 30.7
- type: accuracy
name: Kurmanji Test accuracy
value: 76.7
- type: accuracy
name: Turkish Test accuracy
value: 79.3
- type: accuracy
name: Finnish Test accuracy
value: 90.5
- type: accuracy
name: Indonesian Test accuracy
value: 84.1
- type: accuracy
name: Ukrainian Test accuracy
value: 86.9
- type: accuracy
name: Polish Test accuracy
value: 84.4
- type: accuracy
name: Portuguese Test accuracy
value: 79.6
- type: accuracy
name: Kazakh Test accuracy
value: 83.0
- type: accuracy
name: Latin Test accuracy
value: 78.5
- type: accuracy
name: Old French Test accuracy
value: 50.0
- type: accuracy
name: Buryat Test accuracy
value: 64.6
- type: accuracy
name: Kaapor Test accuracy
value: 21.2
- type: accuracy
name: Korean Test accuracy
value: 62.9
- type: accuracy
name: Estonian Test accuracy
value: 96.8
- type: accuracy
name: Croatian Test accuracy
value: 87.0
- type: accuracy
name: Gothic Test accuracy
value: 24.7
- type: accuracy
name: Swiss German Test accuracy
value: 40.7
- type: accuracy
name: Assyrian Test accuracy
value: 20.1
- type: accuracy
name: North Sami Test accuracy
value: 46.7
- type: accuracy
name: Naija Test accuracy
value: 41.8
- type: accuracy
name: Latvian Test accuracy
value: 87.9
- type: accuracy
name: Chinese Test accuracy
value: 52.1
- type: accuracy
name: Tagalog Test accuracy
value: 65.9
- type: accuracy
name: Bambara Test accuracy
value: 27.9
- type: accuracy
name: Lithuanian Test accuracy
value: 86.0
- type: accuracy
name: Galician Test accuracy
value: 74.4
- type: accuracy
name: Vietnamese Test accuracy
value: 63.7
- type: accuracy
name: Greek Test accuracy
value: 77.4
- type: accuracy
name: Catalan Test accuracy
value: 73.4
- type: accuracy
name: Czech Test accuracy
value: 87.4
- type: accuracy
name: Erzya Test accuracy
value: 53.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 52.4
- type: accuracy
name: Thai Test accuracy
value: 62.6
- type: accuracy
name: Marathi Test accuracy
value: 88.3
- type: accuracy
name: Basque Test accuracy
value: 77.1
- type: accuracy
name: Slovak Test accuracy
value: 87.0
- type: accuracy
name: Kiche Test accuracy
value: 37.8
- type: accuracy
name: Yoruba Test accuracy
value: 26.7
- type: accuracy
name: Warlpiri Test accuracy
value: 42.1
- type: accuracy
name: Tamil Test accuracy
value: 85.4
- type: accuracy
name: Maltese Test accuracy
value: 30.9
- type: accuracy
name: Ancient Greek Test accuracy
value: 65.9
- type: accuracy
name: Icelandic Test accuracy
value: 82.9
- type: accuracy
name: Mbya Guarani Test accuracy
value: 30.6
- type: accuracy
name: Urdu Test accuracy
value: 67.0
- type: accuracy
name: Romanian Test accuracy
value: 78.5
- type: accuracy
name: Persian Test accuracy
value: 73.9
- type: accuracy
name: Apurina Test accuracy
value: 47.9
- type: accuracy
name: Japanese Test accuracy
value: 38.9
- type: accuracy
name: Hungarian Test accuracy
value: 83.2
- type: accuracy
name: Hindi Test accuracy
value: 71.6
- type: accuracy
name: Classical Chinese Test accuracy
value: 35.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 53.2
- type: accuracy
name: Faroese Test accuracy
value: 76.4
- type: accuracy
name: Sanskrit Test accuracy
value: 38.8
- type: accuracy
name: Livvi Test accuracy
value: 71.2
- type: accuracy
name: Arabic Test accuracy
value: 76.3
- type: accuracy
name: Wolof Test accuracy
value: 35.3
- type: accuracy
name: Bulgarian Test accuracy
value: 85.8
- type: accuracy
name: Akuntsu Test accuracy
value: 37.5
- type: accuracy
name: Makurap Test accuracy
value: 15.8
- type: accuracy
name: Kangri Test accuracy
value: 51.7
- type: accuracy
name: Breton Test accuracy
value: 60.1
- type: accuracy
name: Telugu Test accuracy
value: 84.2
- type: accuracy
name: Cantonese Test accuracy
value: 58.3
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 51.8
- type: accuracy
name: Karelian Test accuracy
value: 75.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 77.3
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 68.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 46.6
- type: accuracy
name: Irish Test accuracy
value: 60.5
- type: accuracy
name: Nayini Test accuracy
value: 42.3
- type: accuracy
name: Munduruku Test accuracy
value: 27.1
- type: accuracy
name: Manx Test accuracy
value: 35.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 40.7
- type: accuracy
name: Afrikaans Test accuracy
value: 77.5
- type: accuracy
name: Old Turkish Test accuracy
value: 46.6
- type: accuracy
name: Tupinamba Test accuracy
value: 46.5
- type: accuracy
name: Belarusian Test accuracy
value: 87.1
- type: accuracy
name: Serbian Test accuracy
value: 86.9
- type: accuracy
name: Moksha Test accuracy
value: 48.3
- type: accuracy
name: Western Armenian Test accuracy
value: 80.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 51.5
- type: accuracy
name: Khunsari Test accuracy
value: 40.5
- type: accuracy
name: Hebrew Test accuracy
value: 89.6
- type: accuracy
name: Uyghur Test accuracy
value: 77.1
- type: accuracy
name: Chukchi Test accuracy
value: 38.9
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Estonian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-et")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-et")
```
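As a sketch (not part of the original card), the checkpoint can also be used through a `token-classification` pipeline; the Estonian sentence below is an arbitrary example:

```python
from transformers import pipeline

# Wrap the fine-tuned Estonian POS checkpoint in a token-classification pipeline
tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-et")

# Tag an arbitrary Estonian sentence; each (sub)token gets a UPOS label
for token in tagger("See on hea mudel."):
    print(token["word"], token["entity"])
```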
|
wietsedv/xlm-roberta-base-ft-udpos28-ga | b9d0c2a08fe20f325f8225321f0a39817a11c195 | 2022-02-25T09:58:33.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ga",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-ga | 0 | null | transformers | 36,365 |
---
language:
- ga
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-ga
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 75.3
- type: accuracy
name: Dutch Test accuracy
value: 79.5
- type: accuracy
name: German Test accuracy
value: 76.2
- type: accuracy
name: Italian Test accuracy
value: 73.6
- type: accuracy
name: French Test accuracy
value: 76.4
- type: accuracy
name: Spanish Test accuracy
value: 82.4
- type: accuracy
name: Russian Test accuracy
value: 80.7
- type: accuracy
name: Swedish Test accuracy
value: 78.7
- type: accuracy
name: Norwegian Test accuracy
value: 74.5
- type: accuracy
name: Danish Test accuracy
value: 77.9
- type: accuracy
name: Low Saxon Test accuracy
value: 49.0
- type: accuracy
name: Akkadian Test accuracy
value: 46.2
- type: accuracy
name: Armenian Test accuracy
value: 71.8
- type: accuracy
name: Welsh Test accuracy
value: 77.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 67.3
- type: accuracy
name: Albanian Test accuracy
value: 79.8
- type: accuracy
name: Slovenian Test accuracy
value: 65.3
- type: accuracy
name: Guajajara Test accuracy
value: 46.4
- type: accuracy
name: Kurmanji Test accuracy
value: 74.9
- type: accuracy
name: Turkish Test accuracy
value: 73.7
- type: accuracy
name: Finnish Test accuracy
value: 73.8
- type: accuracy
name: Indonesian Test accuracy
value: 78.6
- type: accuracy
name: Ukrainian Test accuracy
value: 79.9
- type: accuracy
name: Polish Test accuracy
value: 82.5
- type: accuracy
name: Portuguese Test accuracy
value: 80.6
- type: accuracy
name: Kazakh Test accuracy
value: 75.6
- type: accuracy
name: Latin Test accuracy
value: 70.0
- type: accuracy
name: Old French Test accuracy
value: 49.1
- type: accuracy
name: Buryat Test accuracy
value: 60.3
- type: accuracy
name: Kaapor Test accuracy
value: 21.2
- type: accuracy
name: Korean Test accuracy
value: 60.5
- type: accuracy
name: Estonian Test accuracy
value: 75.7
- type: accuracy
name: Croatian Test accuracy
value: 77.3
- type: accuracy
name: Gothic Test accuracy
value: 29.1
- type: accuracy
name: Swiss German Test accuracy
value: 44.3
- type: accuracy
name: Assyrian Test accuracy
value: 16.3
- type: accuracy
name: North Sami Test accuracy
value: 45.0
- type: accuracy
name: Naija Test accuracy
value: 32.0
- type: accuracy
name: Latvian Test accuracy
value: 77.7
- type: accuracy
name: Chinese Test accuracy
value: 49.6
- type: accuracy
name: Tagalog Test accuracy
value: 71.1
- type: accuracy
name: Bambara Test accuracy
value: 29.1
- type: accuracy
name: Lithuanian Test accuracy
value: 76.4
- type: accuracy
name: Galician Test accuracy
value: 80.9
- type: accuracy
name: Vietnamese Test accuracy
value: 58.6
- type: accuracy
name: Greek Test accuracy
value: 77.5
- type: accuracy
name: Catalan Test accuracy
value: 79.7
- type: accuracy
name: Czech Test accuracy
value: 78.1
- type: accuracy
name: Erzya Test accuracy
value: 52.5
- type: accuracy
name: Bhojpuri Test accuracy
value: 59.2
- type: accuracy
name: Thai Test accuracy
value: 58.7
- type: accuracy
name: Marathi Test accuracy
value: 79.1
- type: accuracy
name: Basque Test accuracy
value: 68.1
- type: accuracy
name: Slovak Test accuracy
value: 80.0
- type: accuracy
name: Kiche Test accuracy
value: 46.4
- type: accuracy
name: Yoruba Test accuracy
value: 33.1
- type: accuracy
name: Warlpiri Test accuracy
value: 40.5
- type: accuracy
name: Tamil Test accuracy
value: 78.1
- type: accuracy
name: Maltese Test accuracy
value: 36.7
- type: accuracy
name: Ancient Greek Test accuracy
value: 58.5
- type: accuracy
name: Icelandic Test accuracy
value: 71.2
- type: accuracy
name: Mbya Guarani Test accuracy
value: 34.0
- type: accuracy
name: Urdu Test accuracy
value: 65.5
- type: accuracy
name: Romanian Test accuracy
value: 76.8
- type: accuracy
name: Persian Test accuracy
value: 79.7
- type: accuracy
name: Apurina Test accuracy
value: 51.8
- type: accuracy
name: Japanese Test accuracy
value: 36.1
- type: accuracy
name: Hungarian Test accuracy
value: 77.1
- type: accuracy
name: Hindi Test accuracy
value: 69.7
- type: accuracy
name: Classical Chinese Test accuracy
value: 32.1
- type: accuracy
name: Komi Permyak Test accuracy
value: 51.1
- type: accuracy
name: Faroese Test accuracy
value: 70.6
- type: accuracy
name: Sanskrit Test accuracy
value: 35.7
- type: accuracy
name: Livvi Test accuracy
value: 60.6
- type: accuracy
name: Arabic Test accuracy
value: 83.7
- type: accuracy
name: Wolof Test accuracy
value: 40.8
- type: accuracy
name: Bulgarian Test accuracy
value: 78.7
- type: accuracy
name: Akuntsu Test accuracy
value: 43.2
- type: accuracy
name: Makurap Test accuracy
value: 19.9
- type: accuracy
name: Kangri Test accuracy
value: 46.3
- type: accuracy
name: Breton Test accuracy
value: 61.7
- type: accuracy
name: Telugu Test accuracy
value: 76.8
- type: accuracy
name: Cantonese Test accuracy
value: 49.0
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 43.9
- type: accuracy
name: Karelian Test accuracy
value: 64.1
- type: accuracy
name: Upper Sorbian Test accuracy
value: 69.3
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 70.0
- type: accuracy
name: Komi Zyrian Test accuracy
value: 44.9
- type: accuracy
name: Irish Test accuracy
value: 86.0
- type: accuracy
name: Nayini Test accuracy
value: 46.2
- type: accuracy
name: Munduruku Test accuracy
value: 38.9
- type: accuracy
name: Manx Test accuracy
value: 57.2
- type: accuracy
name: Skolt Sami Test accuracy
value: 40.1
- type: accuracy
name: Afrikaans Test accuracy
value: 73.0
- type: accuracy
name: Old Turkish Test accuracy
value: 39.4
- type: accuracy
name: Tupinamba Test accuracy
value: 51.8
- type: accuracy
name: Belarusian Test accuracy
value: 79.1
- type: accuracy
name: Serbian Test accuracy
value: 78.5
- type: accuracy
name: Moksha Test accuracy
value: 49.9
- type: accuracy
name: Western Armenian Test accuracy
value: 68.2
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 77.1
- type: accuracy
name: Khunsari Test accuracy
value: 50.0
- type: accuracy
name: Hebrew Test accuracy
value: 80.2
- type: accuracy
name: Uyghur Test accuracy
value: 70.2
- type: accuracy
name: Chukchi Test accuracy
value: 39.3
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Irish
This model is part of our paper: *Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages*.
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ga")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ga")
```
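As an illustrative sketch (not part of the official card), the model's per-token logits can be turned into POS tags by taking the highest-scoring label id per token and mapping it through the model's label inventory. The helper `logits_to_tags` and the dummy logits below are hypothetical; with the real model, the logit rows would come from `model(**tokenizer(text, return_tensors="pt")).logits[0]` and the mapping from `model.config.id2label`.

```python
# Illustrative sketch: mapping per-token logits to POS tags.
# `logits_to_tags` and the dummy values are hypothetical stand-ins for
# real model outputs (logits[0]) and model.config.id2label.

def logits_to_tags(logit_rows, id2label):
    """Pick the highest-scoring label id for each token and map it to its tag."""
    tags = []
    for row in logit_rows:
        best = max(range(len(row)), key=lambda i: row[i])
        tags.append(id2label[best])
    return tags

# Tiny dummy example with a two-label inventory:
id2label = {0: "NOUN", 1: "VERB"}
print(logits_to_tags([[0.2, 1.7], [2.3, -0.4]], id2label))  # ['VERB', 'NOUN']
```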
|
wietsedv/xlm-roberta-base-ft-udpos28-he | 0b1c313b21ca58890f2e11837cee5fdfbabd39b2 | 2022-02-25T09:58:40.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"he",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-he | 0 | null | transformers | 36,366 |
---
language:
- he
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-he
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 76.6
- type: accuracy
name: Dutch Test accuracy
value: 73.7
- type: accuracy
name: German Test accuracy
value: 70.5
- type: accuracy
name: Italian Test accuracy
value: 75.1
- type: accuracy
name: French Test accuracy
value: 71.3
- type: accuracy
name: Spanish Test accuracy
value: 74.5
- type: accuracy
name: Russian Test accuracy
value: 80.3
- type: accuracy
name: Swedish Test accuracy
value: 79.3
- type: accuracy
name: Norwegian Test accuracy
value: 75.7
- type: accuracy
name: Danish Test accuracy
value: 80.4
- type: accuracy
name: Low Saxon Test accuracy
value: 42.6
- type: accuracy
name: Akkadian Test accuracy
value: 24.1
- type: accuracy
name: Armenian Test accuracy
value: 77.0
- type: accuracy
name: Welsh Test accuracy
value: 62.3
- type: accuracy
name: Old East Slavic Test accuracy
value: 66.2
- type: accuracy
name: Albanian Test accuracy
value: 73.9
- type: accuracy
name: Slovenian Test accuracy
value: 72.5
- type: accuracy
name: Guajajara Test accuracy
value: 21.4
- type: accuracy
name: Kurmanji Test accuracy
value: 74.2
- type: accuracy
name: Turkish Test accuracy
value: 71.8
- type: accuracy
name: Finnish Test accuracy
value: 80.5
- type: accuracy
name: Indonesian Test accuracy
value: 80.0
- type: accuracy
name: Ukrainian Test accuracy
value: 78.8
- type: accuracy
name: Polish Test accuracy
value: 78.9
- type: accuracy
name: Portuguese Test accuracy
value: 78.6
- type: accuracy
name: Kazakh Test accuracy
value: 77.2
- type: accuracy
name: Latin Test accuracy
value: 73.5
- type: accuracy
name: Old French Test accuracy
value: 50.6
- type: accuracy
name: Buryat Test accuracy
value: 45.0
- type: accuracy
name: Kaapor Test accuracy
value: 11.2
- type: accuracy
name: Korean Test accuracy
value: 60.2
- type: accuracy
name: Estonian Test accuracy
value: 81.4
- type: accuracy
name: Croatian Test accuracy
value: 77.9
- type: accuracy
name: Gothic Test accuracy
value: 13.7
- type: accuracy
name: Swiss German Test accuracy
value: 44.8
- type: accuracy
name: Assyrian Test accuracy
value: 17.0
- type: accuracy
name: North Sami Test accuracy
value: 24.8
- type: accuracy
name: Naija Test accuracy
value: 41.6
- type: accuracy
name: Latvian Test accuracy
value: 80.1
- type: accuracy
name: Chinese Test accuracy
value: 60.5
- type: accuracy
name: Tagalog Test accuracy
value: 79.2
- type: accuracy
name: Bambara Test accuracy
value: 21.1
- type: accuracy
name: Lithuanian Test accuracy
value: 81.0
- type: accuracy
name: Galician Test accuracy
value: 76.1
- type: accuracy
name: Vietnamese Test accuracy
value: 64.4
- type: accuracy
name: Greek Test accuracy
value: 67.4
- type: accuracy
name: Catalan Test accuracy
value: 71.5
- type: accuracy
name: Czech Test accuracy
value: 77.7
- type: accuracy
name: Erzya Test accuracy
value: 32.0
- type: accuracy
name: Bhojpuri Test accuracy
value: 50.7
- type: accuracy
name: Thai Test accuracy
value: 69.2
- type: accuracy
name: Marathi Test accuracy
value: 81.6
- type: accuracy
name: Basque Test accuracy
value: 76.2
- type: accuracy
name: Slovak Test accuracy
value: 78.0
- type: accuracy
name: Kiche Test accuracy
value: 23.6
- type: accuracy
name: Yoruba Test accuracy
value: 17.5
- type: accuracy
name: Warlpiri Test accuracy
value: 22.3
- type: accuracy
name: Tamil Test accuracy
value: 82.1
- type: accuracy
name: Maltese Test accuracy
value: 18.0
- type: accuracy
name: Ancient Greek Test accuracy
value: 45.4
- type: accuracy
name: Icelandic Test accuracy
value: 81.0
- type: accuracy
name: Mbya Guarani Test accuracy
value: 22.0
- type: accuracy
name: Urdu Test accuracy
value: 70.9
- type: accuracy
name: Romanian Test accuracy
value: 76.5
- type: accuracy
name: Persian Test accuracy
value: 75.4
- type: accuracy
name: Apurina Test accuracy
value: 22.2
- type: accuracy
name: Japanese Test accuracy
value: 39.4
- type: accuracy
name: Hungarian Test accuracy
value: 65.8
- type: accuracy
name: Hindi Test accuracy
value: 75.2
- type: accuracy
name: Classical Chinese Test accuracy
value: 44.3
- type: accuracy
name: Komi Permyak Test accuracy
value: 35.0
- type: accuracy
name: Faroese Test accuracy
value: 70.8
- type: accuracy
name: Sanskrit Test accuracy
value: 12.1
- type: accuracy
name: Livvi Test accuracy
value: 52.2
- type: accuracy
name: Arabic Test accuracy
value: 77.4
- type: accuracy
name: Wolof Test accuracy
value: 24.4
- type: accuracy
name: Bulgarian Test accuracy
value: 82.1
- type: accuracy
name: Akuntsu Test accuracy
value: 17.0
- type: accuracy
name: Makurap Test accuracy
value: 8.2
- type: accuracy
name: Kangri Test accuracy
value: 39.9
- type: accuracy
name: Breton Test accuracy
value: 56.7
- type: accuracy
name: Telugu Test accuracy
value: 81.4
- type: accuracy
name: Cantonese Test accuracy
value: 57.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 40.3
- type: accuracy
name: Karelian Test accuracy
value: 60.0
- type: accuracy
name: Upper Sorbian Test accuracy
value: 61.2
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 64.5
- type: accuracy
name: Komi Zyrian Test accuracy
value: 29.0
- type: accuracy
name: Irish Test accuracy
value: 58.7
- type: accuracy
name: Nayini Test accuracy
value: 41.0
- type: accuracy
name: Munduruku Test accuracy
value: 9.5
- type: accuracy
name: Manx Test accuracy
value: 21.8
- type: accuracy
name: Skolt Sami Test accuracy
value: 27.2
- type: accuracy
name: Afrikaans Test accuracy
value: 73.3
- type: accuracy
name: Old Turkish Test accuracy
value: 43.4
- type: accuracy
name: Tupinamba Test accuracy
value: 21.9
- type: accuracy
name: Belarusian Test accuracy
value: 78.5
- type: accuracy
name: Serbian Test accuracy
value: 78.9
- type: accuracy
name: Moksha Test accuracy
value: 29.7
- type: accuracy
name: Western Armenian Test accuracy
value: 69.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 51.3
- type: accuracy
name: Khunsari Test accuracy
value: 36.5
- type: accuracy
name: Hebrew Test accuracy
value: 93.8
- type: accuracy
name: Uyghur Test accuracy
value: 70.2
- type: accuracy
name: Chukchi Test accuracy
value: 27.1
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Hebrew
This model is part of our paper: *Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages*.
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-he")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-he")
```
|
wietsedv/xlm-roberta-base-ft-udpos28-hr | 8b72e6e02e4637a51fa8315b1c170d4c224457d8 | 2022-02-25T09:58:44.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"hr",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-hr | 0 | null | transformers | 36,367 |
---
language:
- hr
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-hr
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 83.7
- type: accuracy
name: Dutch Test accuracy
value: 83.7
- type: accuracy
name: German Test accuracy
value: 83.2
- type: accuracy
name: Italian Test accuracy
value: 83.2
- type: accuracy
name: French Test accuracy
value: 84.2
- type: accuracy
name: Spanish Test accuracy
value: 87.8
- type: accuracy
name: Russian Test accuracy
value: 91.4
- type: accuracy
name: Swedish Test accuracy
value: 85.4
- type: accuracy
name: Norwegian Test accuracy
value: 79.0
- type: accuracy
name: Danish Test accuracy
value: 83.8
- type: accuracy
name: Low Saxon Test accuracy
value: 43.5
- type: accuracy
name: Akkadian Test accuracy
value: 32.5
- type: accuracy
name: Armenian Test accuracy
value: 84.7
- type: accuracy
name: Welsh Test accuracy
value: 67.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 76.8
- type: accuracy
name: Albanian Test accuracy
value: 75.2
- type: accuracy
name: Slovenian Test accuracy
value: 87.0
- type: accuracy
name: Guajajara Test accuracy
value: 28.3
- type: accuracy
name: Kurmanji Test accuracy
value: 78.5
- type: accuracy
name: Turkish Test accuracy
value: 75.9
- type: accuracy
name: Finnish Test accuracy
value: 83.2
- type: accuracy
name: Indonesian Test accuracy
value: 81.3
- type: accuracy
name: Ukrainian Test accuracy
value: 93.2
- type: accuracy
name: Polish Test accuracy
value: 92.3
- type: accuracy
name: Portuguese Test accuracy
value: 84.6
- type: accuracy
name: Kazakh Test accuracy
value: 79.4
- type: accuracy
name: Latin Test accuracy
value: 77.4
- type: accuracy
name: Old French Test accuracy
value: 54.3
- type: accuracy
name: Buryat Test accuracy
value: 61.1
- type: accuracy
name: Kaapor Test accuracy
value: 20.0
- type: accuracy
name: Korean Test accuracy
value: 60.7
- type: accuracy
name: Estonian Test accuracy
value: 85.7
- type: accuracy
name: Croatian Test accuracy
value: 98.3
- type: accuracy
name: Gothic Test accuracy
value: 16.5
- type: accuracy
name: Swiss German Test accuracy
value: 44.8
- type: accuracy
name: Assyrian Test accuracy
value: 15.9
- type: accuracy
name: North Sami Test accuracy
value: 35.3
- type: accuracy
name: Naija Test accuracy
value: 39.6
- type: accuracy
name: Latvian Test accuracy
value: 86.5
- type: accuracy
name: Chinese Test accuracy
value: 41.2
- type: accuracy
name: Tagalog Test accuracy
value: 70.9
- type: accuracy
name: Bambara Test accuracy
value: 28.2
- type: accuracy
name: Lithuanian Test accuracy
value: 86.1
- type: accuracy
name: Galician Test accuracy
value: 86.0
- type: accuracy
name: Vietnamese Test accuracy
value: 66.5
- type: accuracy
name: Greek Test accuracy
value: 85.8
- type: accuracy
name: Catalan Test accuracy
value: 85.5
- type: accuracy
name: Czech Test accuracy
value: 94.8
- type: accuracy
name: Erzya Test accuracy
value: 47.2
- type: accuracy
name: Bhojpuri Test accuracy
value: 49.2
- type: accuracy
name: Thai Test accuracy
value: 63.4
- type: accuracy
name: Marathi Test accuracy
value: 87.1
- type: accuracy
name: Basque Test accuracy
value: 75.0
- type: accuracy
name: Slovak Test accuracy
value: 95.0
- type: accuracy
name: Kiche Test accuracy
value: 35.8
- type: accuracy
name: Yoruba Test accuracy
value: 28.5
- type: accuracy
name: Warlpiri Test accuracy
value: 41.3
- type: accuracy
name: Tamil Test accuracy
value: 84.8
- type: accuracy
name: Maltese Test accuracy
value: 23.7
- type: accuracy
name: Ancient Greek Test accuracy
value: 62.1
- type: accuracy
name: Icelandic Test accuracy
value: 79.9
- type: accuracy
name: Mbya Guarani Test accuracy
value: 31.9
- type: accuracy
name: Urdu Test accuracy
value: 65.0
- type: accuracy
name: Romanian Test accuracy
value: 82.5
- type: accuracy
name: Persian Test accuracy
value: 79.4
- type: accuracy
name: Apurina Test accuracy
value: 38.4
- type: accuracy
name: Japanese Test accuracy
value: 30.1
- type: accuracy
name: Hungarian Test accuracy
value: 83.8
- type: accuracy
name: Hindi Test accuracy
value: 67.8
- type: accuracy
name: Classical Chinese Test accuracy
value: 27.0
- type: accuracy
name: Komi Permyak Test accuracy
value: 44.9
- type: accuracy
name: Faroese Test accuracy
value: 77.3
- type: accuracy
name: Sanskrit Test accuracy
value: 35.6
- type: accuracy
name: Livvi Test accuracy
value: 65.5
- type: accuracy
name: Arabic Test accuracy
value: 82.3
- type: accuracy
name: Wolof Test accuracy
value: 32.2
- type: accuracy
name: Bulgarian Test accuracy
value: 92.6
- type: accuracy
name: Akuntsu Test accuracy
value: 37.0
- type: accuracy
name: Makurap Test accuracy
value: 17.8
- type: accuracy
name: Kangri Test accuracy
value: 47.9
- type: accuracy
name: Breton Test accuracy
value: 62.2
- type: accuracy
name: Telugu Test accuracy
value: 82.4
- type: accuracy
name: Cantonese Test accuracy
value: 45.6
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 48.9
- type: accuracy
name: Karelian Test accuracy
value: 71.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 79.4
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 68.9
- type: accuracy
name: Komi Zyrian Test accuracy
value: 39.6
- type: accuracy
name: Irish Test accuracy
value: 65.4
- type: accuracy
name: Nayini Test accuracy
value: 42.3
- type: accuracy
name: Munduruku Test accuracy
value: 28.8
- type: accuracy
name: Manx Test accuracy
value: 35.7
- type: accuracy
name: Skolt Sami Test accuracy
value: 33.7
- type: accuracy
name: Afrikaans Test accuracy
value: 79.8
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 33.1
- type: accuracy
name: Belarusian Test accuracy
value: 91.6
- type: accuracy
name: Serbian Test accuracy
value: 97.5
- type: accuracy
name: Moksha Test accuracy
value: 45.7
- type: accuracy
name: Western Armenian Test accuracy
value: 77.7
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 57.7
- type: accuracy
name: Khunsari Test accuracy
value: 36.5
- type: accuracy
name: Hebrew Test accuracy
value: 85.4
- type: accuracy
name: Uyghur Test accuracy
value: 72.2
- type: accuracy
name: Chukchi Test accuracy
value: 35.4
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Croatian
This model is part of our paper: *Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages*.
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hr")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hr")
```
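Because XLM-RoBERTa tokenizes text into subwords, per-subword predictions usually need to be collapsed back to one tag per word. A fast tokenizer's `word_ids()` gives each subword's word index (`None` for special tokens); the sketch below simulates that list with a hypothetical helper `first_subword_tags`, which is not part of the transformers API.

```python
# Illustrative sketch: collapsing subword-level predictions to word-level tags.
# `first_subword_tags` is a hypothetical helper; the word-id list simulates
# what a fast tokenizer's enc.word_ids() would return.

def first_subword_tags(subword_tags, word_ids):
    """Keep the tag of the first subword of each word, skipping special tokens."""
    tags = []
    prev = None
    for tag, wid in zip(subword_tags, word_ids):
        if wid is None or wid == prev:  # special token or continuation subword
            continue
        tags.append(tag)
        prev = wid
    return tags

# <s> Hrvat ska je lijepa . </s>  ->  word ids [None, 0, 0, 1, 2, 3, None]
subword = ["X", "PROPN", "PROPN", "AUX", "ADJ", "PUNCT", "X"]
ids = [None, 0, 0, 1, 2, 3, None]
print(first_subword_tags(subword, ids))  # ['PROPN', 'AUX', 'ADJ', 'PUNCT']
```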
|
wietsedv/xlm-roberta-base-ft-udpos28-hyw | 2d1fe3d93a920f84fabe65fba2549e7af7122c44 | 2022-02-25T09:58:48.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"hyw",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-hyw | 0 | null | transformers | 36,368 |
---
language:
- hyw
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-hyw
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 83.0
- type: accuracy
name: Dutch Test accuracy
value: 81.9
- type: accuracy
name: German Test accuracy
value: 83.9
- type: accuracy
name: Italian Test accuracy
value: 80.9
- type: accuracy
name: French Test accuracy
value: 79.2
- type: accuracy
name: Spanish Test accuracy
value: 80.9
- type: accuracy
name: Russian Test accuracy
value: 89.1
- type: accuracy
name: Swedish Test accuracy
value: 86.2
- type: accuracy
name: Norwegian Test accuracy
value: 80.6
- type: accuracy
name: Danish Test accuracy
value: 84.8
- type: accuracy
name: Low Saxon Test accuracy
value: 56.7
- type: accuracy
name: Akkadian Test accuracy
value: 29.3
- type: accuracy
name: Armenian Test accuracy
value: 90.2
- type: accuracy
name: Welsh Test accuracy
value: 63.8
- type: accuracy
name: Old East Slavic Test accuracy
value: 77.0
- type: accuracy
name: Albanian Test accuracy
value: 83.5
- type: accuracy
name: Slovenian Test accuracy
value: 78.0
- type: accuracy
name: Guajajara Test accuracy
value: 22.7
- type: accuracy
name: Kurmanji Test accuracy
value: 76.7
- type: accuracy
name: Turkish Test accuracy
value: 78.1
- type: accuracy
name: Finnish Test accuracy
value: 84.5
- type: accuracy
name: Indonesian Test accuracy
value: 80.7
- type: accuracy
name: Ukrainian Test accuracy
value: 88.4
- type: accuracy
name: Polish Test accuracy
value: 83.7
- type: accuracy
name: Portuguese Test accuracy
value: 83.1
- type: accuracy
name: Kazakh Test accuracy
value: 85.0
- type: accuracy
name: Latin Test accuracy
value: 79.0
- type: accuracy
name: Old French Test accuracy
value: 58.3
- type: accuracy
name: Buryat Test accuracy
value: 65.4
- type: accuracy
name: Kaapor Test accuracy
value: 16.2
- type: accuracy
name: Korean Test accuracy
value: 62.1
- type: accuracy
name: Estonian Test accuracy
value: 84.6
- type: accuracy
name: Croatian Test accuracy
value: 86.9
- type: accuracy
name: Gothic Test accuracy
value: 24.5
- type: accuracy
name: Swiss German Test accuracy
value: 57.3
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 35.0
- type: accuracy
name: Naija Test accuracy
value: 43.0
- type: accuracy
name: Latvian Test accuracy
value: 87.5
- type: accuracy
name: Chinese Test accuracy
value: 41.7
- type: accuracy
name: Tagalog Test accuracy
value: 68.9
- type: accuracy
name: Bambara Test accuracy
value: 30.7
- type: accuracy
name: Lithuanian Test accuracy
value: 87.2
- type: accuracy
name: Galician Test accuracy
value: 80.9
- type: accuracy
name: Vietnamese Test accuracy
value: 65.0
- type: accuracy
name: Greek Test accuracy
value: 87.6
- type: accuracy
name: Catalan Test accuracy
value: 80.0
- type: accuracy
name: Czech Test accuracy
value: 86.0
- type: accuracy
name: Erzya Test accuracy
value: 47.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 57.8
- type: accuracy
name: Thai Test accuracy
value: 59.9
- type: accuracy
name: Marathi Test accuracy
value: 84.7
- type: accuracy
name: Basque Test accuracy
value: 80.7
- type: accuracy
name: Slovak Test accuracy
value: 86.2
- type: accuracy
name: Kiche Test accuracy
value: 26.5
- type: accuracy
name: Yoruba Test accuracy
value: 24.8
- type: accuracy
name: Warlpiri Test accuracy
value: 38.5
- type: accuracy
name: Tamil Test accuracy
value: 84.2
- type: accuracy
name: Maltese Test accuracy
value: 28.2
- type: accuracy
name: Ancient Greek Test accuracy
value: 68.4
- type: accuracy
name: Icelandic Test accuracy
value: 79.5
- type: accuracy
name: Mbya Guarani Test accuracy
value: 28.7
- type: accuracy
name: Urdu Test accuracy
value: 68.1
- type: accuracy
name: Romanian Test accuracy
value: 82.1
- type: accuracy
name: Persian Test accuracy
value: 74.9
- type: accuracy
name: Apurina Test accuracy
value: 31.9
- type: accuracy
name: Japanese Test accuracy
value: 35.2
- type: accuracy
name: Hungarian Test accuracy
value: 83.7
- type: accuracy
name: Hindi Test accuracy
value: 74.9
- type: accuracy
name: Classical Chinese Test accuracy
value: 26.8
- type: accuracy
name: Komi Permyak Test accuracy
value: 51.5
- type: accuracy
name: Faroese Test accuracy
value: 77.9
- type: accuracy
name: Sanskrit Test accuracy
value: 39.4
- type: accuracy
name: Livvi Test accuracy
value: 67.5
- type: accuracy
name: Arabic Test accuracy
value: 77.6
- type: accuracy
name: Wolof Test accuracy
value: 31.3
- type: accuracy
name: Bulgarian Test accuracy
value: 86.3
- type: accuracy
name: Akuntsu Test accuracy
value: 21.3
- type: accuracy
name: Makurap Test accuracy
value: 11.6
- type: accuracy
name: Kangri Test accuracy
value: 57.8
- type: accuracy
name: Breton Test accuracy
value: 65.4
- type: accuracy
name: Telugu Test accuracy
value: 80.2
- type: accuracy
name: Cantonese Test accuracy
value: 48.5
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 52.5
- type: accuracy
name: Karelian Test accuracy
value: 72.2
- type: accuracy
name: Upper Sorbian Test accuracy
value: 76.4
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 69.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 44.2
- type: accuracy
name: Irish Test accuracy
value: 61.5
- type: accuracy
name: Nayini Test accuracy
value: 53.8
- type: accuracy
name: Munduruku Test accuracy
value: 12.5
- type: accuracy
name: Manx Test accuracy
value: 29.8
- type: accuracy
name: Skolt Sami Test accuracy
value: 34.2
- type: accuracy
name: Afrikaans Test accuracy
value: 81.7
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 30.8
- type: accuracy
name: Belarusian Test accuracy
value: 89.7
- type: accuracy
name: Serbian Test accuracy
value: 87.1
- type: accuracy
name: Moksha Test accuracy
value: 45.2
- type: accuracy
name: Western Armenian Test accuracy
value: 93.9
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 56.8
- type: accuracy
name: Khunsari Test accuracy
value: 43.2
- type: accuracy
name: Hebrew Test accuracy
value: 85.4
- type: accuracy
name: Uyghur Test accuracy
value: 76.1
- type: accuracy
name: Chukchi Test accuracy
value: 38.1
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Western Armenian
This model is part of our paper: *Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages*.
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hyw")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hyw")
```
|
wietsedv/xlm-roberta-base-ft-udpos28-lt | d608a2b1745c2d2d64e8676fe2c1e53e53da82a7 | 2022-02-25T09:58:59.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"lt",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-lt | 0 | null | transformers | 36,369 |
---
language:
- lt
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-lt
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 80.7
- type: accuracy
name: Dutch Test accuracy
value: 80.6
- type: accuracy
name: German Test accuracy
value: 76.0
- type: accuracy
name: Italian Test accuracy
value: 77.8
- type: accuracy
name: French Test accuracy
value: 75.5
- type: accuracy
name: Spanish Test accuracy
value: 79.6
- type: accuracy
name: Russian Test accuracy
value: 88.9
- type: accuracy
name: Swedish Test accuracy
value: 81.6
- type: accuracy
name: Norwegian Test accuracy
value: 76.3
- type: accuracy
name: Danish Test accuracy
value: 78.9
- type: accuracy
name: Low Saxon Test accuracy
value: 52.0
- type: accuracy
name: Akkadian Test accuracy
value: 31.6
- type: accuracy
name: Armenian Test accuracy
value: 84.1
- type: accuracy
name: Welsh Test accuracy
value: 63.8
- type: accuracy
name: Old East Slavic Test accuracy
value: 75.6
- type: accuracy
name: Albanian Test accuracy
value: 76.8
- type: accuracy
name: Slovenian Test accuracy
value: 81.4
- type: accuracy
name: Guajajara Test accuracy
value: 26.7
- type: accuracy
name: Kurmanji Test accuracy
value: 77.1
- type: accuracy
name: Turkish Test accuracy
value: 74.9
- type: accuracy
name: Finnish Test accuracy
value: 83.2
- type: accuracy
name: Indonesian Test accuracy
value: 78.0
- type: accuracy
name: Ukrainian Test accuracy
value: 88.1
- type: accuracy
name: Polish Test accuracy
value: 86.3
- type: accuracy
name: Portuguese Test accuracy
value: 81.6
- type: accuracy
name: Kazakh Test accuracy
value: 83.1
- type: accuracy
name: Latin Test accuracy
value: 78.7
- type: accuracy
name: Old French Test accuracy
value: 56.1
- type: accuracy
name: Buryat Test accuracy
value: 64.3
- type: accuracy
name: Kaapor Test accuracy
value: 22.5
- type: accuracy
name: Korean Test accuracy
value: 64.6
- type: accuracy
name: Estonian Test accuracy
value: 81.5
- type: accuracy
name: Croatian Test accuracy
value: 86.6
- type: accuracy
name: Gothic Test accuracy
value: 22.6
- type: accuracy
name: Swiss German Test accuracy
value: 48.1
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 39.8
- type: accuracy
name: Naija Test accuracy
value: 41.4
- type: accuracy
name: Latvian Test accuracy
value: 89.0
- type: accuracy
name: Chinese Test accuracy
value: 34.4
- type: accuracy
name: Tagalog Test accuracy
value: 73.0
- type: accuracy
name: Bambara Test accuracy
value: 26.4
- type: accuracy
name: Lithuanian Test accuracy
value: 96.1
- type: accuracy
name: Galician Test accuracy
value: 81.1
- type: accuracy
name: Vietnamese Test accuracy
value: 65.3
- type: accuracy
name: Greek Test accuracy
value: 81.8
- type: accuracy
name: Catalan Test accuracy
value: 76.2
- type: accuracy
name: Czech Test accuracy
value: 86.5
- type: accuracy
name: Erzya Test accuracy
value: 48.7
- type: accuracy
name: Bhojpuri Test accuracy
value: 50.9
- type: accuracy
name: Thai Test accuracy
value: 54.5
- type: accuracy
name: Marathi Test accuracy
value: 82.8
- type: accuracy
name: Basque Test accuracy
value: 75.6
- type: accuracy
name: Slovak Test accuracy
value: 88.5
- type: accuracy
name: Kiche Test accuracy
value: 33.5
- type: accuracy
name: Yoruba Test accuracy
value: 24.6
- type: accuracy
name: Warlpiri Test accuracy
value: 44.1
- type: accuracy
name: Tamil Test accuracy
value: 79.1
- type: accuracy
name: Maltese Test accuracy
value: 25.5
- type: accuracy
name: Ancient Greek Test accuracy
value: 65.8
- type: accuracy
name: Icelandic Test accuracy
value: 80.7
- type: accuracy
name: Mbya Guarani Test accuracy
value: 32.2
- type: accuracy
name: Urdu Test accuracy
value: 59.1
- type: accuracy
name: Romanian Test accuracy
value: 78.6
- type: accuracy
name: Persian Test accuracy
value: 72.8
- type: accuracy
name: Apurina Test accuracy
value: 42.0
- type: accuracy
name: Japanese Test accuracy
value: 22.9
- type: accuracy
name: Hungarian Test accuracy
value: 76.9
- type: accuracy
name: Hindi Test accuracy
value: 62.2
- type: accuracy
name: Classical Chinese Test accuracy
value: 15.8
- type: accuracy
name: Komi Permyak Test accuracy
value: 48.3
- type: accuracy
name: Faroese Test accuracy
value: 77.3
- type: accuracy
name: Sanskrit Test accuracy
value: 41.0
- type: accuracy
name: Livvi Test accuracy
value: 67.2
- type: accuracy
name: Arabic Test accuracy
value: 73.9
- type: accuracy
name: Wolof Test accuracy
value: 28.0
- type: accuracy
name: Bulgarian Test accuracy
value: 85.9
- type: accuracy
name: Akuntsu Test accuracy
value: 26.0
- type: accuracy
name: Makurap Test accuracy
value: 17.8
- type: accuracy
name: Kangri Test accuracy
value: 50.6
- type: accuracy
name: Breton Test accuracy
value: 60.3
- type: accuracy
name: Telugu Test accuracy
value: 85.0
- type: accuracy
name: Cantonese Test accuracy
value: 39.1
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 51.6
- type: accuracy
name: Karelian Test accuracy
value: 71.3
- type: accuracy
name: Upper Sorbian Test accuracy
value: 75.7
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 67.0
- type: accuracy
name: Komi Zyrian Test accuracy
value: 43.0
- type: accuracy
name: Irish Test accuracy
value: 60.1
- type: accuracy
name: Nayini Test accuracy
value: 46.2
- type: accuracy
name: Munduruku Test accuracy
value: 18.8
- type: accuracy
name: Manx Test accuracy
value: 33.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 37.3
- type: accuracy
name: Afrikaans Test accuracy
value: 76.4
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 34.1
- type: accuracy
name: Belarusian Test accuracy
value: 89.1
- type: accuracy
name: Serbian Test accuracy
value: 87.7
- type: accuracy
name: Moksha Test accuracy
value: 46.3
- type: accuracy
name: Western Armenian Test accuracy
value: 75.4
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 56.2
- type: accuracy
name: Khunsari Test accuracy
value: 39.2
- type: accuracy
name: Hebrew Test accuracy
value: 83.3
- type: accuracy
name: Uyghur Test accuracy
value: 76.6
- type: accuracy
name: Chukchi Test accuracy
value: 35.4
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Lithuanian
This model is part of our paper: *Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages*.
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lt")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lt")
```
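The per-language figures in the metadata above are per-token tagging accuracies. As a sketch of how such a number is computed (the helper `token_accuracy` is hypothetical; the paper's actual evaluation code may differ):

```python
# Illustrative sketch of per-token tagging accuracy, as reported in the
# model-index metadata. `token_accuracy` is a hypothetical helper.

def token_accuracy(gold, pred):
    """Percentage of tokens whose predicted tag matches the gold tag."""
    assert len(gold) == len(pred)
    if not gold:
        return 0.0
    correct = sum(g == p for g, p in zip(gold, pred))
    return 100.0 * correct / len(gold)

gold = ["NOUN", "VERB", "DET", "NOUN"]
pred = ["NOUN", "VERB", "ADP", "NOUN"]
print(round(token_accuracy(gold, pred), 1))  # 75.0
```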
|
wietsedv/xlm-roberta-base-ft-udpos28-lv | 5b08566f22c46a1ebeca9961489d1da64a1dc88f | 2022-02-25T09:59:00.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"lv",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-lv | 0 | null | transformers | 36,370 |
---
language:
- lv
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-lv
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 84.7
- type: accuracy
name: Dutch Test accuracy
value: 85.6
- type: accuracy
name: German Test accuracy
value: 82.5
- type: accuracy
name: Italian Test accuracy
value: 84.3
- type: accuracy
name: French Test accuracy
value: 84.1
- type: accuracy
name: Spanish Test accuracy
value: 84.7
- type: accuracy
name: Russian Test accuracy
value: 92.1
- type: accuracy
name: Swedish Test accuracy
value: 86.8
- type: accuracy
name: Norwegian Test accuracy
value: 81.3
- type: accuracy
name: Danish Test accuracy
value: 86.0
- type: accuracy
name: Low Saxon Test accuracy
value: 51.6
- type: accuracy
name: Akkadian Test accuracy
value: 32.4
- type: accuracy
name: Armenian Test accuracy
value: 87.5
- type: accuracy
name: Welsh Test accuracy
value: 65.4
- type: accuracy
name: Old East Slavic Test accuracy
value: 76.5
- type: accuracy
name: Albanian Test accuracy
value: 75.9
- type: accuracy
name: Slovenian Test accuracy
value: 82.0
- type: accuracy
name: Guajajara Test accuracy
value: 31.1
- type: accuracy
name: Kurmanji Test accuracy
value: 76.5
- type: accuracy
name: Turkish Test accuracy
value: 77.2
- type: accuracy
name: Finnish Test accuracy
value: 85.9
- type: accuracy
name: Indonesian Test accuracy
value: 79.3
- type: accuracy
name: Ukrainian Test accuracy
value: 91.1
- type: accuracy
name: Polish Test accuracy
value: 88.5
- type: accuracy
name: Portuguese Test accuracy
value: 84.9
- type: accuracy
name: Kazakh Test accuracy
value: 83.8
- type: accuracy
name: Latin Test accuracy
value: 81.0
- type: accuracy
name: Old French Test accuracy
value: 56.7
- type: accuracy
name: Buryat Test accuracy
value: 64.8
- type: accuracy
name: Kaapor Test accuracy
value: 25.0
- type: accuracy
name: Korean Test accuracy
value: 65.1
- type: accuracy
name: Estonian Test accuracy
value: 84.7
- type: accuracy
name: Croatian Test accuracy
value: 89.1
- type: accuracy
name: Gothic Test accuracy
value: 23.5
- type: accuracy
name: Swiss German Test accuracy
value: 45.2
- type: accuracy
name: Assyrian Test accuracy
value: 12.8
- type: accuracy
name: North Sami Test accuracy
value: 43.5
- type: accuracy
name: Naija Test accuracy
value: 36.1
- type: accuracy
name: Latvian Test accuracy
value: 96.9
- type: accuracy
name: Chinese Test accuracy
value: 53.1
- type: accuracy
name: Tagalog Test accuracy
value: 72.7
- type: accuracy
name: Bambara Test accuracy
value: 28.6
- type: accuracy
name: Lithuanian Test accuracy
value: 91.0
- type: accuracy
name: Galician Test accuracy
value: 84.2
- type: accuracy
name: Vietnamese Test accuracy
value: 65.7
- type: accuracy
name: Greek Test accuracy
value: 84.5
- type: accuracy
name: Catalan Test accuracy
value: 83.2
- type: accuracy
name: Czech Test accuracy
value: 88.0
- type: accuracy
name: Erzya Test accuracy
value: 52.5
- type: accuracy
name: Bhojpuri Test accuracy
value: 49.2
- type: accuracy
name: Thai Test accuracy
value: 63.3
- type: accuracy
name: Marathi Test accuracy
value: 85.3
- type: accuracy
name: Basque Test accuracy
value: 77.4
- type: accuracy
name: Slovak Test accuracy
value: 87.8
- type: accuracy
name: Kiche Test accuracy
value: 40.3
- type: accuracy
name: Yoruba Test accuracy
value: 28.4
- type: accuracy
name: Warlpiri Test accuracy
value: 44.9
- type: accuracy
name: Tamil Test accuracy
value: 86.4
- type: accuracy
name: Maltese Test accuracy
value: 25.9
- type: accuracy
name: Ancient Greek Test accuracy
value: 62.2
- type: accuracy
name: Icelandic Test accuracy
value: 81.7
- type: accuracy
name: Mbya Guarani Test accuracy
value: 35.3
- type: accuracy
name: Urdu Test accuracy
value: 61.9
- type: accuracy
name: Romanian Test accuracy
value: 82.2
- type: accuracy
name: Persian Test accuracy
value: 74.8
- type: accuracy
name: Apurina Test accuracy
value: 49.0
- type: accuracy
name: Japanese Test accuracy
value: 39.4
- type: accuracy
name: Hungarian Test accuracy
value: 79.9
- type: accuracy
name: Hindi Test accuracy
value: 64.1
- type: accuracy
name: Classical Chinese Test accuracy
value: 30.0
- type: accuracy
name: Komi Permyak Test accuracy
value: 51.7
- type: accuracy
name: Faroese Test accuracy
value: 76.2
- type: accuracy
name: Sanskrit Test accuracy
value: 39.7
- type: accuracy
name: Livvi Test accuracy
value: 67.7
- type: accuracy
name: Arabic Test accuracy
value: 79.4
- type: accuracy
name: Wolof Test accuracy
value: 31.7
- type: accuracy
name: Bulgarian Test accuracy
value: 89.0
- type: accuracy
name: Akuntsu Test accuracy
value: 35.5
- type: accuracy
name: Makurap Test accuracy
value: 20.5
- type: accuracy
name: Kangri Test accuracy
value: 50.6
- type: accuracy
name: Breton Test accuracy
value: 62.7
- type: accuracy
name: Telugu Test accuracy
value: 87.8
- type: accuracy
name: Cantonese Test accuracy
value: 50.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 49.3
- type: accuracy
name: Karelian Test accuracy
value: 72.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 75.6
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 68.7
- type: accuracy
name: Komi Zyrian Test accuracy
value: 44.5
- type: accuracy
name: Irish Test accuracy
value: 64.7
- type: accuracy
name: Nayini Test accuracy
value: 39.7
- type: accuracy
name: Munduruku Test accuracy
value: 26.0
- type: accuracy
name: Manx Test accuracy
value: 37.9
- type: accuracy
name: Skolt Sami Test accuracy
value: 34.7
- type: accuracy
name: Afrikaans Test accuracy
value: 81.6
- type: accuracy
name: Old Turkish Test accuracy
value: 22.6
- type: accuracy
name: Tupinamba Test accuracy
value: 40.6
- type: accuracy
name: Belarusian Test accuracy
value: 91.8
- type: accuracy
name: Serbian Test accuracy
value: 89.7
- type: accuracy
name: Moksha Test accuracy
value: 48.7
- type: accuracy
name: Western Armenian Test accuracy
value: 77.5
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 58.1
- type: accuracy
name: Khunsari Test accuracy
value: 40.5
- type: accuracy
name: Hebrew Test accuracy
value: 85.4
- type: accuracy
name: Uyghur Test accuracy
value: 79.7
- type: accuracy
name: Chukchi Test accuracy
value: 37.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Latvian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lv")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lv")
```
|
wietsedv/xlm-roberta-base-ft-udpos28-ta | b52577ee75a2250d4135204b9ef9b42ff2862ac0 | 2022-02-25T09:59:28.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ta",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-ta | 0 | null | transformers | 36,371 |
---
language:
- ta
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-ta
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 68.1
- type: accuracy
name: Dutch Test accuracy
value: 64.0
- type: accuracy
name: German Test accuracy
value: 65.8
- type: accuracy
name: Italian Test accuracy
value: 61.2
- type: accuracy
name: French Test accuracy
value: 56.9
- type: accuracy
name: Spanish Test accuracy
value: 59.5
- type: accuracy
name: Russian Test accuracy
value: 74.3
- type: accuracy
name: Swedish Test accuracy
value: 69.1
- type: accuracy
name: Norwegian Test accuracy
value: 64.8
- type: accuracy
name: Danish Test accuracy
value: 70.0
- type: accuracy
name: Low Saxon Test accuracy
value: 46.9
- type: accuracy
name: Akkadian Test accuracy
value: 28.4
- type: accuracy
name: Armenian Test accuracy
value: 76.5
- type: accuracy
name: Welsh Test accuracy
value: 54.2
- type: accuracy
name: Old East Slavic Test accuracy
value: 61.8
- type: accuracy
name: Albanian Test accuracy
value: 61.0
- type: accuracy
name: Slovenian Test accuracy
value: 59.8
- type: accuracy
name: Guajajara Test accuracy
value: 22.7
- type: accuracy
name: Kurmanji Test accuracy
value: 64.1
- type: accuracy
name: Turkish Test accuracy
value: 72.0
- type: accuracy
name: Finnish Test accuracy
value: 76.2
- type: accuracy
name: Indonesian Test accuracy
value: 70.3
- type: accuracy
name: Ukrainian Test accuracy
value: 75.5
- type: accuracy
name: Polish Test accuracy
value: 72.0
- type: accuracy
name: Portuguese Test accuracy
value: 65.9
- type: accuracy
name: Kazakh Test accuracy
value: 77.2
- type: accuracy
name: Latin Test accuracy
value: 67.8
- type: accuracy
name: Old French Test accuracy
value: 45.0
- type: accuracy
name: Buryat Test accuracy
value: 58.8
- type: accuracy
name: Kaapor Test accuracy
value: 21.2
- type: accuracy
name: Korean Test accuracy
value: 58.6
- type: accuracy
name: Estonian Test accuracy
value: 78.5
- type: accuracy
name: Croatian Test accuracy
value: 71.3
- type: accuracy
name: Gothic Test accuracy
value: 18.2
- type: accuracy
name: Swiss German Test accuracy
value: 44.1
- type: accuracy
name: Assyrian Test accuracy
value: 17.2
- type: accuracy
name: North Sami Test accuracy
value: 34.9
- type: accuracy
name: Naija Test accuracy
value: 37.5
- type: accuracy
name: Latvian Test accuracy
value: 79.2
- type: accuracy
name: Chinese Test accuracy
value: 47.9
- type: accuracy
name: Tagalog Test accuracy
value: 65.6
- type: accuracy
name: Bambara Test accuracy
value: 22.8
- type: accuracy
name: Lithuanian Test accuracy
value: 77.8
- type: accuracy
name: Galician Test accuracy
value: 61.9
- type: accuracy
name: Vietnamese Test accuracy
value: 56.1
- type: accuracy
name: Greek Test accuracy
value: 63.5
- type: accuracy
name: Catalan Test accuracy
value: 57.6
- type: accuracy
name: Czech Test accuracy
value: 71.7
- type: accuracy
name: Erzya Test accuracy
value: 43.5
- type: accuracy
name: Bhojpuri Test accuracy
value: 55.6
- type: accuracy
name: Thai Test accuracy
value: 56.7
- type: accuracy
name: Marathi Test accuracy
value: 79.1
- type: accuracy
name: Basque Test accuracy
value: 74.3
- type: accuracy
name: Slovak Test accuracy
value: 71.9
- type: accuracy
name: Kiche Test accuracy
value: 28.3
- type: accuracy
name: Yoruba Test accuracy
value: 22.3
- type: accuracy
name: Warlpiri Test accuracy
value: 32.4
- type: accuracy
name: Tamil Test accuracy
value: 85.6
- type: accuracy
name: Maltese Test accuracy
value: 23.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 52.9
- type: accuracy
name: Icelandic Test accuracy
value: 67.9
- type: accuracy
name: Mbya Guarani Test accuracy
value: 28.5
- type: accuracy
name: Urdu Test accuracy
value: 69.0
- type: accuracy
name: Romanian Test accuracy
value: 65.5
- type: accuracy
name: Persian Test accuracy
value: 60.0
- type: accuracy
name: Apurina Test accuracy
value: 32.7
- type: accuracy
name: Japanese Test accuracy
value: 42.3
- type: accuracy
name: Hungarian Test accuracy
value: 69.8
- type: accuracy
name: Hindi Test accuracy
value: 73.6
- type: accuracy
name: Classical Chinese Test accuracy
value: 28.3
- type: accuracy
name: Komi Permyak Test accuracy
value: 40.2
- type: accuracy
name: Faroese Test accuracy
value: 59.9
- type: accuracy
name: Sanskrit Test accuracy
value: 36.9
- type: accuracy
name: Livvi Test accuracy
value: 61.4
- type: accuracy
name: Arabic Test accuracy
value: 62.9
- type: accuracy
name: Wolof Test accuracy
value: 28.3
- type: accuracy
name: Bulgarian Test accuracy
value: 71.6
- type: accuracy
name: Akuntsu Test accuracy
value: 19.3
- type: accuracy
name: Makurap Test accuracy
value: 12.3
- type: accuracy
name: Kangri Test accuracy
value: 51.6
- type: accuracy
name: Breton Test accuracy
value: 51.7
- type: accuracy
name: Telugu Test accuracy
value: 83.2
- type: accuracy
name: Cantonese Test accuracy
value: 50.3
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 45.7
- type: accuracy
name: Karelian Test accuracy
value: 63.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 62.3
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 57.5
- type: accuracy
name: Komi Zyrian Test accuracy
value: 35.3
- type: accuracy
name: Irish Test accuracy
value: 58.2
- type: accuracy
name: Nayini Test accuracy
value: 48.7
- type: accuracy
name: Munduruku Test accuracy
value: 15.9
- type: accuracy
name: Manx Test accuracy
value: 26.5
- type: accuracy
name: Skolt Sami Test accuracy
value: 32.7
- type: accuracy
name: Afrikaans Test accuracy
value: 66.5
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 27.8
- type: accuracy
name: Belarusian Test accuracy
value: 76.9
- type: accuracy
name: Serbian Test accuracy
value: 71.6
- type: accuracy
name: Moksha Test accuracy
value: 39.2
- type: accuracy
name: Western Armenian Test accuracy
value: 70.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 50.2
- type: accuracy
name: Khunsari Test accuracy
value: 39.2
- type: accuracy
name: Hebrew Test accuracy
value: 81.2
- type: accuracy
name: Uyghur Test accuracy
value: 67.3
- type: accuracy
name: Chukchi Test accuracy
value: 33.6
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Tamil
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ta")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ta")
```
|
mohamed-illiyas/wav2vec-malayalam | 1dcc0f9033e7dde9ef935bf3f5c8f24c45c19956 | 2022-02-28T16:07:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | mohamed-illiyas | null | mohamed-illiyas/wav2vec-malayalam | 0 | null | transformers | 36,372 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-malayalam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-malayalam
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
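These settings correspond roughly to a 🤗 `TrainingArguments` configuration along the following lines (a sketch for illustration, not the exact script that was used):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec-malayalam",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # effective train batch size: 8 * 16 = 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
    fp16=True,  # Native AMP mixed precision
)
```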
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 1.18.3
- Tokenizers 0.10.3
|
zfchen/codeparrot | 54737857ba1f8cc3b41c545642fa9f6f93694f44 | 2022-02-24T14:52:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | zfchen | null | zfchen/codeparrot | 0 | null | transformers | 36,373 | Entry not found |
vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k | 153dc40a3ce127c6a0537940eda26d0dad5659f2 | 2022-02-24T19:08:20.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | vocab-transformers | null | vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k | 0 | null | sentence-transformers | 36,374 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# dense_encoder-msmarco-distilbert-word2vec256k
This model is based on [msmarco-word2vec256000-distilbert-base-uncased](https://huggingface.co/nicoladecao/msmarco-word2vec256000-distilbert-base-uncased), which uses a 256k-entry vocabulary initialized with word2vec embeddings.
It has been trained on MS MARCO using [MarginMSELoss](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/ms_marco/train_bi-encoder_margin-mse.py). See the train_script.py in this repository.
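As a rough illustration (not the exact sentence-transformers implementation), MarginMSELoss regresses the student's score margin between a positive and a negative passage onto the margin produced by a cross-encoder teacher:

```python
import torch
import torch.nn.functional as F

def margin_mse_loss(query_emb, pos_emb, neg_emb, teacher_margin):
    # Student relevance scores are dot products of query and passage embeddings
    pos_score = (query_emb * pos_emb).sum(dim=-1)
    neg_score = (query_emb * neg_emb).sum(dim=-1)
    # Regress the student's margin onto the teacher's (cross-encoder) margin
    return F.mse_loss(pos_score - neg_score, teacher_margin)
```

Distilling margins rather than absolute scores lets the bi-encoder learn relative rankings without having to match the teacher's score scale.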
Performance:
- MS MARCO dev: - (MRR@10)
- TREC-DL 2019: 65.53 (nDCG@10)
- TREC-DL 2020: 67.42 (nDCG@10)
- Avg. on 4 BEIR datasets: 38.97
The word embedding matrix has been frozen while training.
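Freezing an embedding matrix amounts to disabling gradients on it; a minimal sketch (using a toy matrix in place of the real 256000 × 768 one) looks like:

```python
import torch

# Toy stand-in for the word2vec-initialized matrix (real model: 256000 x 768)
embedding = torch.nn.Embedding(num_embeddings=1000, embedding_dim=16)
embedding.weight.requires_grad = False  # exclude it from gradient updates

# Optimizers built over trainable parameters will now skip the embeddings
trainable = [p for p in embedding.parameters() if p.requires_grad]
```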
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k')
model = AutoModel.from_pretrained('vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=vocab-transformers/dense_encoder-msmarco-distilbert-word2vec256k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7858 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
vesteinn/clip-nabirds | 99805933b55c5ee1a384aefdfac664bc6a8ac150 | 2022-02-27T22:40:41.000Z | [
"pytorch",
"clip",
"feature-extraction",
"transformers"
] | feature-extraction | false | vesteinn | null | vesteinn/clip-nabirds | 0 | null | transformers | 36,375 | Entry not found |
huggingtweets/dril-nia_mp4 | 875388556d09a55ec7f566dc1a05afb713c356eb | 2022-02-25T19:44:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dril-nia_mp4 | 0 | null | transformers | 36,376 | ---
language: en
thumbnail: http://www.huggingtweets.com/dril-nia_mp4/1645818279249/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1487740104340918272/7c9spp2E_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nia & wint</div>
<div style="text-align: center; font-size: 14px;">@dril-nia_mp4</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nia & wint.
| Data | Nia | wint |
| --- | --- | --- |
| Tweets downloaded | 278 | 3229 |
| Retweets | 12 | 473 |
| Short tweets | 13 | 300 |
| Tweets kept | 253 | 2456 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ybk5oh0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-nia_mp4's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ny6aucf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ny6aucf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-nia_mp4')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nadaAlnada/wav2vec2-base-timit-demo-colab | 5f1daa05183ad42c85e378b91af603d602cbbd31 | 2022-02-27T13:55:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nadaAlnada | null | nadaAlnada/wav2vec2-base-timit-demo-colab | 0 | null | transformers | 36,377 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [anas/wav2vec2-large-xlsr-arabic](https://huggingface.co/anas/wav2vec2-large-xlsr-arabic) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hcy11/distilbert-base-uncased-finetuned-squad | 21d2fd9c156841c916a2bfd0b7e5fd9deefdadff | 2022-03-02T20:32:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | hcy11 | null | hcy11/distilbert-base-uncased-finetuned-squad | 0 | null | transformers | 36,378 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2672 | 1.0 | 5533 | 1.2131 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
zhoutong/best-t5 | 7debf4563bc6d9a9dff89ccf79bbe3510398bb97 | 2022-02-26T07:27:55.000Z | [
"pytorch"
] | null | false | zhoutong | null | zhoutong/best-t5 | 0 | null | null | 36,379 | Entry not found |
ianc89/hagrid | 8589d4146189b95a75fb2c329cba312a492f21d6 | 2022-02-26T13:52:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ianc89 | null | ianc89/hagrid | 0 | null | transformers | 36,380 | ---
tags:
- conversational
---
# My Awesome Model |
nimrah/wav2vec2-large-xls-r-300m-my_hindi_home-colab | a5b877c919ab676748bb54a993f93d1d2455f28b | 2022-02-26T17:11:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nimrah | null | nimrah/wav2vec2-large-xls-r-300m-my_hindi_home-colab | 0 | null | transformers | 36,381 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-my_hindi_home-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-my_hindi_home-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
huggingtweets/claresiobhan | 23b5676b74c9dcb7c46edf5e8e6f38cc8eea61a7 | 2022-02-26T22:19:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/claresiobhan | 0 | null | transformers | 36,382 | ---
language: en
thumbnail: http://www.huggingtweets.com/claresiobhan/1645913945953/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1296785738978201600/J9LDndke_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">👰Clare Siobhán👰</div>
<div style="text-align: center; font-size: 14px;">@claresiobhan</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 👰Clare Siobhán👰.
| Data | 👰Clare Siobhán👰 |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 110 |
| Short tweets | 504 |
| Tweets kept | 2635 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vq9maap/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @claresiobhan's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/375bmhre) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/375bmhre/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/claresiobhan')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/zeebeecat01 | 150bdb14347a4bdbbbf8d1621cae6c5ed2d73260 | 2022-02-26T22:24:18.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/zeebeecat01 | 0 | null | transformers | 36,383 | ---
language: en
thumbnail: http://www.huggingtweets.com/zeebeecat01/1645914254405/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1103665627183472642/OVXzwAk7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Shreya Mukherjee 💀🌻</div>
<div style="text-align: center; font-size: 14px;">@zeebeecat01</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Shreya Mukherjee 💀🌻.
| Data | Shreya Mukherjee 💀🌻 |
| --- | --- |
| Tweets downloaded | 731 |
| Retweets | 552 |
| Short tweets | 33 |
| Tweets kept | 146 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kz1pvshu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zeebeecat01's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3btkttwk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3btkttwk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/zeebeecat01')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jiobiala24/wav2vec2-base-checkpoint-13 | 5c89362023b8b279eca9db1fe7ce75fde2cdae64 | 2022-02-27T12:36:13.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jiobiala24 | null | jiobiala24/wav2vec2-base-checkpoint-13 | 0 | null | transformers | 36,384 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-13
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-12](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-12) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1804
- Wer: 0.3809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2688 | 1.92 | 1000 | 0.6518 | 0.3692 |
| 0.1944 | 3.85 | 2000 | 0.7188 | 0.3808 |
| 0.1503 | 5.77 | 3000 | 0.7552 | 0.3853 |
| 0.1218 | 7.69 | 4000 | 0.8155 | 0.3834 |
| 0.1024 | 9.62 | 5000 | 0.8867 | 0.3779 |
| 0.0874 | 11.54 | 6000 | 0.8917 | 0.3866 |
| 0.0775 | 13.46 | 7000 | 1.0320 | 0.4019 |
| 0.0712 | 15.38 | 8000 | 1.0110 | 0.3922 |
| 0.0656 | 17.31 | 9000 | 1.0494 | 0.3885 |
| 0.0578 | 19.23 | 10000 | 1.1054 | 0.3883 |
| 0.053 | 21.15 | 11000 | 1.1285 | 0.3938 |
| 0.0496 | 23.08 | 12000 | 1.1358 | 0.3884 |
| 0.0459 | 25.0 | 13000 | 1.2062 | 0.3904 |
| 0.0445 | 26.92 | 14000 | 1.1811 | 0.3830 |
| 0.0414 | 28.85 | 15000 | 1.1804 | 0.3809 |
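The WER column above is word error rate: the word-level edit distance between reference and hypothesis transcripts, divided by the reference length. A minimal sketch (not part of this training pipeline; example strings are hypothetical):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance (rolling-array DP)."""
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))  # edit distances for the previous row
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cur = min(
                d[j] + 1,                            # deletion
                d[j - 1] + 1,                        # insertion
                prev + (ref[i - 1] != hyp[j - 1]),   # substitution (0 if equal)
            )
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)

wer("the cat sat", "the cat")  # one deletion over three reference words -> 1/3
```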
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jt360/mt5-small-finetuned-amazon-en-es-video-games | c616e485a6c72203248922966d17c75ba839fdd7 | 2022-02-27T18:43:57.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | jt360 | null | jt360/mt5-small-finetuned-amazon-en-es-video-games | 0 | null | transformers | 36,385 | ---
license: afl-3.0
---
|
flairbook2/flairmodel | dba1512be5619b7a11812b492b1a7d37f6639188 | 2022-04-09T16:58:21.000Z | [
"pytorch",
"flair",
"token-classification"
] | token-classification | false | flairbook2 | null | flairbook2/flairmodel | 0 | null | flair | 36,386 | ---
tags:
- flair
- token-classification
widget:
- text: "does this work"
---
## Test model README
Some test README description |
mipatov/rugpt3_nb_descr | 268609ceb23151c73dbf07578fc23bbc51988240 | 2022-02-27T23:44:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mipatov | null | mipatov/rugpt3_nb_descr | 0 | null | transformers | 36,387 | based on `sberbank-ai/rugpt3medium_based_on_gpt2`
finetuned for generate text description for notebook-devices |
facebook/wav2vec2-base-sv-voxpopuli-v2 | 36445212b2499f538da33aba9a1475981c82ed69 | 2022-02-27T13:13:27.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"sv",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-sv-voxpopuli-v2 | 0 | null | transformers | 36,388 | ---
language: sv
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **sv** on **16.3k** unlabeled data of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sv**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-west_germanic-voxpopuli-v2 | ba2575ca08f92658b827cb034da9ad4b5e3d56d9 | 2022-02-27T12:35:16.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"west_germanic",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-west_germanic-voxpopuli-v2 | 0 | null | transformers | 36,389 | ---
language: west_germanic
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only in **west_germanic** on **66.3** unlabeled data of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **west_germanic**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-north_germanic-voxpopuli-v2 | 2f451c30b28238d13b1c54bbca1f4e6c241a5304 | 2022-02-27T12:37:56.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"north_germanic",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-north_germanic-voxpopuli-v2 | 0 | null | transformers | 36,390 | ---
language: north_germanic
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only in **north_germanic** on **29.9** unlabeled data of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **north_germanic**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-slavic-voxpopuli-v2 | fa95b126a53cb5b6803f6e7c3f77693525b97eff | 2022-02-27T12:40:42.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"slavic",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-slavic-voxpopuli-v2 | 0 | null | transformers | 36,391 | ---
language: slavic
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only in **slavic** on **89.0** unlabeled data of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **slavic**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-base-sl-voxpopuli-v2 | 0411d3dd04058c9d1547ece5f29e9f107e907930 | 2022-02-27T13:14:49.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"sl",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-sl-voxpopuli-v2 | 0 | null | transformers | 36,392 | ---
language: sl
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **sl** on **11.3k** unlabeled data of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sl**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-mt-voxpopuli-v2 | 51ecaa56badc4d0aa1ad5d3d196e605e1369b31c | 2022-02-27T12:51:06.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"mt",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-mt-voxpopuli-v2 | 0 | null | transformers | 36,393 | ---
language: mt
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only in **mt** on **9.1** unlabeled data of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **mt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
facebook/wav2vec2-large-el-voxpopuli-v2 | 362fc887d5d4854e0299be872e30015396a38a2c | 2022-02-27T12:48:30.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"el",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-el-voxpopuli-v2 | 0 | null | transformers | 36,394 | ---
language: el
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-large-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained only in **el** on **17.7** unlabeled data of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **el**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model.
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
spy24/autonlp-UK-to-US-600416931 | eea0caf05b11c517c67f1ce46d7028cce22d3b17 | 2022-02-28T09:59:04.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-UK-to-US",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | spy24 | null | spy24/autonlp-UK-to-US-600416931 | 0 | 1 | transformers | 36,395 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-UK-to-US
co2_eq_emissions: 1.113131499202784
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 600416931
- CO2 Emissions (in grams): 1.113131499202784
## Validation Metrics
- Loss: 1.8278849124908447
- Rouge1: 45.7945
- Rouge2: 8.5245
- RougeL: 45.8031
- RougeLsum: 45.9067
- Gen Len: 3.0622
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-UK-to-US-600416931
``` |
spy24/autonlp-AUS-to-US-601516964 | 58c67b6f95797e1ba26bd30dcb6d02dd133b04ef | 2022-02-28T11:21:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-AUS-to-US",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | spy24 | null | spy24/autonlp-AUS-to-US-601516964 | 0 | null | transformers | 36,396 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-AUS-to-US
co2_eq_emissions: 3.3930796843275846
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 601516964
- CO2 Emissions (in grams): 3.3930796843275846
## Validation Metrics
- Loss: 1.9823806285858154
- Rouge1: 42.8783
- Rouge2: 7.4603
- RougeL: 42.8492
- RougeLsum: 43.0556
- Gen Len: 2.8952
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-AUS-to-US-601516964
``` |
rockyend/distilbert-base-uncased-finetuned-ner | 1a025a2170be4f9fd5e5cf7353729c0a1bb5b023 | 2022-02-28T15:45:56.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | rockyend | null | rockyend/distilbert-base-uncased-finetuned-ner | 0 | null | transformers | 36,397 | Entry not found |
peterhsu/test-bert-finetuned-squad-accelerate | bd5b6de35c21408d3ca5aa77302d7a5eaa419721 | 2022-02-28T18:47:40.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | peterhsu | null | peterhsu/test-bert-finetuned-squad-accelerate | 0 | null | transformers | 36,398 | Entry not found |
nateraw/cryptopunks-gan | 1eb7a477ae62ddf76aa01598347797aa2f3a248f | 2022-03-01T01:59:49.000Z | [
"tensorboard",
"pytorch",
"dcgan"
] | null | false | nateraw | null | nateraw/cryptopunks-gan | 0 | 2 | pytorch | 36,399 | ---
library_name: pytorch
tags:
- dcgan
---
# cryptopunks-gan
A DCGAN trained to generate novel Cryptopunks.
Check out the code by Teddy Koker [here](https://github.com/teddykoker/cryptopunks-gan).
## Generated Punks
Here are some punks generated by this model:

## Usage
You can try it out yourself, or you can play with the [demo](https://huggingface.co/spaces/nateraw/cryptopunks-generator).
To use it yourself - make sure you have `torch`, `torchvision`, and `huggingface_hub` installed. Then, run the following to generate a grid of 64 random punks:
```python
import torch
from huggingface_hub import hf_hub_download
from torch import nn
from torchvision.utils import save_image
class Generator(nn.Module):
    def __init__(self, nc=4, nz=100, ngf=64):
        super(Generator, self).__init__()
        self.network = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, input):
        output = self.network(input)
        return output
model = Generator()
weights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')
model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu')))
out = model(torch.randn(64, 100, 1, 1))
save_image(out, "punks.png", normalize=True)
```
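As a quick sanity check (not from the original repo), the standard `ConvTranspose2d` output-size formula shows how the generator grows the 1×1 latent into a 24×24 RGBA punk across its four deconvolution layers:

```python
def deconv_out(size, kernel, stride, padding):
    # PyTorch ConvTranspose2d output size (output_padding=0, dilation=1)
    return (size - 1) * stride - 2 * padding + kernel

size = 1  # the latent z is treated as a 1x1 "image" with nz channels
for kernel, stride, padding in [(3, 1, 0), (3, 2, 1), (4, 2, 0), (4, 2, 1)]:
    size = deconv_out(size, kernel, stride, padding)
# size is now 24, matching the 24x24 Cryptopunk sprite resolution
```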
|