modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
xkang/distilbert-base-uncased-finetuned-imdb | 93dc187c4e9322cbe239b3501f30d57997a34474 | 2021-12-27T07:30:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | xkang | null | xkang/distilbert-base-uncased-finetuned-imdb | 1 | null | transformers | 30,500 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4717
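Since the loss above is the mean token-level cross-entropy of a masked language model, it corresponds to a perplexity of roughly 11.8 (a quick derived check, not a figure from the original card):

```python
import math

eval_loss = 2.4717  # evaluation loss reported above
perplexity = math.exp(eval_loss)
print(f"Perplexity: {perplexity:.2f}")  # roughly 11.84
```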
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7096 | 1.0 | 157 | 2.4920 |
| 2.5741 | 2.0 | 314 | 2.4237 |
| 2.5386 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
xkang/dummy-model | 7b760ad72655b5822aade87e3a07d47d303fd052 | 2021-12-03T01:22:28.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | xkang | null | xkang/dummy-model | 1 | null | transformers | 30,501 | Entry not found |
xxr/bert-base-uncased-issues-128 | fe3aa1bf0ebaccde9490ece01c93d317114fc27a | 2022-02-15T14:09:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | xxr | null | xxr/bert-base-uncased-issues-128 | 1 | null | transformers | 30,502 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bert-base-uncased-issues-128
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9845 | 1.0 | 1163 | 1.6403 |
| 1.5695 | 2.0 | 2326 | 1.4212 |
| 1.4221 | 3.0 | 3489 | 1.3714 |
| 1.3302 | 4.0 | 4652 | 1.3592 |
| 1.2734 | 5.0 | 5815 | 1.2781 |
| 1.2143 | 6.0 | 6978 | 1.2286 |
| 1.1704 | 7.0 | 8141 | 1.2492 |
| 1.1261 | 8.0 | 9304 | 1.2044 |
| 1.0812 | 9.0 | 10467 | 1.1878 |
| 1.0657 | 10.0 | 11630 | 1.2177 |
| 1.0319 | 11.0 | 12793 | 1.1428 |
| 1.0063 | 12.0 | 13956 | 1.0910 |
| 0.9731 | 13.0 | 15119 | 1.1111 |
| 0.9674 | 14.0 | 16282 | 1.1699 |
| 0.9391 | 15.0 | 17445 | 1.0805 |
| 0.9381 | 16.0 | 18608 | 1.2109 |
### Framework versions
- Transformers 4.8.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
xysmalobia/t5-finetuned-amazon-en | 406e92567971566dc823a255beb3ceb2190e0284 | 2021-11-14T17:41:19.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | xysmalobia | null | xysmalobia/t5-finetuned-amazon-en | 1 | null | transformers | 30,503 | Entry not found |
yahya1994/DialoGPT-small-AOT-Eren | a65dc5e23a8a7f352542efc6adb45df678a551ab | 2021-09-08T19:49:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | yahya1994 | null | yahya1994/DialoGPT-small-AOT-Eren | 1 | null | transformers | 30,504 | ---
tags:
- conversational
---
# Eren dialog |
yahya1994/DialoGPT-small-Parasyte-Migi | 3dd670c3a04a795e82b0a01bfc04f785d50956a6 | 2021-09-04T18:09:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | yahya1994 | null | yahya1994/DialoGPT-small-Parasyte-Migi | 1 | null | transformers | 30,505 | ---
tags:
- conversational
---
# Migi dialog |
yahya1994/DialoGPT-small-ReZero-Rem | ec939b9b956866015ee78e7a265d3cd1ca8f97bc | 2021-09-09T00:23:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | yahya1994 | null | yahya1994/DialoGPT-small-ReZero-Rem | 1 | null | transformers | 30,506 | ---
tags:
- conversational
---
# Rem dialog |
yancong/dummy-model | e27b408374aeebe976b463ba68c2689bea7d785b | 2021-07-24T23:56:58.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yancong | null | yancong/dummy-model | 1 | null | transformers | 30,507 | Entry not found |
yarik921/Teflon_0.2 | 9014569f78c704993511546da7ba118fa1c2666d | 2022-02-18T12:44:03.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | yarik921 | null | yarik921/Teflon_0.2 | 1 | null | transformers | 30,508 | Entry not found |
yazdipour/sparql-qald9-t5-small-2021-10-19_00-01 | c891ac2dcab7315223ca8c9d817b61c3508b7cc1 | 2021-10-19T00:13:21.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/sparql-qald9-t5-small-2021-10-19_00-01 | 1 | null | transformers | 30,509 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-small-2021-10-19_00-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-small-2021-10-19_00-01
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-small-2021-10-18_23-00](https://huggingface.co/yazdipour/text-to-sparql-t5-small-2021-10-18_23-00) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-------------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 2.4058 | 19.0 | 0.3946 | 0.0660 | 0.2253 | 9.8438 | [72.36042012161415, 47.920433996383366, 33.929754804506295, 26.416482707873435] | 0.2344 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/sparql-qald9-t5-small-2021-10-19_07-12_RAW | b0f36a4222512ac18988d33ec78822f80900d8e5 | 2021-10-19T07:25:13.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/sparql-qald9-t5-small-2021-10-19_07-12_RAW | 1 | null | transformers | 30,510 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-small-2021-10-19_07-12_RAW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-small-2021-10-19_07-12_RAW
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 2.8581 | 19.0 | 0.3301 | 0.0433 | 0.1830 | 7.5917 | [69.82603479304139, 45.68226763348714, 32.33357717629846, 24.56861133935908] | 0.1903 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-base-2021-10-18_16-15 | 29e577318f9cddf8dd0d199df6da1b8502f1d6b8 | 2021-10-18T18:58:01.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/text-to-sparql-t5-base-2021-10-18_16-15 | 1 | null | transformers | 30,511 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-base-2021-10-18_16-15
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-18_16-15
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1294
- Gen Len: 19.0
- Bertscorer-p: 0.5827
- Bertscorer-r: 0.0812
- Bertscorer-f1: 0.3202
- Sacrebleu-score: 5.9410
- Sacrebleu-precisions: [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601]
- Bleu-bp: 0.0721
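The reported SacreBLEU score is consistent with the standard BLEU formula: the brevity penalty times the geometric mean of the n-gram precisions. A quick sanity check on the numbers above (not part of the original card):

```python
import math

# n-gram precisions and brevity penalty reported above
precisions = [92.24641734333713, 84.24354361048307,
              78.78523204758982, 75.43428275229601]
bp = 0.0721

# BLEU = BP * exp(mean of log n-gram precisions)
geo_mean = math.exp(sum(math.log(p) for p in precisions) / len(precisions))
bleu = bp * geo_mean
print(f"BLEU: {bleu:.2f}")  # close to the reported Sacrebleu-score of 5.9410
```

The small residual difference comes from the brevity penalty being rounded to four decimal places in the card.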
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| nan | 1.0 | 4772 | 0.1294 | 19.0 | 0.5827 | 0.0812 | 0.3202 | 5.9410 | [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601] | 0.0721 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-small-2021-10-15_01-00 | 371c071cdbf1c4230db732832950c8f70b9a6a05 | 2021-10-15T15:19:59.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/text-to-sparql-t5-small-2021-10-15_01-00 | 1 | null | transformers | 30,512 | ---
tags:
- generated_from_trainer
model-index:
- name: text-to-sparql-t5-small-2021-10-15_01-00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-15_01-00
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:------:|:----------:|:-----------------------------------------------------------------:|:-------:|
| No log | 1.0 | 26 | 4.1488 | 19.0 | 0.2368 | -0.0304 | 0.1003 | 0.8868 | [56.84848484848485, 25.0, 8.88888888888889, 0.041666666666666664] | 0.1851 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.2
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-small-2021-10-18_12-12 | e0ae3b1802c899d3ab41b2376a4e902abc776e54 | 2021-10-18T13:14:26.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/text-to-sparql-t5-small-2021-10-18_12-12 | 1 | null | transformers | 30,513 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-small-2021-10-18_12-12
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_12-12
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3284
- Gen Len: 19.0
- Bertscorer-p: 0.5420
- Bertscorer-r: 0.0732
- Bertscorer-f1: 0.2972
- Sacrebleu-score: 4.8763
- Sacrebleu-precisions: [87.2581084764241, 73.48869132519009, 64.19139944127409, 58.342420937840785]
- Bleu-bp: 0.0697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| 0.4209 | 1.0 | 4772 | 0.3284 | 19.0 | 0.5420 | 0.0732 | 0.2972 | 4.8763 | [87.2581084764241, 73.48869132519009, 64.19139944127409, 58.342420937840785] | 0.0697 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yazdipour/text-to-sparql-t5-small-2021-10-18_23-00 | bf722627d1fb7036e35cc07ad951d8156b44614d | 2021-10-19T00:01:17.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/text-to-sparql-t5-small-2021-10-18_23-00 | 1 | null | transformers | 30,514 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-small-2021-10-18_23-00
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_23-00
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2284
- Gen Len: 19.0
- Bertscorer-p: 0.5644
- Bertscorer-r: 0.0815
- Bertscorer-f1: 0.3120
- Sacrebleu-score: 5.5690
- Sacrebleu-precisions: [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607]
- Bleu-bp: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:---------------------------------------------------------------------------:|:-------:|
| 0.2808 | 1.0 | 4772 | 0.2284 | 19.0 | 0.5644 | 0.0815 | 0.3120 | 5.5690 | [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607] | 0.0728 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ydl233/bart_model | f32b20a9e81f7abb89bffffc772489fc49c0d87c | 2021-09-08T06:40:12.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ydl233 | null | ydl233/bart_model | 1 | null | transformers | 30,515 | Entry not found |
yfyang/wav2vec2-base-timit-fine-tuned | 83276625a54f7a4bafbb3345550037ca4bfd0f42 | 2021-11-04T08:21:31.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | yfyang | null | yfyang/wav2vec2-base-timit-fine-tuned | 1 | null | transformers | 30,516 | Entry not found |
yhk04150/yhkBERT | e1189d5bd65da8f738d7452f0504781aeacf07fa | 2021-05-20T09:28:34.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yhk04150 | null | yhk04150/yhkBERT | 1 | null | transformers | 30,517 | Entry not found |
ying-tina/wav2vec2-base-timit-demo-colab-32 | 454b4455e24da308a0c2d7b57cc4c1c7b4378339 | 2021-12-01T10:54:26.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ying-tina | null | ying-tina/wav2vec2-base-timit-demo-colab-32 | 1 | null | transformers | 30,518 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-32
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-32
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4488
- Wer: 0.3149
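The WER above is the word-level edit distance between the model's transcripts and the references, divided by the number of reference words. A minimal sketch of the metric (illustrative only, not the card's own evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[j] holds the edit distance between the first i reference words
    # and the first j hypothesis words (rolling single-row Levenshtein).
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = dp[0]
        dp[0] = i
        for j, h in enumerate(hyp, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (r != h))    # substitution (0 cost on match)
            prev = cur
    return dp[-1] / len(ref)

print(word_error_rate("the cat sat", "the dog sat"))  # one substitution in three words
```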
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6155 | 4.0 | 500 | 2.2647 | 0.9992 |
| 0.9037 | 8.0 | 1000 | 0.4701 | 0.4336 |
| 0.3159 | 12.0 | 1500 | 0.4247 | 0.3575 |
| 0.1877 | 16.0 | 2000 | 0.4477 | 0.3442 |
| 0.1368 | 20.0 | 2500 | 0.4932 | 0.3384 |
| 0.1062 | 24.0 | 3000 | 0.4758 | 0.3202 |
| 0.0928 | 28.0 | 3500 | 0.4488 | 0.3149 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
ying-tina/wav2vec2-base-timit-demo-colab | 3127ccf8f9908770dc926aa2ef2ffe0616c6ac6b | 2021-11-30T10:52:25.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | ying-tina | null | ying-tina/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 30,519 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5127
- Wer: 0.3082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
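With `lr_scheduler_type: linear` and 1000 warmup steps, the learning rate ramps linearly from 0 to 1e-4 and then decays linearly back to 0 over the remaining steps. A minimal sketch of that schedule; `total_steps` here is an estimate (~249 optimizer steps per epoch over 30 epochs, inferred from the step/epoch columns below), since the card does not state it explicitly:

```python
def linear_schedule(step, base_lr=1e-4, warmup_steps=1000, total_steps=7470):
    # total_steps is an assumption for illustration, not a value from the card.
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear warmup from 0
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))  # linear decay to 0

print(linear_schedule(500))   # halfway through warmup
print(linear_schedule(1000))  # peak learning rate
print(linear_schedule(7470))  # end of training
```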
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7645 | 2.01 | 500 | 2.5179 | 0.9999 |
| 1.1873 | 4.02 | 1000 | 0.5464 | 0.4798 |
| 0.46 | 6.02 | 1500 | 0.4625 | 0.4025 |
| 0.2869 | 8.03 | 2000 | 0.4252 | 0.3650 |
| 0.2213 | 10.04 | 2500 | 0.4340 | 0.3585 |
| 0.1905 | 12.05 | 3000 | 0.4310 | 0.3404 |
| 0.1545 | 14.06 | 3500 | 0.4547 | 0.3381 |
| 0.1206 | 16.06 | 4000 | 0.4902 | 0.3384 |
| 0.1116 | 18.07 | 4500 | 0.4767 | 0.3253 |
| 0.0925 | 20.08 | 5000 | 0.5248 | 0.3160 |
| 0.0897 | 22.09 | 5500 | 0.4960 | 0.3126 |
| 0.0687 | 24.1 | 6000 | 0.4876 | 0.3086 |
| 0.063 | 26.1 | 6500 | 0.4895 | 0.3065 |
| 0.0558 | 28.11 | 7000 | 0.5127 | 0.3082 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
yliu337/filter_maskQA | 46029c057dbd7233dfad4457387a54cf45392c8a | 2021-08-10T16:48:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yliu337 | null | yliu337/filter_maskQA | 1 | null | transformers | 30,520 | Entry not found |
yliu337/mt5_sliding_window_en | c2075c80301ec8551eeed5f3d4ad24adfcce5402 | 2021-11-14T21:19:16.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yliu337 | null | yliu337/mt5_sliding_window_en | 1 | null | transformers | 30,521 | Entry not found |
yliu337/t5_fillmask_src_hyp_format | 9e83a102b47eca1222ab9d9085558b4c787595b2 | 2021-10-13T02:48:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yliu337 | null | yliu337/t5_fillmask_src_hyp_format | 1 | null | transformers | 30,522 | Entry not found |
yliu337/t5_neg_nonfilter_bothcontext | 5a97e6560532f81db3faadf5dd6dee61beb0472c | 2021-08-23T21:15:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | yliu337 | null | yliu337/t5_neg_nonfilter_bothcontext | 1 | null | transformers | 30,523 | Entry not found |
yoonseob/yaiBERT-v2 | ce46bb76a306ab28d380284e846b14c4ac976999 | 2020-12-04T00:40:42.000Z | [
"pytorch",
"transformers"
] | null | false | yoonseob | null | yoonseob/yaiBERT-v2 | 1 | null | transformers | 30,524 | Entry not found |
yoonseob/yaiBERT | e1355900109029f96a14d49200d8e9f3b4a88cbf | 2020-12-03T17:23:58.000Z | [
"pytorch",
"transformers"
] | null | false | yoonseob | null | yoonseob/yaiBERT | 1 | null | transformers | 30,525 | Entry not found |
yoonseob/ysBERT | dcc45f6203426c0b990a6c410789195776cff950 | 2021-05-20T09:31:54.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | yoonseob | null | yoonseob/ysBERT | 1 | null | transformers | 30,526 | Entry not found |
youngjae/bert-finetuned-squad-accelerate | dd318c1c48258c9d40d282c8e05ee7a24f56c248 | 2021-12-30T05:20:14.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | youngjae | null | youngjae/bert-finetuned-squad-accelerate | 1 | null | transformers | 30,527 | Entry not found |
youngjae/bert-finetuned-squad | a3fdd2131d607621fd606c3002093b2d21625248 | 2021-12-30T04:13:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | youngjae | null | youngjae/bert-finetuned-squad | 1 | null | transformers | 30,528 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
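The optimizer line above describes standard Adam. A minimal sketch of a single scalar update with the listed betas and epsilon (illustrative only; the Trainer's implementation is vectorized and handles weight decay separately):

```python
def adam_step(param, grad, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction for the zero-initialized moment estimates
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# On the first step with a unit gradient, the parameter moves by almost exactly lr
p, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
print(p)  # approximately -2e-05
```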
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0.dev20210415+cu101
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ysharma/dummy-model-2 | 6fa13bfd2b75228aca61f1889896ef297e9b6bb3 | 2021-07-12T06:25:53.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ysharma | null | ysharma/dummy-model-2 | 1 | null | transformers | 30,529 | Entry not found |
ytlin/19rdmhqc | 83ea27cff123c650a4455eab1962f56d78ae16b7 | 2020-10-06T06:39:21.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ytlin | null | ytlin/19rdmhqc | 1 | null | transformers | 30,530 | Entry not found |
ytlin/1pm2c7qw_5 | 4a4164efe8478672b58652e1b8757979efd6b20a | 2021-05-23T13:49:02.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | ytlin | null | ytlin/1pm2c7qw_5 | 1 | null | transformers | 30,531 | Entry not found |
ytlin/1pm2c7qw_6 | 88cb5d98ef1786ef1c7bc636b51d984d690d26a7 | 2021-05-23T13:49:27.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | ytlin | null | ytlin/1pm2c7qw_6 | 1 | null | transformers | 30,532 | Entry not found |
ytlin/329vcm1b_4 | 65f8904790198679fd41cdb4217eb4695c460a9b | 2020-10-05T06:03:46.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ytlin | null | ytlin/329vcm1b_4 | 1 | null | transformers | 30,533 | Entry not found |
ytlin/35oote4t_52 | 85cfbf115a8c8b5c7219a21f740aba06c5e89455 | 2021-05-23T13:50:14.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | ytlin | null | ytlin/35oote4t_52 | 1 | null | transformers | 30,534 | Entry not found |
ytlin/38hbj3w7_10 | 60ada85699b05a096f5c6918a0699efed17d891f | 2021-05-23T13:50:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ytlin | null | ytlin/38hbj3w7_10 | 1 | null | transformers | 30,535 | Entry not found |
ytlin/38hbj3w7_13 | b068e56fd7fe244231ec2607a3959a73554fa8bc | 2021-05-23T13:50:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ytlin | null | ytlin/38hbj3w7_13 | 1 | null | transformers | 30,536 | Entry not found |
ytlin/q4b4siil | 3e7ecdd7fc47f22e0db5175145cad0ed63500ad3 | 2021-05-23T13:52:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ytlin | null | ytlin/q4b4siil | 1 | null | transformers | 30,537 | Entry not found |
yuchenlin/BART0_CSR | d6dcfeac15e382e9ec7748557abefca9b118b0f9 | 2022-02-02T22:11:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | yuchenlin | null | yuchenlin/BART0_CSR | 1 | null | transformers | 30,538 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst of 007 as 'No Time to Die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patrol.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
TBA |
yunpeng/bert_cn_finetuning | 609ca07a5d75152c2fe7b9b1e451ba31b284ce8c | 2021-11-02T14:50:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yunpeng | null | yunpeng/bert_cn_finetuning | 1 | null | transformers | 30,539 | Entry not found |
yxchar/tlm-ag-medium-scale | 7dfb0969a055e4d937d8aa984e48174846dc19af | 2021-11-04T10:54:14.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-ag-medium-scale | 1 | null | transformers | 30,540 | Entry not found |
yxchar/tlm-amazon-medium-scale | 0009bf0481abe57ad7cf7443de5c382f613d662b | 2021-11-04T13:29:16.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-amazon-medium-scale | 1 | null | transformers | 30,541 | Entry not found |
yxchar/tlm-chemprot-medium-scale | dd1be1d09f1be4f13cde92aa70cf3cd122d89978 | 2021-11-04T14:17:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-chemprot-medium-scale | 1 | null | transformers | 30,542 | Entry not found |
yxchar/tlm-hyp-medium-scale | 81178f9c020cccb1200f83f788ffc1477ce5f7cb | 2021-11-04T15:30:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-hyp-medium-scale | 1 | null | transformers | 30,543 | Entry not found |
yxchar/tlm-imdb-small-scale | 049daff62c091f89a078d9cdc21a9db9346a25f1 | 2021-11-04T09:34:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-imdb-small-scale | 1 | null | transformers | 30,544 | Entry not found |
yxchar/tlm-rct-20k-small-scale | 98185626a895a2ba88528db1bea4c649978fb9d6 | 2021-11-04T17:13:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-rct-20k-small-scale | 1 | null | transformers | 30,545 | Entry not found |
yxchar/tlm-sciie-large-scale | 7ccffcf6cab1d841085e557fd28743f1921dc828 | 2021-11-04T16:27:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-sciie-large-scale | 1 | null | transformers | 30,546 | Entry not found |
yzhou992/NetMind-20211104-781 | 978e11f554ed79c1a32c0e51493eed683a635c49 | 2021-11-04T08:38:46.000Z | [
"pytorch",
"albert",
"pretraining",
"transformers"
] | null | false | yzhou992 | null | yzhou992/NetMind-20211104-781 | 1 | null | transformers | 30,547 | Entry not found |
yzhou992/test_model | d1edfd479df2c603796eb99d3d24283af613271d | 2021-11-02T08:45:27.000Z | [
"pytorch",
"albert",
"pretraining",
"transformers"
] | null | false | yzhou992 | null | yzhou992/test_model | 1 | null | transformers | 30,548 | Entry not found |
zgotter/test | 9b1166130e23653ba0650e10a3bf97913d284e01 | 2021-09-28T06:48:42.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | zgotter | null | zgotter/test | 1 | null | transformers | 30,549 | Entry not found |
zhaoyang/BertFinetuning | 20d1671bc6b0e1f82b0f5d2b97623fbcc933ebaa | 2021-12-06T08:23:02.000Z | [
"pytorch",
"tensorboard",
"en",
"dataset:glue",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | null | false | zhaoyang | null | zhaoyang/BertFinetuning | 1 | null | null | 30,550 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert_finetunning
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8259803921568627
- name: F1
type: f1
value: 0.8786324786324787
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_finetunning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4018
- Accuracy: 0.8260
- F1: 0.8786
- Combined Score: 0.8523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0
- Datasets 1.16.1
- Tokenizers 0.10.3
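The "Combined Score" reported above appears to be the unweighted mean of accuracy and F1, a common aggregate for GLUE MRPC; this is an assumption inferred from the numbers, not stated in the card. A minimal sketch checking the arithmetic:

```python
# Reported evaluation metrics from the card above.
accuracy = 0.8260
f1 = 0.8786

# Assumed aggregate: simple average of accuracy and F1.
combined = (accuracy + f1) / 2
print(round(combined, 4))  # 0.8523, matching the reported Combined Score
```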
|
zharry29/goal_benchmark_roberta | 9f5836a352ba46649e4ef56c7b2903242a1974a0 | 2021-05-20T23:25:11.000Z | [
"pytorch",
"jax",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/goal_benchmark_roberta | 1 | null | transformers | 30,551 | Entry not found |
zharry29/intent_fb-en_id_xlmr | eac20f22c77590d63f14b9550ab1c7714c3bbaf0 | 2021-05-20T23:30:29.000Z | [
"pytorch",
"jax",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_fb-en_id_xlmr | 1 | null | transformers | 30,552 | Entry not found |
zharry29/intent_fb-es_id | 9e0edf6da5bbb1b1cd54ffc14f04f5915ad968a3 | 2020-09-16T20:14:32.000Z | [
"pytorch",
"xlm-roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_fb-es_id | 1 | null | transformers | 30,553 | Entry not found |
zharry29/intent_sgd_wh_id | e567c90d01f1a71f1e327bc87145917c5de18d6b | 2021-05-20T23:38:40.000Z | [
"pytorch",
"jax",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_sgd_wh_id | 1 | null | transformers | 30,554 | Entry not found |
zharry29/intent_thwh | 385d612d9d712d369c7fbc53f0631a1b74e5a995 | 2020-09-16T20:44:55.000Z | [
"pytorch",
"xlm-roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_thwh | 1 | null | transformers | 30,555 | Entry not found |
zharry29/order_benchmark_gpt | a2fbc6b71e5a8aaa14f8ec2e29ed0f6ad75ba73c | 2021-05-23T14:09:14.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | zharry29 | null | zharry29/order_benchmark_gpt | 1 | null | transformers | 30,556 | Entry not found |
zharry29/order_benchmark_roberta | bebaea5c5ba2784ccc6127f99c199288a3fcc5ea | 2021-05-20T23:51:12.000Z | [
"pytorch",
"jax",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/order_benchmark_roberta | 1 | null | transformers | 30,557 | Entry not found |
zhichao158/wav2vec2-xls-r-common_voice-tr-ft | 3056c66e43b286734b51ea5a74980cc23daf9a4e | 2022-01-14T07:03:32.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | zhichao158 | null | zhichao158/wav2vec2-xls-r-common_voice-tr-ft | 1 | null | transformers | 30,558 | ---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3736
- Wer: 0.2930
- Cer: 0.0708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.5462 | 13.51 | 500 | 0.4423 | 0.4807 | 0.1188 |
| 0.342 | 27.03 | 1000 | 0.3781 | 0.3954 | 0.0967 |
| 0.2272 | 40.54 | 1500 | 0.3816 | 0.3595 | 0.0893 |
| 0.1805 | 54.05 | 2000 | 0.3943 | 0.3487 | 0.0854 |
| 0.1318 | 67.57 | 2500 | 0.3818 | 0.3262 | 0.0801 |
| 0.1213 | 81.08 | 3000 | 0.3777 | 0.3113 | 0.0758 |
| 0.0639 | 94.59 | 3500 | 0.3788 | 0.2953 | 0.0716 |
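The Wer column above is word error rate: the word-level edit distance between reference and hypothesis transcripts, divided by the reference length. The snippet below is an illustrative, self-contained sketch of that computation (the actual scorer used during training is not specified in the card; evaluation pipelines often use a library such as `jiwer` instead):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over token lists.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1]

def wer(reference, hypothesis):
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

# One substituted word out of three -> WER of 1/3 (toy Turkish example).
print(wer("merhaba dünya nasılsın", "merhaba dünya nasıl"))
```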
### Framework versions
- Transformers 4.14.1
- Pytorch 1.8.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
zhizihuabai/ai12nlp | 2db9febf0fe86e77e25dfaaf6fe1a0bbf58a3554 | 2022-02-11T03:13:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhizihuabai | null | zhizihuabai/ai12nlp | 1 | null | transformers | 30,559 | Entry not found |
zhizihuabai/ai12one | 00bcca8573fbea10203537874672fa64b5eb56fa | 2022-02-11T10:18:28.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhizihuabai | null | zhizihuabai/ai12one | 1 | null | transformers | 30,560 | Entry not found |
zhizihuabai/ai12two | 99ba69ce3e306f0cbc4a405a3788081a75ba1c75 | 2022-02-12T03:04:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhizihuabai | null | zhizihuabai/ai12two | 1 | null | transformers | 30,561 | Entry not found |
zhuqing/RoBERTa-large-uncased-exp2-parent | 902edf36b08019260eab8c6ee6265b27ae153b9e | 2021-08-28T16:28:58.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/RoBERTa-large-uncased-exp2-parent | 1 | null | transformers | 30,562 | Entry not found |
zhuqing/bert-base-uncased-mumsnet-first-no859-1 | 2d5e43d4c01f60307af2bb9509089629e7a99c0f | 2021-08-10T03:19:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-mumsnet-first-no859-1 | 1 | null | transformers | 30,563 | Entry not found |
zhuqing/bert-base-uncased-mumsnet-first-no859-2 | 4a287f0d5fedbf9a2df42036957bcdb74ea3909d | 2021-08-10T03:27:05.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-mumsnet-first-no859-2 | 1 | null | transformers | 30,564 | Entry not found |
zhuqing/bert-base-uncased-netmums-parent-v2 | fa00a29f754f6b0b2a243efe19e31407cf52d71c | 2021-08-15T04:45:13.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-netmums-parent-v2 | 1 | null | transformers | 30,565 | Entry not found |
zhuqing/bert-base-uncased-reddit-lib-v2 | 0856105f6bc2ecb47354d9a106427aa39e659170 | 2021-08-03T06:36:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-reddit-lib-v2 | 1 | null | transformers | 30,566 | Entry not found |
zhuqing/bert-base-uncased-theme1-6000 | edb3a6ac6107896e73378dff7976e8ba441d69ed | 2021-07-31T17:19:02.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-theme1-6000 | 1 | null | transformers | 30,567 | Entry not found |
zhuqing/bert-base-uncased-theme1 | 0f603f88e5521690daf65a6650fbe746a76fc332 | 2021-07-17T08:56:55.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-theme1 | 1 | null | transformers | 30,568 | Entry not found |
zhuqing/bert-base-uncased-theme2-6000 | f65b4066c66d2e3ef92c39bfc33d0fbc6f309e3f | 2021-07-31T17:25:18.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-base-uncased-theme2-6000 | 1 | null | transformers | 30,569 | Entry not found |
zhuqing/distilbert-uncased-exp2-parent | 9dd0bebcd5b416bbed37a081c3ed2c5ffa37ec75 | 2021-08-29T07:07:38.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/distilbert-uncased-exp2-parent | 1 | null | transformers | 30,570 | Entry not found |
zhuqing/distilroberta-base-theme1-6000 | acc101e842bee9fe3627fefb072b1d0f01aa8c55 | 2021-07-31T16:21:20.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/distilroberta-base-theme1-6000 | 1 | null | transformers | 30,571 | Entry not found |
zhuqing/roberta-base-uncased-all-intersection | d7bd2f81d2972a7ad50fa9f8895908758b447b56 | 2021-08-23T13:10:19.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/roberta-base-uncased-all-intersection | 1 | null | transformers | 30,572 | Entry not found |
ziqingyang/XLMRobertaBaseForXNLI-en | b7436f5e3b36095e1b6e2259c58203ebfd2996e6 | 2022-01-26T02:03:42.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | ziqingyang | null | ziqingyang/XLMRobertaBaseForXNLI-en | 1 | null | transformers | 30,573 | ---
license: apache-2.0
---
|
zzecf/AI12 | 584eb753003bfc64fd9457315e7760cf196bb898 | 2022-02-10T12:39:01.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zzecf | null | zzecf/AI12 | 1 | null | transformers | 30,574 | Entry not found |
wietsedv/xlm-roberta-base-ft-udpos28-eu | b937fd1ee2d9470b5475882bc4ee982c32178b7c | 2022-02-25T09:58:23.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"eu",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-eu | 1 | null | transformers | 30,575 |
---
language:
- eu
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-eu
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 65.8
- type: accuracy
name: Dutch Test accuracy
value: 63.5
- type: accuracy
name: German Test accuracy
value: 66.3
- type: accuracy
name: Italian Test accuracy
value: 65.5
- type: accuracy
name: French Test accuracy
value: 61.2
- type: accuracy
name: Spanish Test accuracy
value: 62.0
- type: accuracy
name: Russian Test accuracy
value: 74.9
- type: accuracy
name: Swedish Test accuracy
value: 66.6
- type: accuracy
name: Norwegian Test accuracy
value: 61.8
- type: accuracy
name: Danish Test accuracy
value: 66.5
- type: accuracy
name: Low Saxon Test accuracy
value: 48.3
- type: accuracy
name: Akkadian Test accuracy
value: 40.9
- type: accuracy
name: Armenian Test accuracy
value: 80.8
- type: accuracy
name: Welsh Test accuracy
value: 53.5
- type: accuracy
name: Old East Slavic Test accuracy
value: 65.1
- type: accuracy
name: Albanian Test accuracy
value: 66.9
- type: accuracy
name: Slovenian Test accuracy
value: 67.3
- type: accuracy
name: Guajajara Test accuracy
value: 32.0
- type: accuracy
name: Kurmanji Test accuracy
value: 66.2
- type: accuracy
name: Turkish Test accuracy
value: 75.7
- type: accuracy
name: Finnish Test accuracy
value: 79.2
- type: accuracy
name: Indonesian Test accuracy
value: 71.5
- type: accuracy
name: Ukrainian Test accuracy
value: 74.6
- type: accuracy
name: Polish Test accuracy
value: 73.8
- type: accuracy
name: Portuguese Test accuracy
value: 69.5
- type: accuracy
name: Kazakh Test accuracy
value: 84.0
- type: accuracy
name: Latin Test accuracy
value: 68.1
- type: accuracy
name: Old French Test accuracy
value: 45.0
- type: accuracy
name: Buryat Test accuracy
value: 66.6
- type: accuracy
name: Kaapor Test accuracy
value: 27.9
- type: accuracy
name: Korean Test accuracy
value: 65.4
- type: accuracy
name: Estonian Test accuracy
value: 79.4
- type: accuracy
name: Croatian Test accuracy
value: 74.6
- type: accuracy
name: Gothic Test accuracy
value: 30.8
- type: accuracy
name: Swiss German Test accuracy
value: 41.3
- type: accuracy
name: Assyrian Test accuracy
value: 15.9
- type: accuracy
name: North Sami Test accuracy
value: 41.9
- type: accuracy
name: Naija Test accuracy
value: 37.4
- type: accuracy
name: Latvian Test accuracy
value: 79.8
- type: accuracy
name: Chinese Test accuracy
value: 46.9
- type: accuracy
name: Tagalog Test accuracy
value: 56.6
- type: accuracy
name: Bambara Test accuracy
value: 29.8
- type: accuracy
name: Lithuanian Test accuracy
value: 80.9
- type: accuracy
name: Galician Test accuracy
value: 68.7
- type: accuracy
name: Vietnamese Test accuracy
value: 63.8
- type: accuracy
name: Greek Test accuracy
value: 65.3
- type: accuracy
name: Catalan Test accuracy
value: 58.0
- type: accuracy
name: Czech Test accuracy
value: 74.0
- type: accuracy
name: Erzya Test accuracy
value: 49.4
- type: accuracy
name: Bhojpuri Test accuracy
value: 53.4
- type: accuracy
name: Thai Test accuracy
value: 53.1
- type: accuracy
name: Marathi Test accuracy
value: 78.5
- type: accuracy
name: Basque Test accuracy
value: 95.7
- type: accuracy
name: Slovak Test accuracy
value: 75.9
- type: accuracy
name: Kiche Test accuracy
value: 35.3
- type: accuracy
name: Yoruba Test accuracy
value: 28.4
- type: accuracy
name: Warlpiri Test accuracy
value: 43.3
- type: accuracy
name: Tamil Test accuracy
value: 86.5
- type: accuracy
name: Maltese Test accuracy
value: 35.5
- type: accuracy
name: Ancient Greek Test accuracy
value: 59.2
- type: accuracy
name: Icelandic Test accuracy
value: 65.2
- type: accuracy
name: Mbya Guarani Test accuracy
value: 35.4
- type: accuracy
name: Urdu Test accuracy
value: 64.4
- type: accuracy
name: Romanian Test accuracy
value: 68.9
- type: accuracy
name: Persian Test accuracy
value: 63.9
- type: accuracy
name: Apurina Test accuracy
value: 39.4
- type: accuracy
name: Japanese Test accuracy
value: 39.2
- type: accuracy
name: Hungarian Test accuracy
value: 69.6
- type: accuracy
name: Hindi Test accuracy
value: 68.7
- type: accuracy
name: Classical Chinese Test accuracy
value: 27.9
- type: accuracy
name: Komi Permyak Test accuracy
value: 52.0
- type: accuracy
name: Faroese Test accuracy
value: 62.5
- type: accuracy
name: Sanskrit Test accuracy
value: 40.8
- type: accuracy
name: Livvi Test accuracy
value: 65.8
- type: accuracy
name: Arabic Test accuracy
value: 63.5
- type: accuracy
name: Wolof Test accuracy
value: 37.6
- type: accuracy
name: Bulgarian Test accuracy
value: 68.8
- type: accuracy
name: Akuntsu Test accuracy
value: 41.1
- type: accuracy
name: Makurap Test accuracy
value: 24.0
- type: accuracy
name: Kangri Test accuracy
value: 54.3
- type: accuracy
name: Breton Test accuracy
value: 52.9
- type: accuracy
name: Telugu Test accuracy
value: 82.4
- type: accuracy
name: Cantonese Test accuracy
value: 49.0
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 46.7
- type: accuracy
name: Karelian Test accuracy
value: 71.1
- type: accuracy
name: Upper Sorbian Test accuracy
value: 65.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 61.3
- type: accuracy
name: Komi Zyrian Test accuracy
value: 47.2
- type: accuracy
name: Irish Test accuracy
value: 53.7
- type: accuracy
name: Nayini Test accuracy
value: 41.0
- type: accuracy
name: Munduruku Test accuracy
value: 26.4
- type: accuracy
name: Manx Test accuracy
value: 33.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 45.5
- type: accuracy
name: Afrikaans Test accuracy
value: 61.2
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 44.8
- type: accuracy
name: Belarusian Test accuracy
value: 74.6
- type: accuracy
name: Serbian Test accuracy
value: 74.5
- type: accuracy
name: Moksha Test accuracy
value: 46.1
- type: accuracy
name: Western Armenian Test accuracy
value: 77.4
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 48.8
- type: accuracy
name: Khunsari Test accuracy
value: 39.2
- type: accuracy
name: Hebrew Test accuracy
value: 80.2
- type: accuracy
name: Uyghur Test accuracy
value: 75.3
- type: accuracy
name: Chukchi Test accuracy
value: 41.2
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Basque
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-eu")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-eu")
```
|
wietsedv/xlm-roberta-base-ft-udpos28-fo | bf32dcefc6c49a30d6b9b14f4d0ca2f3cadbfb88 | 2022-02-25T09:58:28.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"fo",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-fo | 1 | null | transformers | 30,576 |
---
language:
- fo
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-fo
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 86.4
- type: accuracy
name: Dutch Test accuracy
value: 83.2
- type: accuracy
name: German Test accuracy
value: 83.2
- type: accuracy
name: Italian Test accuracy
value: 83.2
- type: accuracy
name: French Test accuracy
value: 80.6
- type: accuracy
name: Spanish Test accuracy
value: 83.4
- type: accuracy
name: Russian Test accuracy
value: 83.6
- type: accuracy
name: Swedish Test accuracy
value: 87.3
- type: accuracy
name: Norwegian Test accuracy
value: 83.9
- type: accuracy
name: Danish Test accuracy
value: 87.5
- type: accuracy
name: Low Saxon Test accuracy
value: 58.9
- type: accuracy
name: Akkadian Test accuracy
value: 32.9
- type: accuracy
name: Armenian Test accuracy
value: 81.2
- type: accuracy
name: Welsh Test accuracy
value: 66.8
- type: accuracy
name: Old East Slavic Test accuracy
value: 75.4
- type: accuracy
name: Albanian Test accuracy
value: 72.5
- type: accuracy
name: Slovenian Test accuracy
value: 74.9
- type: accuracy
name: Guajajara Test accuracy
value: 34.2
- type: accuracy
name: Kurmanji Test accuracy
value: 72.8
- type: accuracy
name: Turkish Test accuracy
value: 74.0
- type: accuracy
name: Finnish Test accuracy
value: 81.9
- type: accuracy
name: Indonesian Test accuracy
value: 79.8
- type: accuracy
name: Ukrainian Test accuracy
value: 82.0
- type: accuracy
name: Polish Test accuracy
value: 82.1
- type: accuracy
name: Portuguese Test accuracy
value: 84.3
- type: accuracy
name: Kazakh Test accuracy
value: 78.3
- type: accuracy
name: Latin Test accuracy
value: 75.4
- type: accuracy
name: Old French Test accuracy
value: 63.5
- type: accuracy
name: Buryat Test accuracy
value: 60.8
- type: accuracy
name: Kaapor Test accuracy
value: 28.8
- type: accuracy
name: Korean Test accuracy
value: 61.5
- type: accuracy
name: Estonian Test accuracy
value: 83.9
- type: accuracy
name: Croatian Test accuracy
value: 82.2
- type: accuracy
name: Gothic Test accuracy
value: 34.2
- type: accuracy
name: Swiss German Test accuracy
value: 51.9
- type: accuracy
name: Assyrian Test accuracy
value: 21.6
- type: accuracy
name: North Sami Test accuracy
value: 46.5
- type: accuracy
name: Naija Test accuracy
value: 44.0
- type: accuracy
name: Latvian Test accuracy
value: 83.2
- type: accuracy
name: Chinese Test accuracy
value: 44.9
- type: accuracy
name: Tagalog Test accuracy
value: 76.1
- type: accuracy
name: Bambara Test accuracy
value: 30.5
- type: accuracy
name: Lithuanian Test accuracy
value: 83.2
- type: accuracy
name: Galician Test accuracy
value: 79.1
- type: accuracy
name: Vietnamese Test accuracy
value: 63.0
- type: accuracy
name: Greek Test accuracy
value: 77.4
- type: accuracy
name: Catalan Test accuracy
value: 81.4
- type: accuracy
name: Czech Test accuracy
value: 81.0
- type: accuracy
name: Erzya Test accuracy
value: 50.8
- type: accuracy
name: Bhojpuri Test accuracy
value: 54.9
- type: accuracy
name: Thai Test accuracy
value: 60.7
- type: accuracy
name: Marathi Test accuracy
value: 81.0
- type: accuracy
name: Basque Test accuracy
value: 75.4
- type: accuracy
name: Slovak Test accuracy
value: 81.3
- type: accuracy
name: Kiche Test accuracy
value: 37.5
- type: accuracy
name: Yoruba Test accuracy
value: 33.7
- type: accuracy
name: Warlpiri Test accuracy
value: 41.3
- type: accuracy
name: Tamil Test accuracy
value: 75.2
- type: accuracy
name: Maltese Test accuracy
value: 32.9
- type: accuracy
name: Ancient Greek Test accuracy
value: 64.4
- type: accuracy
name: Icelandic Test accuracy
value: 86.5
- type: accuracy
name: Mbya Guarani Test accuracy
value: 32.7
- type: accuracy
name: Urdu Test accuracy
value: 69.2
- type: accuracy
name: Romanian Test accuracy
value: 80.3
- type: accuracy
name: Persian Test accuracy
value: 75.2
- type: accuracy
name: Apurina Test accuracy
value: 47.1
- type: accuracy
name: Japanese Test accuracy
value: 37.5
- type: accuracy
name: Hungarian Test accuracy
value: 73.6
- type: accuracy
name: Hindi Test accuracy
value: 70.7
- type: accuracy
name: Classical Chinese Test accuracy
value: 29.1
- type: accuracy
name: Komi Permyak Test accuracy
value: 54.2
- type: accuracy
name: Faroese Test accuracy
value: 91.4
- type: accuracy
name: Sanskrit Test accuracy
value: 35.1
- type: accuracy
name: Livvi Test accuracy
value: 65.6
- type: accuracy
name: Arabic Test accuracy
value: 73.9
- type: accuracy
name: Wolof Test accuracy
value: 36.7
- type: accuracy
name: Bulgarian Test accuracy
value: 85.2
- type: accuracy
name: Akuntsu Test accuracy
value: 24.9
- type: accuracy
name: Makurap Test accuracy
value: 20.5
- type: accuracy
name: Kangri Test accuracy
value: 50.0
- type: accuracy
name: Breton Test accuracy
value: 64.4
- type: accuracy
name: Telugu Test accuracy
value: 82.8
- type: accuracy
name: Cantonese Test accuracy
value: 50.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 56.5
- type: accuracy
name: Karelian Test accuracy
value: 70.2
- type: accuracy
name: Upper Sorbian Test accuracy
value: 72.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 69.3
- type: accuracy
name: Komi Zyrian Test accuracy
value: 46.2
- type: accuracy
name: Irish Test accuracy
value: 63.1
- type: accuracy
name: Nayini Test accuracy
value: 47.4
- type: accuracy
name: Munduruku Test accuracy
value: 20.9
- type: accuracy
name: Manx Test accuracy
value: 40.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 42.6
- type: accuracy
name: Afrikaans Test accuracy
value: 84.3
- type: accuracy
name: Old Turkish Test accuracy
value: 38.0
- type: accuracy
name: Tupinamba Test accuracy
value: 40.9
- type: accuracy
name: Belarusian Test accuracy
value: 82.1
- type: accuracy
name: Serbian Test accuracy
value: 82.3
- type: accuracy
name: Moksha Test accuracy
value: 48.5
- type: accuracy
name: Western Armenian Test accuracy
value: 80.0
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 59.4
- type: accuracy
name: Khunsari Test accuracy
value: 44.6
- type: accuracy
name: Hebrew Test accuracy
value: 80.2
- type: accuracy
name: Uyghur Test accuracy
value: 72.8
- type: accuracy
name: Chukchi Test accuracy
value: 41.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Faroese
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fo")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fo")
```
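Once loaded, the model is easiest to use through the `token-classification` pipeline. The helper and sample values below are an illustrative sketch, not part of the original model card; the pipeline call is shown commented out because it downloads the full model weights.

```python
# Assumes the `transformers` token-classification pipeline output format:
# a list of dicts containing "word" and "entity_group" keys.

def to_upos_pairs(predictions):
    """Reduce pipeline output to (word, UPOS tag) tuples."""
    return [(p["word"], p["entity_group"]) for p in predictions]

# Illustrative use with this card's model (downloads weights on first run):
# from transformers import pipeline
# tagger = pipeline("token-classification",
#                   model="wietsedv/xlm-roberta-base-ft-udpos28-fo",
#                   aggregation_strategy="simple")
# print(to_upos_pairs(tagger("Hetta er ein setningur.")))

if __name__ == "__main__":
    sample = [{"word": "Hetta", "entity_group": "PRON", "score": 0.99}]
    print(to_upos_pairs(sample))  # [('Hetta', 'PRON')]
```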
|
wietsedv/xlm-roberta-base-ft-udpos28-gd | dc305bc9674ce76729e04825fa76475785c38082 | 2022-02-25T09:58:34.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"gd",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-gd | 1 | null | transformers | 30,577 |
---
language:
- gd
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-gd
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 75.0
- type: accuracy
name: Dutch Test accuracy
value: 77.8
- type: accuracy
name: German Test accuracy
value: 76.5
- type: accuracy
name: Italian Test accuracy
value: 70.8
- type: accuracy
name: French Test accuracy
value: 74.6
- type: accuracy
name: Spanish Test accuracy
value: 78.7
- type: accuracy
name: Russian Test accuracy
value: 79.2
- type: accuracy
name: Swedish Test accuracy
value: 78.9
- type: accuracy
name: Norwegian Test accuracy
value: 72.7
- type: accuracy
name: Danish Test accuracy
value: 78.0
- type: accuracy
name: Low Saxon Test accuracy
value: 51.0
- type: accuracy
name: Akkadian Test accuracy
value: 47.0
- type: accuracy
name: Armenian Test accuracy
value: 69.2
- type: accuracy
name: Welsh Test accuracy
value: 77.0
- type: accuracy
name: Old East Slavic Test accuracy
value: 70.1
- type: accuracy
name: Albanian Test accuracy
value: 76.1
- type: accuracy
name: Slovenian Test accuracy
value: 64.3
- type: accuracy
name: Guajajara Test accuracy
value: 42.6
- type: accuracy
name: Kurmanji Test accuracy
value: 73.6
- type: accuracy
name: Turkish Test accuracy
value: 71.7
- type: accuracy
name: Finnish Test accuracy
value: 74.4
- type: accuracy
name: Indonesian Test accuracy
value: 74.2
- type: accuracy
name: Ukrainian Test accuracy
value: 78.7
- type: accuracy
name: Polish Test accuracy
value: 81.4
- type: accuracy
name: Portuguese Test accuracy
value: 77.9
- type: accuracy
name: Kazakh Test accuracy
value: 73.3
- type: accuracy
name: Latin Test accuracy
value: 68.8
- type: accuracy
name: Old French Test accuracy
value: 48.7
- type: accuracy
name: Buryat Test accuracy
value: 58.4
- type: accuracy
name: Kaapor Test accuracy
value: 24.6
- type: accuracy
name: Korean Test accuracy
value: 58.9
- type: accuracy
name: Estonian Test accuracy
value: 76.8
- type: accuracy
name: Croatian Test accuracy
value: 74.0
- type: accuracy
name: Gothic Test accuracy
value: 29.4
- type: accuracy
name: Swiss German Test accuracy
value: 48.3
- type: accuracy
name: Assyrian Test accuracy
value: 20.1
- type: accuracy
name: North Sami Test accuracy
value: 44.3
- type: accuracy
name: Naija Test accuracy
value: 40.4
- type: accuracy
name: Latvian Test accuracy
value: 76.7
- type: accuracy
name: Chinese Test accuracy
value: 51.6
- type: accuracy
name: Tagalog Test accuracy
value: 68.3
- type: accuracy
name: Bambara Test accuracy
value: 30.3
- type: accuracy
name: Lithuanian Test accuracy
value: 77.2
- type: accuracy
name: Galician Test accuracy
value: 77.6
- type: accuracy
name: Vietnamese Test accuracy
value: 56.5
- type: accuracy
name: Greek Test accuracy
value: 79.1
- type: accuracy
name: Catalan Test accuracy
value: 74.5
- type: accuracy
name: Czech Test accuracy
value: 78.7
- type: accuracy
name: Erzya Test accuracy
value: 51.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 49.4
- type: accuracy
name: Thai Test accuracy
value: 57.1
- type: accuracy
name: Marathi Test accuracy
value: 72.4
- type: accuracy
name: Basque Test accuracy
value: 65.9
- type: accuracy
name: Slovak Test accuracy
value: 80.3
- type: accuracy
name: Kiche Test accuracy
value: 45.0
- type: accuracy
name: Yoruba Test accuracy
value: 32.5
- type: accuracy
name: Warlpiri Test accuracy
value: 43.7
- type: accuracy
name: Tamil Test accuracy
value: 76.7
- type: accuracy
name: Maltese Test accuracy
value: 34.9
- type: accuracy
name: Ancient Greek Test accuracy
value: 59.3
- type: accuracy
name: Icelandic Test accuracy
value: 73.1
- type: accuracy
name: Mbya Guarani Test accuracy
value: 34.5
- type: accuracy
name: Urdu Test accuracy
value: 56.0
- type: accuracy
name: Romanian Test accuracy
value: 74.4
- type: accuracy
name: Persian Test accuracy
value: 77.3
- type: accuracy
name: Apurina Test accuracy
value: 48.4
- type: accuracy
name: Japanese Test accuracy
value: 38.6
- type: accuracy
name: Hungarian Test accuracy
value: 78.5
- type: accuracy
name: Hindi Test accuracy
value: 60.5
- type: accuracy
name: Classical Chinese Test accuracy
value: 31.6
- type: accuracy
name: Komi Permyak Test accuracy
value: 50.4
- type: accuracy
name: Faroese Test accuracy
value: 71.2
- type: accuracy
name: Sanskrit Test accuracy
value: 33.5
- type: accuracy
name: Livvi Test accuracy
value: 61.6
- type: accuracy
name: Arabic Test accuracy
value: 81.6
- type: accuracy
name: Wolof Test accuracy
value: 38.1
- type: accuracy
name: Bulgarian Test accuracy
value: 76.6
- type: accuracy
name: Akuntsu Test accuracy
value: 39.8
- type: accuracy
name: Makurap Test accuracy
value: 23.3
- type: accuracy
name: Kangri Test accuracy
value: 44.0
- type: accuracy
name: Breton Test accuracy
value: 60.9
- type: accuracy
name: Telugu Test accuracy
value: 74.5
- type: accuracy
name: Cantonese Test accuracy
value: 48.9
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 47.7
- type: accuracy
name: Karelian Test accuracy
value: 65.4
- type: accuracy
name: Upper Sorbian Test accuracy
value: 70.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 68.4
- type: accuracy
name: Komi Zyrian Test accuracy
value: 45.0
- type: accuracy
name: Irish Test accuracy
value: 76.6
- type: accuracy
name: Nayini Test accuracy
value: 44.9
- type: accuracy
name: Munduruku Test accuracy
value: 34.0
- type: accuracy
name: Manx Test accuracy
value: 52.0
- type: accuracy
name: Skolt Sami Test accuracy
value: 39.7
- type: accuracy
name: Afrikaans Test accuracy
value: 74.0
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 48.1
- type: accuracy
name: Belarusian Test accuracy
value: 79.7
- type: accuracy
name: Serbian Test accuracy
value: 72.7
- type: accuracy
name: Moksha Test accuracy
value: 49.3
- type: accuracy
name: Western Armenian Test accuracy
value: 68.1
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 93.3
- type: accuracy
name: Khunsari Test accuracy
value: 44.6
- type: accuracy
name: Hebrew Test accuracy
value: 86.5
- type: accuracy
name: Uyghur Test accuracy
value: 67.5
- type: accuracy
name: Chukchi Test accuracy
value: 38.8
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Scottish Gaelic
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-gd")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-gd")
```
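For inference, the loaded model can be wrapped in the `token-classification` pipeline and its output reduced to (word, tag) pairs. The helper and the Scottish Gaelic sample below are illustrative assumptions, not from the model card; the pipeline call is commented out since it downloads the full model weights.

```python
# Assumes the `transformers` token-classification pipeline output format:
# a list of dicts with "word" and "entity_group" keys.

def to_upos_pairs(predictions):
    """Reduce pipeline output to (word, UPOS tag) tuples."""
    return [(p["word"], p["entity_group"]) for p in predictions]

# Illustrative use with this card's model (downloads weights on first run):
# from transformers import pipeline
# tagger = pipeline("token-classification",
#                   model="wietsedv/xlm-roberta-base-ft-udpos28-gd",
#                   aggregation_strategy="simple")
# print(to_upos_pairs(tagger("Tha an cat beag.")))

if __name__ == "__main__":
    sample = [{"word": "Tha", "entity_group": "VERB", "score": 0.98}]
    print(to_upos_pairs(sample))  # [('Tha', 'VERB')]
```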
|
wietsedv/xlm-roberta-base-ft-udpos28-got | b16275a5c0341834f0748a0b6f4703dd9127c6f9 | 2022-02-25T09:58:37.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"got",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-got | 1 | null | transformers | 30,578 |
---
language:
- got
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-got
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 47.9
- type: accuracy
name: Dutch Test accuracy
value: 50.2
- type: accuracy
name: German Test accuracy
value: 38.9
- type: accuracy
name: Italian Test accuracy
value: 46.8
- type: accuracy
name: French Test accuracy
value: 50.2
- type: accuracy
name: Spanish Test accuracy
value: 51.3
- type: accuracy
name: Russian Test accuracy
value: 52.4
- type: accuracy
name: Swedish Test accuracy
value: 51.5
- type: accuracy
name: Norwegian Test accuracy
value: 49.1
- type: accuracy
name: Danish Test accuracy
value: 50.8
- type: accuracy
name: Low Saxon Test accuracy
value: 32.8
- type: accuracy
name: Akkadian Test accuracy
value: 43.8
- type: accuracy
name: Armenian Test accuracy
value: 50.4
- type: accuracy
name: Welsh Test accuracy
value: 41.1
- type: accuracy
name: Old East Slavic Test accuracy
value: 53.9
- type: accuracy
name: Albanian Test accuracy
value: 49.0
- type: accuracy
name: Slovenian Test accuracy
value: 45.3
- type: accuracy
name: Guajajara Test accuracy
value: 23.8
- type: accuracy
name: Kurmanji Test accuracy
value: 49.3
- type: accuracy
name: Turkish Test accuracy
value: 46.6
- type: accuracy
name: Finnish Test accuracy
value: 51.2
- type: accuracy
name: Indonesian Test accuracy
value: 55.4
- type: accuracy
name: Ukrainian Test accuracy
value: 50.0
- type: accuracy
name: Polish Test accuracy
value: 52.4
- type: accuracy
name: Portuguese Test accuracy
value: 50.4
- type: accuracy
name: Kazakh Test accuracy
value: 46.5
- type: accuracy
name: Latin Test accuracy
value: 49.1
- type: accuracy
name: Old French Test accuracy
value: 47.6
- type: accuracy
name: Buryat Test accuracy
value: 37.4
- type: accuracy
name: Kaapor Test accuracy
value: 33.8
- type: accuracy
name: Korean Test accuracy
value: 41.5
- type: accuracy
name: Estonian Test accuracy
value: 49.5
- type: accuracy
name: Croatian Test accuracy
value: 57.2
- type: accuracy
name: Gothic Test accuracy
value: 93.6
- type: accuracy
name: Swiss German Test accuracy
value: 25.1
- type: accuracy
name: Assyrian Test accuracy
value: 4.0
- type: accuracy
name: North Sami Test accuracy
value: 27.9
- type: accuracy
name: Naija Test accuracy
value: 29.2
- type: accuracy
name: Latvian Test accuracy
value: 51.5
- type: accuracy
name: Chinese Test accuracy
value: 16.4
- type: accuracy
name: Tagalog Test accuracy
value: 42.0
- type: accuracy
name: Bambara Test accuracy
value: 13.1
- type: accuracy
name: Lithuanian Test accuracy
value: 50.5
- type: accuracy
name: Galician Test accuracy
value: 49.2
- type: accuracy
name: Vietnamese Test accuracy
value: 47.1
- type: accuracy
name: Greek Test accuracy
value: 42.0
- type: accuracy
name: Catalan Test accuracy
value: 50.1
- type: accuracy
name: Czech Test accuracy
value: 54.3
- type: accuracy
name: Erzya Test accuracy
value: 22.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 38.8
- type: accuracy
name: Thai Test accuracy
value: 34.7
- type: accuracy
name: Marathi Test accuracy
value: 35.0
- type: accuracy
name: Basque Test accuracy
value: 45.9
- type: accuracy
name: Slovak Test accuracy
value: 55.3
- type: accuracy
name: Kiche Test accuracy
value: 23.3
- type: accuracy
name: Yoruba Test accuracy
value: 15.0
- type: accuracy
name: Warlpiri Test accuracy
value: 23.5
- type: accuracy
name: Tamil Test accuracy
value: 41.1
- type: accuracy
name: Maltese Test accuracy
value: 21.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 50.9
- type: accuracy
name: Icelandic Test accuracy
value: 50.3
- type: accuracy
name: Mbya Guarani Test accuracy
value: 14.8
- type: accuracy
name: Urdu Test accuracy
value: 41.4
- type: accuracy
name: Romanian Test accuracy
value: 50.1
- type: accuracy
name: Persian Test accuracy
value: 53.1
- type: accuracy
name: Apurina Test accuracy
value: 20.8
- type: accuracy
name: Japanese Test accuracy
value: 16.3
- type: accuracy
name: Hungarian Test accuracy
value: 42.3
- type: accuracy
name: Hindi Test accuracy
value: 45.2
- type: accuracy
name: Classical Chinese Test accuracy
value: 19.6
- type: accuracy
name: Komi Permyak Test accuracy
value: 23.4
- type: accuracy
name: Faroese Test accuracy
value: 48.9
- type: accuracy
name: Sanskrit Test accuracy
value: 32.4
- type: accuracy
name: Livvi Test accuracy
value: 38.5
- type: accuracy
name: Arabic Test accuracy
value: 49.6
- type: accuracy
name: Wolof Test accuracy
value: 28.4
- type: accuracy
name: Bulgarian Test accuracy
value: 55.6
- type: accuracy
name: Akuntsu Test accuracy
value: 25.2
- type: accuracy
name: Makurap Test accuracy
value: 18.5
- type: accuracy
name: Kangri Test accuracy
value: 34.2
- type: accuracy
name: Breton Test accuracy
value: 36.7
- type: accuracy
name: Telugu Test accuracy
value: 38.8
- type: accuracy
name: Cantonese Test accuracy
value: 17.1
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 50.2
- type: accuracy
name: Karelian Test accuracy
value: 41.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 42.7
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 38.9
- type: accuracy
name: Komi Zyrian Test accuracy
value: 21.1
- type: accuracy
name: Irish Test accuracy
value: 37.2
- type: accuracy
name: Nayini Test accuracy
value: 33.3
- type: accuracy
name: Munduruku Test accuracy
value: 26.6
- type: accuracy
name: Manx Test accuracy
value: 17.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 19.9
- type: accuracy
name: Afrikaans Test accuracy
value: 45.9
- type: accuracy
name: Old Turkish Test accuracy
value: 2.7
- type: accuracy
name: Tupinamba Test accuracy
value: 23.4
- type: accuracy
name: Belarusian Test accuracy
value: 53.0
- type: accuracy
name: Serbian Test accuracy
value: 57.4
- type: accuracy
name: Moksha Test accuracy
value: 24.5
- type: accuracy
name: Western Armenian Test accuracy
value: 47.2
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 36.7
- type: accuracy
name: Khunsari Test accuracy
value: 28.4
- type: accuracy
name: Hebrew Test accuracy
value: 44.8
- type: accuracy
name: Uyghur Test accuracy
value: 48.6
- type: accuracy
name: Chukchi Test accuracy
value: 21.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Gothic
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-got")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-got")
```
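The same `token-classification` pipeline pattern applies for inference; the helper and the Gothic token below are illustrative assumptions, not taken from the model card, and the pipeline call is commented out because it downloads the full model weights.

```python
# Assumes the `transformers` token-classification pipeline output format:
# a list of dicts with "word" and "entity_group" keys.

def to_upos_pairs(predictions):
    """Reduce pipeline output to (word, UPOS tag) tuples."""
    return [(p["word"], p["entity_group"]) for p in predictions]

# Illustrative use with this card's model (downloads weights on first run):
# from transformers import pipeline
# tagger = pipeline("token-classification",
#                   model="wietsedv/xlm-roberta-base-ft-udpos28-got",
#                   aggregation_strategy="simple")
# print(to_upos_pairs(tagger("jah qath im")))

if __name__ == "__main__":
    sample = [{"word": "jah", "entity_group": "CCONJ", "score": 0.97}]
    print(to_upos_pairs(sample))  # [('jah', 'CCONJ')]
```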
|
wietsedv/xlm-roberta-base-ft-udpos28-grc | 4ab6d1a5a67008b56dcf449292544f39682cbaea | 2022-02-25T09:58:39.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"grc",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-grc | 1 | null | transformers | 30,579 |
---
language:
- grc
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-grc
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 58.3
- type: accuracy
name: Dutch Test accuracy
value: 57.1
- type: accuracy
name: German Test accuracy
value: 61.3
- type: accuracy
name: Italian Test accuracy
value: 56.6
- type: accuracy
name: French Test accuracy
value: 57.3
- type: accuracy
name: Spanish Test accuracy
value: 54.5
- type: accuracy
name: Russian Test accuracy
value: 71.1
- type: accuracy
name: Swedish Test accuracy
value: 62.9
- type: accuracy
name: Norwegian Test accuracy
value: 59.9
- type: accuracy
name: Danish Test accuracy
value: 61.6
- type: accuracy
name: Low Saxon Test accuracy
value: 45.3
- type: accuracy
name: Akkadian Test accuracy
value: 38.9
- type: accuracy
name: Armenian Test accuracy
value: 69.4
- type: accuracy
name: Welsh Test accuracy
value: 57.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 68.0
- type: accuracy
name: Albanian Test accuracy
value: 63.3
- type: accuracy
name: Slovenian Test accuracy
value: 58.2
- type: accuracy
name: Guajajara Test accuracy
value: 26.5
- type: accuracy
name: Kurmanji Test accuracy
value: 62.0
- type: accuracy
name: Turkish Test accuracy
value: 66.5
- type: accuracy
name: Finnish Test accuracy
value: 70.3
- type: accuracy
name: Indonesian Test accuracy
value: 59.7
- type: accuracy
name: Ukrainian Test accuracy
value: 72.6
- type: accuracy
name: Polish Test accuracy
value: 70.3
- type: accuracy
name: Portuguese Test accuracy
value: 59.7
- type: accuracy
name: Kazakh Test accuracy
value: 71.0
- type: accuracy
name: Latin Test accuracy
value: 68.8
- type: accuracy
name: Old French Test accuracy
value: 49.4
- type: accuracy
name: Buryat Test accuracy
value: 56.4
- type: accuracy
name: Kaapor Test accuracy
value: 27.9
- type: accuracy
name: Korean Test accuracy
value: 55.5
- type: accuracy
name: Estonian Test accuracy
value: 70.0
- type: accuracy
name: Croatian Test accuracy
value: 64.8
- type: accuracy
name: Gothic Test accuracy
value: 33.9
- type: accuracy
name: Swiss German Test accuracy
value: 47.2
- type: accuracy
name: Assyrian Test accuracy
value: 29.1
- type: accuracy
name: North Sami Test accuracy
value: 37.4
- type: accuracy
name: Naija Test accuracy
value: 37.2
- type: accuracy
name: Latvian Test accuracy
value: 74.5
- type: accuracy
name: Chinese Test accuracy
value: 56.6
- type: accuracy
name: Tagalog Test accuracy
value: 57.6
- type: accuracy
name: Bambara Test accuracy
value: 28.6
- type: accuracy
name: Lithuanian Test accuracy
value: 77.4
- type: accuracy
name: Galician Test accuracy
value: 61.6
- type: accuracy
name: Vietnamese Test accuracy
value: 63.7
- type: accuracy
name: Greek Test accuracy
value: 63.3
- type: accuracy
name: Catalan Test accuracy
value: 54.2
- type: accuracy
name: Czech Test accuracy
value: 70.1
- type: accuracy
name: Erzya Test accuracy
value: 46.7
- type: accuracy
name: Bhojpuri Test accuracy
value: 43.7
- type: accuracy
name: Thai Test accuracy
value: 61.1
- type: accuracy
name: Marathi Test accuracy
value: 75.5
- type: accuracy
name: Basque Test accuracy
value: 63.3
- type: accuracy
name: Slovak Test accuracy
value: 67.3
- type: accuracy
name: Kiche Test accuracy
value: 29.7
- type: accuracy
name: Yoruba Test accuracy
value: 30.4
- type: accuracy
name: Warlpiri Test accuracy
value: 49.4
- type: accuracy
name: Tamil Test accuracy
value: 68.7
- type: accuracy
name: Maltese Test accuracy
value: 29.6
- type: accuracy
name: Ancient Greek Test accuracy
value: 89.6
- type: accuracy
name: Icelandic Test accuracy
value: 63.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 36.4
- type: accuracy
name: Urdu Test accuracy
value: 44.8
- type: accuracy
name: Romanian Test accuracy
value: 66.3
- type: accuracy
name: Persian Test accuracy
value: 64.4
- type: accuracy
name: Apurina Test accuracy
value: 41.7
- type: accuracy
name: Japanese Test accuracy
value: 44.3
- type: accuracy
name: Hungarian Test accuracy
value: 61.4
- type: accuracy
name: Hindi Test accuracy
value: 47.8
- type: accuracy
name: Classical Chinese Test accuracy
value: 48.0
- type: accuracy
name: Komi Permyak Test accuracy
value: 45.9
- type: accuracy
name: Faroese Test accuracy
value: 59.2
- type: accuracy
name: Sanskrit Test accuracy
value: 42.9
- type: accuracy
name: Livvi Test accuracy
value: 61.8
- type: accuracy
name: Arabic Test accuracy
value: 65.3
- type: accuracy
name: Wolof Test accuracy
value: 27.8
- type: accuracy
name: Bulgarian Test accuracy
value: 64.9
- type: accuracy
name: Akuntsu Test accuracy
value: 30.8
- type: accuracy
name: Makurap Test accuracy
value: 18.5
- type: accuracy
name: Kangri Test accuracy
value: 45.9
- type: accuracy
name: Breton Test accuracy
value: 47.1
- type: accuracy
name: Telugu Test accuracy
value: 75.3
- type: accuracy
name: Cantonese Test accuracy
value: 60.2
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 58.8
- type: accuracy
name: Karelian Test accuracy
value: 64.5
- type: accuracy
name: Upper Sorbian Test accuracy
value: 62.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 61.7
- type: accuracy
name: Komi Zyrian Test accuracy
value: 45.4
- type: accuracy
name: Irish Test accuracy
value: 52.4
- type: accuracy
name: Nayini Test accuracy
value: 51.3
- type: accuracy
name: Munduruku Test accuracy
value: 21.6
- type: accuracy
name: Manx Test accuracy
value: 27.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 44.7
- type: accuracy
name: Afrikaans Test accuracy
value: 58.4
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 44.4
- type: accuracy
name: Belarusian Test accuracy
value: 75.3
- type: accuracy
name: Serbian Test accuracy
value: 63.3
- type: accuracy
name: Moksha Test accuracy
value: 46.1
- type: accuracy
name: Western Armenian Test accuracy
value: 67.1
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 49.2
- type: accuracy
name: Khunsari Test accuracy
value: 45.9
- type: accuracy
name: Hebrew Test accuracy
value: 72.9
- type: accuracy
name: Uyghur Test accuracy
value: 72.7
- type: accuracy
name: Chukchi Test accuracy
value: 40.2
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Ancient Greek
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-grc")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-grc")
```
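For tagging text, the model can be driven through the `token-classification` pipeline and its predictions flattened into (word, tag) pairs. The helper and the Ancient Greek sample below are illustrative assumptions rather than part of the model card; the pipeline call is commented out as it downloads the full model weights.

```python
# Assumes the `transformers` token-classification pipeline output format:
# a list of dicts with "word" and "entity_group" keys.

def to_upos_pairs(predictions):
    """Reduce pipeline output to (word, UPOS tag) tuples."""
    return [(p["word"], p["entity_group"]) for p in predictions]

# Illustrative use with this card's model (downloads weights on first run):
# from transformers import pipeline
# tagger = pipeline("token-classification",
#                   model="wietsedv/xlm-roberta-base-ft-udpos28-grc",
#                   aggregation_strategy="simple")
# print(to_upos_pairs(tagger("ἐν ἀρχῇ ἦν ὁ λόγος")))

if __name__ == "__main__":
    sample = [{"word": "λόγος", "entity_group": "NOUN", "score": 0.99}]
    print(to_upos_pairs(sample))  # [('λόγος', 'NOUN')]
```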
|
wietsedv/xlm-roberta-base-ft-udpos28-hu | 77d78b6bda952043e844e8db14d2a5b1f491a21f | 2022-02-25T09:58:45.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"hu",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-hu | 1 | null | transformers | 30,580 |
---
language:
- hu
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-hu
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 77.0
- type: accuracy
name: Dutch Test accuracy
value: 77.0
- type: accuracy
name: German Test accuracy
value: 77.0
- type: accuracy
name: Italian Test accuracy
value: 77.6
- type: accuracy
name: French Test accuracy
value: 75.9
- type: accuracy
name: Spanish Test accuracy
value: 76.1
- type: accuracy
name: Russian Test accuracy
value: 78.7
- type: accuracy
name: Swedish Test accuracy
value: 78.9
- type: accuracy
name: Norwegian Test accuracy
value: 74.6
- type: accuracy
name: Danish Test accuracy
value: 77.7
- type: accuracy
name: Low Saxon Test accuracy
value: 55.5
- type: accuracy
name: Akkadian Test accuracy
value: 31.1
- type: accuracy
name: Armenian Test accuracy
value: 85.7
- type: accuracy
name: Welsh Test accuracy
value: 54.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 65.6
- type: accuracy
name: Albanian Test accuracy
value: 80.0
- type: accuracy
name: Slovenian Test accuracy
value: 71.9
- type: accuracy
name: Guajajara Test accuracy
value: 23.6
- type: accuracy
name: Kurmanji Test accuracy
value: 70.0
- type: accuracy
name: Turkish Test accuracy
value: 80.4
- type: accuracy
name: Finnish Test accuracy
value: 85.1
- type: accuracy
name: Indonesian Test accuracy
value: 76.6
- type: accuracy
name: Ukrainian Test accuracy
value: 78.5
- type: accuracy
name: Polish Test accuracy
value: 77.9
- type: accuracy
name: Portuguese Test accuracy
value: 79.1
- type: accuracy
name: Kazakh Test accuracy
value: 80.9
- type: accuracy
name: Latin Test accuracy
value: 71.3
- type: accuracy
name: Old French Test accuracy
value: 55.1
- type: accuracy
name: Buryat Test accuracy
value: 62.2
- type: accuracy
name: Kaapor Test accuracy
value: 22.1
- type: accuracy
name: Korean Test accuracy
value: 59.1
- type: accuracy
name: Estonian Test accuracy
value: 87.6
- type: accuracy
name: Croatian Test accuracy
value: 78.9
- type: accuracy
name: Gothic Test accuracy
value: 25.6
- type: accuracy
name: Swiss German Test accuracy
value: 45.7
- type: accuracy
name: Assyrian Test accuracy
value: 16.3
- type: accuracy
name: North Sami Test accuracy
value: 44.7
- type: accuracy
name: Naija Test accuracy
value: 39.3
- type: accuracy
name: Latvian Test accuracy
value: 81.8
- type: accuracy
name: Chinese Test accuracy
value: 40.9
- type: accuracy
name: Tagalog Test accuracy
value: 63.9
- type: accuracy
name: Bambara Test accuracy
value: 27.0
- type: accuracy
name: Lithuanian Test accuracy
value: 79.7
- type: accuracy
name: Galician Test accuracy
value: 77.4
- type: accuracy
name: Vietnamese Test accuracy
value: 59.9
- type: accuracy
name: Greek Test accuracy
value: 79.2
- type: accuracy
name: Catalan Test accuracy
value: 76.1
- type: accuracy
name: Czech Test accuracy
value: 79.0
- type: accuracy
name: Erzya Test accuracy
value: 50.9
- type: accuracy
name: Bhojpuri Test accuracy
value: 53.1
- type: accuracy
name: Thai Test accuracy
value: 45.2
- type: accuracy
name: Marathi Test accuracy
value: 87.1
- type: accuracy
name: Basque Test accuracy
value: 73.7
- type: accuracy
name: Slovak Test accuracy
value: 78.7
- type: accuracy
name: Kiche Test accuracy
value: 33.5
- type: accuracy
name: Yoruba Test accuracy
value: 28.0
- type: accuracy
name: Warlpiri Test accuracy
value: 33.2
- type: accuracy
name: Tamil Test accuracy
value: 82.7
- type: accuracy
name: Maltese Test accuracy
value: 29.6
- type: accuracy
name: Ancient Greek Test accuracy
value: 55.9
- type: accuracy
name: Icelandic Test accuracy
value: 73.5
- type: accuracy
name: Mbya Guarani Test accuracy
value: 33.3
- type: accuracy
name: Urdu Test accuracy
value: 69.4
- type: accuracy
name: Romanian Test accuracy
value: 72.4
- type: accuracy
name: Persian Test accuracy
value: 69.2
- type: accuracy
name: Apurina Test accuracy
value: 38.4
- type: accuracy
name: Japanese Test accuracy
value: 30.2
- type: accuracy
name: Hungarian Test accuracy
value: 97.3
- type: accuracy
name: Hindi Test accuracy
value: 73.9
- type: accuracy
name: Classical Chinese Test accuracy
value: 32.8
- type: accuracy
name: Komi Permyak Test accuracy
value: 53.6
- type: accuracy
name: Faroese Test accuracy
value: 67.4
- type: accuracy
name: Sanskrit Test accuracy
value: 40.9
- type: accuracy
name: Livvi Test accuracy
value: 69.7
- type: accuracy
name: Arabic Test accuracy
value: 69.2
- type: accuracy
name: Wolof Test accuracy
value: 34.7
- type: accuracy
name: Bulgarian Test accuracy
value: 74.3
- type: accuracy
name: Akuntsu Test accuracy
value: 29.6
- type: accuracy
name: Makurap Test accuracy
value: 18.5
- type: accuracy
name: Kangri Test accuracy
value: 51.8
- type: accuracy
name: Breton Test accuracy
value: 59.7
- type: accuracy
name: Telugu Test accuracy
value: 82.1
- type: accuracy
name: Cantonese Test accuracy
value: 48.3
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 48.9
- type: accuracy
name: Karelian Test accuracy
value: 74.4
- type: accuracy
name: Upper Sorbian Test accuracy
value: 69.7
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 61.7
- type: accuracy
name: Komi Zyrian Test accuracy
value: 44.1
- type: accuracy
name: Irish Test accuracy
value: 59.8
- type: accuracy
name: Nayini Test accuracy
value: 44.9
- type: accuracy
name: Munduruku Test accuracy
value: 23.0
- type: accuracy
name: Manx Test accuracy
value: 33.5
- type: accuracy
name: Skolt Sami Test accuracy
value: 50.0
- type: accuracy
name: Afrikaans Test accuracy
value: 73.4
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 36.6
- type: accuracy
name: Belarusian Test accuracy
value: 77.3
- type: accuracy
name: Serbian Test accuracy
value: 80.1
- type: accuracy
name: Moksha Test accuracy
value: 47.6
- type: accuracy
name: Western Armenian Test accuracy
value: 75.9
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 54.4
- type: accuracy
name: Khunsari Test accuracy
value: 37.8
- type: accuracy
name: Hebrew Test accuracy
value: 85.4
- type: accuracy
name: Uyghur Test accuracy
value: 71.3
- type: accuracy
name: Chukchi Test accuracy
value: 40.5
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Hungarian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hu")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hu")
```
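As with the other cards in this family, inference is simplest through the `token-classification` pipeline, reducing its output to (word, tag) pairs. The helper and the Hungarian sample below are an illustrative sketch, not from the model card; the pipeline call is commented out because it downloads the full model weights.

```python
# Assumes the `transformers` token-classification pipeline output format:
# a list of dicts with "word" and "entity_group" keys.

def to_upos_pairs(predictions):
    """Reduce pipeline output to (word, UPOS tag) tuples."""
    return [(p["word"], p["entity_group"]) for p in predictions]

# Illustrative use with this card's model (downloads weights on first run):
# from transformers import pipeline
# tagger = pipeline("token-classification",
#                   model="wietsedv/xlm-roberta-base-ft-udpos28-hu",
#                   aggregation_strategy="simple")
# print(to_upos_pairs(tagger("Ez egy mondat.")))

if __name__ == "__main__":
    sample = [{"word": "Ez", "entity_group": "PRON", "score": 0.99}]
    print(to_upos_pairs(sample))  # [('Ez', 'PRON')]
```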
|
wietsedv/xlm-roberta-base-ft-udpos28-is | 48dad98f1cbe6cadec41782455abd2b481d9e2f9 | 2022-02-25T09:58:51.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"is",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-is | 1 | null | transformers | 30,581 |
---
language:
- is
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-is
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 88.4
- type: accuracy
name: Dutch Test accuracy
value: 86.9
- type: accuracy
name: German Test accuracy
value: 82.7
- type: accuracy
name: Italian Test accuracy
value: 84.6
- type: accuracy
name: French Test accuracy
value: 83.6
- type: accuracy
name: Spanish Test accuracy
value: 83.6
- type: accuracy
name: Russian Test accuracy
value: 87.6
- type: accuracy
name: Swedish Test accuracy
value: 89.9
- type: accuracy
name: Norwegian Test accuracy
value: 86.4
- type: accuracy
name: Danish Test accuracy
value: 89.6
- type: accuracy
name: Low Saxon Test accuracy
value: 57.6
- type: accuracy
name: Akkadian Test accuracy
value: 30.5
- type: accuracy
name: Armenian Test accuracy
value: 86.6
- type: accuracy
name: Welsh Test accuracy
value: 66.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 76.3
- type: accuracy
name: Albanian Test accuracy
value: 80.8
- type: accuracy
name: Slovenian Test accuracy
value: 76.8
- type: accuracy
name: Guajajara Test accuracy
value: 31.8
- type: accuracy
name: Kurmanji Test accuracy
value: 78.6
- type: accuracy
name: Turkish Test accuracy
value: 77.3
- type: accuracy
name: Finnish Test accuracy
value: 84.8
- type: accuracy
name: Indonesian Test accuracy
value: 84.4
- type: accuracy
name: Ukrainian Test accuracy
value: 85.9
- type: accuracy
name: Polish Test accuracy
value: 84.2
- type: accuracy
name: Portuguese Test accuracy
value: 86.6
- type: accuracy
name: Kazakh Test accuracy
value: 81.8
- type: accuracy
name: Latin Test accuracy
value: 75.8
- type: accuracy
name: Old French Test accuracy
value: 58.6
- type: accuracy
name: Buryat Test accuracy
value: 63.1
- type: accuracy
name: Kaapor Test accuracy
value: 18.3
- type: accuracy
name: Korean Test accuracy
value: 64.3
- type: accuracy
name: Estonian Test accuracy
value: 86.7
- type: accuracy
name: Croatian Test accuracy
value: 86.0
- type: accuracy
name: Gothic Test accuracy
value: 26.6
- type: accuracy
name: Swiss German Test accuracy
value: 45.6
- type: accuracy
name: Assyrian Test accuracy
value: 15.5
- type: accuracy
name: North Sami Test accuracy
value: 43.9
- type: accuracy
name: Naija Test accuracy
value: 46.6
- type: accuracy
name: Latvian Test accuracy
value: 85.3
- type: accuracy
name: Chinese Test accuracy
value: 60.4
- type: accuracy
name: Tagalog Test accuracy
value: 80.0
- type: accuracy
name: Bambara Test accuracy
value: 32.5
- type: accuracy
name: Lithuanian Test accuracy
value: 85.9
- type: accuracy
name: Galician Test accuracy
value: 80.7
- type: accuracy
name: Vietnamese Test accuracy
value: 64.1
- type: accuracy
name: Greek Test accuracy
value: 80.5
- type: accuracy
name: Catalan Test accuracy
value: 82.7
- type: accuracy
name: Czech Test accuracy
value: 84.6
- type: accuracy
name: Erzya Test accuracy
value: 52.8
- type: accuracy
name: Bhojpuri Test accuracy
value: 59.0
- type: accuracy
name: Thai Test accuracy
value: 68.2
- type: accuracy
name: Marathi Test accuracy
value: 87.1
- type: accuracy
name: Basque Test accuracy
value: 79.5
- type: accuracy
name: Slovak Test accuracy
value: 86.0
- type: accuracy
name: Kiche Test accuracy
value: 42.2
- type: accuracy
name: Yoruba Test accuracy
value: 34.3
- type: accuracy
name: Warlpiri Test accuracy
value: 43.7
- type: accuracy
name: Tamil Test accuracy
value: 83.9
- type: accuracy
name: Maltese Test accuracy
value: 27.5
- type: accuracy
name: Ancient Greek Test accuracy
value: 64.0
- type: accuracy
name: Icelandic Test accuracy
value: 95.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 31.9
- type: accuracy
name: Urdu Test accuracy
value: 72.7
- type: accuracy
name: Romanian Test accuracy
value: 82.0
- type: accuracy
name: Persian Test accuracy
value: 78.3
- type: accuracy
name: Apurina Test accuracy
value: 47.9
- type: accuracy
name: Japanese Test accuracy
value: 44.0
- type: accuracy
name: Hungarian Test accuracy
value: 77.2
- type: accuracy
name: Hindi Test accuracy
value: 77.4
- type: accuracy
name: Classical Chinese Test accuracy
value: 46.0
- type: accuracy
name: Komi Permyak Test accuracy
value: 52.7
- type: accuracy
name: Faroese Test accuracy
value: 83.9
- type: accuracy
name: Sanskrit Test accuracy
value: 37.4
- type: accuracy
name: Livvi Test accuracy
value: 66.8
- type: accuracy
name: Arabic Test accuracy
value: 79.2
- type: accuracy
name: Wolof Test accuracy
value: 39.9
- type: accuracy
name: Bulgarian Test accuracy
value: 87.7
- type: accuracy
name: Akuntsu Test accuracy
value: 37.0
- type: accuracy
name: Makurap Test accuracy
value: 24.7
- type: accuracy
name: Kangri Test accuracy
value: 50.2
- type: accuracy
name: Breton Test accuracy
value: 61.8
- type: accuracy
name: Telugu Test accuracy
value: 84.5
- type: accuracy
name: Cantonese Test accuracy
value: 60.6
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 53.9
- type: accuracy
name: Karelian Test accuracy
value: 74.0
- type: accuracy
name: Upper Sorbian Test accuracy
value: 75.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 70.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 47.1
- type: accuracy
name: Irish Test accuracy
value: 66.8
- type: accuracy
name: Nayini Test accuracy
value: 43.6
- type: accuracy
name: Munduruku Test accuracy
value: 28.3
- type: accuracy
name: Manx Test accuracy
value: 48.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 39.6
- type: accuracy
name: Afrikaans Test accuracy
value: 87.4
- type: accuracy
name: Old Turkish Test accuracy
value: 38.9
- type: accuracy
name: Tupinamba Test accuracy
value: 37.6
- type: accuracy
name: Belarusian Test accuracy
value: 86.8
- type: accuracy
name: Serbian Test accuracy
value: 87.2
- type: accuracy
name: Moksha Test accuracy
value: 49.8
- type: accuracy
name: Western Armenian Test accuracy
value: 79.9
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 56.8
- type: accuracy
name: Khunsari Test accuracy
value: 52.7
- type: accuracy
name: Hebrew Test accuracy
value: 85.4
- type: accuracy
name: Uyghur Test accuracy
value: 76.9
- type: accuracy
name: Chukchi Test accuracy
value: 37.7
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Icelandic
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-is")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-is")
```
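The model-index above lists zero-shot accuracies for roughly a hundred test languages. One quick way to summarize such a table is a macro-average; the sketch below does this for a small hand-copied sample of the values reported above (the sample choice is ours, not part of the model card).

```python
# Macro-average over a sample of the per-language accuracies listed in the
# model-index above (Icelandic is the in-language score for this -is model).
sample = {
    "English": 88.4,
    "Dutch": 86.9,
    "German": 82.7,
    "Icelandic": 95.6,
}
macro_avg = sum(sample.values()) / len(sample)
print(round(macro_avg, 1))  # 88.4
```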
|
wietsedv/xlm-roberta-base-ft-udpos28-lzh | 3747fe9c0019f2104de495030664f2f59debd43b | 2022-02-25T09:59:02.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"lzh",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-lzh | 1 | null | transformers | 30,582 |
---
language:
- lzh
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-lzh
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 33.6
- type: accuracy
name: Dutch Test accuracy
value: 30.9
- type: accuracy
name: German Test accuracy
value: 31.1
- type: accuracy
name: Italian Test accuracy
value: 31.1
- type: accuracy
name: French Test accuracy
value: 30.3
- type: accuracy
name: Spanish Test accuracy
value: 30.6
- type: accuracy
name: Russian Test accuracy
value: 37.1
- type: accuracy
name: Swedish Test accuracy
value: 35.6
- type: accuracy
name: Norwegian Test accuracy
value: 32.7
- type: accuracy
name: Danish Test accuracy
value: 35.0
- type: accuracy
name: Low Saxon Test accuracy
value: 19.0
- type: accuracy
name: Akkadian Test accuracy
value: 25.9
- type: accuracy
name: Armenian Test accuracy
value: 40.9
- type: accuracy
name: Welsh Test accuracy
value: 27.3
- type: accuracy
name: Old East Slavic Test accuracy
value: 36.4
- type: accuracy
name: Albanian Test accuracy
value: 31.6
- type: accuracy
name: Slovenian Test accuracy
value: 31.1
- type: accuracy
name: Guajajara Test accuracy
value: 13.9
- type: accuracy
name: Kurmanji Test accuracy
value: 36.5
- type: accuracy
name: Turkish Test accuracy
value: 42.7
- type: accuracy
name: Finnish Test accuracy
value: 45.0
- type: accuracy
name: Indonesian Test accuracy
value: 40.6
- type: accuracy
name: Ukrainian Test accuracy
value: 36.0
- type: accuracy
name: Polish Test accuracy
value: 35.3
- type: accuracy
name: Portuguese Test accuracy
value: 34.8
- type: accuracy
name: Kazakh Test accuracy
value: 45.4
- type: accuracy
name: Latin Test accuracy
value: 37.9
- type: accuracy
name: Old French Test accuracy
value: 33.4
- type: accuracy
name: Buryat Test accuracy
value: 27.2
- type: accuracy
name: Kaapor Test accuracy
value: 19.6
- type: accuracy
name: Korean Test accuracy
value: 44.8
- type: accuracy
name: Estonian Test accuracy
value: 41.4
- type: accuracy
name: Croatian Test accuracy
value: 34.2
- type: accuracy
name: Gothic Test accuracy
value: 12.3
- type: accuracy
name: Swiss German Test accuracy
value: 18.1
- type: accuracy
name: Assyrian Test accuracy
value: 3.5
- type: accuracy
name: North Sami Test accuracy
value: 8.9
- type: accuracy
name: Naija Test accuracy
value: 25.4
- type: accuracy
name: Latvian Test accuracy
value: 45.0
- type: accuracy
name: Chinese Test accuracy
value: 53.2
- type: accuracy
name: Tagalog Test accuracy
value: 34.0
- type: accuracy
name: Bambara Test accuracy
value: 13.9
- type: accuracy
name: Lithuanian Test accuracy
value: 44.0
- type: accuracy
name: Galician Test accuracy
value: 29.0
- type: accuracy
name: Vietnamese Test accuracy
value: 40.9
- type: accuracy
name: Greek Test accuracy
value: 31.3
- type: accuracy
name: Catalan Test accuracy
value: 29.6
- type: accuracy
name: Czech Test accuracy
value: 35.4
- type: accuracy
name: Erzya Test accuracy
value: 9.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 22.9
- type: accuracy
name: Thai Test accuracy
value: 51.6
- type: accuracy
name: Marathi Test accuracy
value: 36.8
- type: accuracy
name: Basque Test accuracy
value: 42.1
- type: accuracy
name: Slovak Test accuracy
value: 36.3
- type: accuracy
name: Kiche Test accuracy
value: 11.9
- type: accuracy
name: Yoruba Test accuracy
value: 10.9
- type: accuracy
name: Warlpiri Test accuracy
value: 15.0
- type: accuracy
name: Tamil Test accuracy
value: 53.4
- type: accuracy
name: Maltese Test accuracy
value: 9.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 31.9
- type: accuracy
name: Icelandic Test accuracy
value: 38.4
- type: accuracy
name: Mbya Guarani Test accuracy
value: 7.1
- type: accuracy
name: Urdu Test accuracy
value: 33.4
- type: accuracy
name: Romanian Test accuracy
value: 33.5
- type: accuracy
name: Persian Test accuracy
value: 35.2
- type: accuracy
name: Apurina Test accuracy
value: 11.9
- type: accuracy
name: Japanese Test accuracy
value: 39.6
- type: accuracy
name: Hungarian Test accuracy
value: 37.2
- type: accuracy
name: Hindi Test accuracy
value: 33.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 88.0
- type: accuracy
name: Komi Permyak Test accuracy
value: 11.3
- type: accuracy
name: Faroese Test accuracy
value: 30.3
- type: accuracy
name: Sanskrit Test accuracy
value: 20.6
- type: accuracy
name: Livvi Test accuracy
value: 29.1
- type: accuracy
name: Arabic Test accuracy
value: 34.9
- type: accuracy
name: Wolof Test accuracy
value: 17.0
- type: accuracy
name: Bulgarian Test accuracy
value: 34.3
- type: accuracy
name: Akuntsu Test accuracy
value: 19.3
- type: accuracy
name: Makurap Test accuracy
value: 21.2
- type: accuracy
name: Kangri Test accuracy
value: 19.8
- type: accuracy
name: Breton Test accuracy
value: 27.4
- type: accuracy
name: Telugu Test accuracy
value: 49.4
- type: accuracy
name: Cantonese Test accuracy
value: 53.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 27.9
- type: accuracy
name: Karelian Test accuracy
value: 32.8
- type: accuracy
name: Upper Sorbian Test accuracy
value: 22.1
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 29.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 9.7
- type: accuracy
name: Irish Test accuracy
value: 29.5
- type: accuracy
name: Nayini Test accuracy
value: 32.1
- type: accuracy
name: Munduruku Test accuracy
value: 14.4
- type: accuracy
name: Manx Test accuracy
value: 16.8
- type: accuracy
name: Skolt Sami Test accuracy
value: 5.3
- type: accuracy
name: Afrikaans Test accuracy
value: 31.8
- type: accuracy
name: Old Turkish Test accuracy
value: 13.6
- type: accuracy
name: Tupinamba Test accuracy
value: 9.4
- type: accuracy
name: Belarusian Test accuracy
value: 36.7
- type: accuracy
name: Serbian Test accuracy
value: 33.9
- type: accuracy
name: Moksha Test accuracy
value: 10.4
- type: accuracy
name: Western Armenian Test accuracy
value: 34.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 29.2
- type: accuracy
name: Khunsari Test accuracy
value: 23.0
- type: accuracy
name: Hebrew Test accuracy
value: 44.8
- type: accuracy
name: Uyghur Test accuracy
value: 44.6
- type: accuracy
name: Chukchi Test accuracy
value: 7.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Classical Chinese
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lzh")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lzh")
```
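The table above illustrates how language-specific this fine-tuned model is: it scores highly on Classical Chinese, the fine-tuning language, but transfers poorly to most others. A small sketch quantifying that gap, using two values copied from the model-index above:

```python
# In-language vs. zero-shot gap for the Classical Chinese (lzh) model,
# using accuracies reported in the model-index above.
in_language = 88.0   # Classical Chinese test accuracy
zero_shot_en = 33.6  # English test accuracy
gap = in_language - zero_shot_en
print(round(gap, 1))  # 54.4
```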
|
wietsedv/xlm-roberta-base-ft-udpos28-mr | 7c1da08f23db1e666ede432aea8bae7befc7bb06 | 2022-02-25T09:59:04.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"mr",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-mr | 1 | null | transformers | 30,583 |
---
language:
- mr
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-mr
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 67.4
- type: accuracy
name: Dutch Test accuracy
value: 61.5
- type: accuracy
name: German Test accuracy
value: 66.9
- type: accuracy
name: Italian Test accuracy
value: 64.8
- type: accuracy
name: French Test accuracy
value: 61.7
- type: accuracy
name: Spanish Test accuracy
value: 60.1
- type: accuracy
name: Russian Test accuracy
value: 68.1
- type: accuracy
name: Swedish Test accuracy
value: 68.4
- type: accuracy
name: Norwegian Test accuracy
value: 64.1
- type: accuracy
name: Danish Test accuracy
value: 66.4
- type: accuracy
name: Low Saxon Test accuracy
value: 51.7
- type: accuracy
name: Akkadian Test accuracy
value: 23.7
- type: accuracy
name: Armenian Test accuracy
value: 74.4
- type: accuracy
name: Welsh Test accuracy
value: 50.1
- type: accuracy
name: Old East Slavic Test accuracy
value: 57.8
- type: accuracy
name: Albanian Test accuracy
value: 61.9
- type: accuracy
name: Slovenian Test accuracy
value: 60.1
- type: accuracy
name: Guajajara Test accuracy
value: 20.5
- type: accuracy
name: Kurmanji Test accuracy
value: 60.0
- type: accuracy
name: Turkish Test accuracy
value: 71.8
- type: accuracy
name: Finnish Test accuracy
value: 74.5
- type: accuracy
name: Indonesian Test accuracy
value: 59.0
- type: accuracy
name: Ukrainian Test accuracy
value: 67.1
- type: accuracy
name: Polish Test accuracy
value: 65.0
- type: accuracy
name: Portuguese Test accuracy
value: 66.7
- type: accuracy
name: Kazakh Test accuracy
value: 73.8
- type: accuracy
name: Latin Test accuracy
value: 66.2
- type: accuracy
name: Old French Test accuracy
value: 48.6
- type: accuracy
name: Buryat Test accuracy
value: 57.0
- type: accuracy
name: Kaapor Test accuracy
value: 19.2
- type: accuracy
name: Korean Test accuracy
value: 59.7
- type: accuracy
name: Estonian Test accuracy
value: 75.4
- type: accuracy
name: Croatian Test accuracy
value: 63.8
- type: accuracy
name: Gothic Test accuracy
value: 20.0
- type: accuracy
name: Swiss German Test accuracy
value: 46.8
- type: accuracy
name: Assyrian Test accuracy
value: 16.1
- type: accuracy
name: North Sami Test accuracy
value: 37.1
- type: accuracy
name: Naija Test accuracy
value: 37.9
- type: accuracy
name: Latvian Test accuracy
value: 75.6
- type: accuracy
name: Chinese Test accuracy
value: 49.7
- type: accuracy
name: Tagalog Test accuracy
value: 55.1
- type: accuracy
name: Bambara Test accuracy
value: 28.9
- type: accuracy
name: Lithuanian Test accuracy
value: 75.9
- type: accuracy
name: Galician Test accuracy
value: 65.5
- type: accuracy
name: Vietnamese Test accuracy
value: 61.0
- type: accuracy
name: Greek Test accuracy
value: 70.4
- type: accuracy
name: Catalan Test accuracy
value: 57.9
- type: accuracy
name: Czech Test accuracy
value: 64.9
- type: accuracy
name: Erzya Test accuracy
value: 47.7
- type: accuracy
name: Bhojpuri Test accuracy
value: 41.9
- type: accuracy
name: Thai Test accuracy
value: 44.1
- type: accuracy
name: Marathi Test accuracy
value: 89.0
- type: accuracy
name: Basque Test accuracy
value: 71.8
- type: accuracy
name: Slovak Test accuracy
value: 61.3
- type: accuracy
name: Kiche Test accuracy
value: 25.7
- type: accuracy
name: Yoruba Test accuracy
value: 22.8
- type: accuracy
name: Warlpiri Test accuracy
value: 42.9
- type: accuracy
name: Tamil Test accuracy
value: 73.5
- type: accuracy
name: Maltese Test accuracy
value: 26.7
- type: accuracy
name: Ancient Greek Test accuracy
value: 63.5
- type: accuracy
name: Icelandic Test accuracy
value: 64.0
- type: accuracy
name: Mbya Guarani Test accuracy
value: 29.7
- type: accuracy
name: Urdu Test accuracy
value: 50.3
- type: accuracy
name: Romanian Test accuracy
value: 63.3
- type: accuracy
name: Persian Test accuracy
value: 61.0
- type: accuracy
name: Apurina Test accuracy
value: 38.4
- type: accuracy
name: Japanese Test accuracy
value: 40.5
- type: accuracy
name: Hungarian Test accuracy
value: 69.4
- type: accuracy
name: Hindi Test accuracy
value: 52.7
- type: accuracy
name: Classical Chinese Test accuracy
value: 32.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 50.1
- type: accuracy
name: Faroese Test accuracy
value: 58.0
- type: accuracy
name: Sanskrit Test accuracy
value: 34.1
- type: accuracy
name: Livvi Test accuracy
value: 65.3
- type: accuracy
name: Arabic Test accuracy
value: 55.9
- type: accuracy
name: Wolof Test accuracy
value: 27.8
- type: accuracy
name: Bulgarian Test accuracy
value: 63.2
- type: accuracy
name: Akuntsu Test accuracy
value: 23.1
- type: accuracy
name: Makurap Test accuracy
value: 17.1
- type: accuracy
name: Kangri Test accuracy
value: 48.8
- type: accuracy
name: Breton Test accuracy
value: 50.8
- type: accuracy
name: Telugu Test accuracy
value: 82.0
- type: accuracy
name: Cantonese Test accuracy
value: 52.5
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 42.8
- type: accuracy
name: Karelian Test accuracy
value: 61.8
- type: accuracy
name: Upper Sorbian Test accuracy
value: 54.1
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 55.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 47.0
- type: accuracy
name: Irish Test accuracy
value: 50.1
- type: accuracy
name: Nayini Test accuracy
value: 48.7
- type: accuracy
name: Munduruku Test accuracy
value: 18.6
- type: accuracy
name: Manx Test accuracy
value: 31.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 40.8
- type: accuracy
name: Afrikaans Test accuracy
value: 66.4
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 29.9
- type: accuracy
name: Belarusian Test accuracy
value: 65.4
- type: accuracy
name: Serbian Test accuracy
value: 62.6
- type: accuracy
name: Moksha Test accuracy
value: 46.8
- type: accuracy
name: Western Armenian Test accuracy
value: 70.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 47.4
- type: accuracy
name: Khunsari Test accuracy
value: 45.9
- type: accuracy
name: Hebrew Test accuracy
value: 77.1
- type: accuracy
name: Uyghur Test accuracy
value: 73.2
- type: accuracy
name: Chukchi Test accuracy
value: 33.5
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Marathi
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mr")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mr")
```
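As the scores above suggest, transfer from the Marathi model tends to be stronger to related or typologically similar languages. The sketch below ranks a hand-picked sample of the reported accuracies (the sample is our illustration, not part of the card):

```python
# Rank a sample of the per-language accuracies from the model-index above.
sample = {
    "Marathi": 89.0,   # in-language score
    "Telugu": 82.0,
    "Tamil": 73.5,
    "English": 67.4,
    "Hindi": 52.7,
}
ranked = sorted(sample.items(), key=lambda kv: kv[1], reverse=True)
best_lang, best_acc = ranked[0]
worst_lang, worst_acc = ranked[-1]
print(best_lang, worst_lang)  # Marathi Hindi
```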
|
wietsedv/xlm-roberta-base-ft-udpos28-mt | 5f992379118e3aa5a7077081a4782c5e03481366 | 2022-02-25T09:59:05.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"mt",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-mt | 1 | null | transformers | 30,584 |
---
language:
- mt
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-mt
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 37.5
- type: accuracy
name: Dutch Test accuracy
value: 52.3
- type: accuracy
name: German Test accuracy
value: 51.7
- type: accuracy
name: Italian Test accuracy
value: 54.7
- type: accuracy
name: French Test accuracy
value: 49.1
- type: accuracy
name: Spanish Test accuracy
value: 49.5
- type: accuracy
name: Russian Test accuracy
value: 64.7
- type: accuracy
name: Swedish Test accuracy
value: 52.0
- type: accuracy
name: Norwegian Test accuracy
value: 48.7
- type: accuracy
name: Danish Test accuracy
value: 52.3
- type: accuracy
name: Low Saxon Test accuracy
value: 41.7
- type: accuracy
name: Akkadian Test accuracy
value: 27.7
- type: accuracy
name: Armenian Test accuracy
value: 65.4
- type: accuracy
name: Welsh Test accuracy
value: 50.5
- type: accuracy
name: Old East Slavic Test accuracy
value: 58.1
- type: accuracy
name: Albanian Test accuracy
value: 55.2
- type: accuracy
name: Slovenian Test accuracy
value: 52.3
- type: accuracy
name: Guajajara Test accuracy
value: 30.7
- type: accuracy
name: Kurmanji Test accuracy
value: 53.3
- type: accuracy
name: Turkish Test accuracy
value: 61.0
- type: accuracy
name: Finnish Test accuracy
value: 62.4
- type: accuracy
name: Indonesian Test accuracy
value: 59.4
- type: accuracy
name: Ukrainian Test accuracy
value: 66.6
- type: accuracy
name: Polish Test accuracy
value: 62.6
- type: accuracy
name: Portuguese Test accuracy
value: 54.2
- type: accuracy
name: Kazakh Test accuracy
value: 68.7
- type: accuracy
name: Latin Test accuracy
value: 54.5
- type: accuracy
name: Old French Test accuracy
value: 33.8
- type: accuracy
name: Buryat Test accuracy
value: 51.2
- type: accuracy
name: Kaapor Test accuracy
value: 22.9
- type: accuracy
name: Korean Test accuracy
value: 51.7
- type: accuracy
name: Estonian Test accuracy
value: 62.3
- type: accuracy
name: Croatian Test accuracy
value: 61.4
- type: accuracy
name: Gothic Test accuracy
value: 26.8
- type: accuracy
name: Swiss German Test accuracy
value: 43.6
- type: accuracy
name: Assyrian Test accuracy
value: 26.0
- type: accuracy
name: North Sami Test accuracy
value: 40.4
- type: accuracy
name: Naija Test accuracy
value: 10.9
- type: accuracy
name: Latvian Test accuracy
value: 65.5
- type: accuracy
name: Chinese Test accuracy
value: 47.3
- type: accuracy
name: Tagalog Test accuracy
value: 56.3
- type: accuracy
name: Bambara Test accuracy
value: 28.1
- type: accuracy
name: Lithuanian Test accuracy
value: 67.2
- type: accuracy
name: Galician Test accuracy
value: 54.3
- type: accuracy
name: Vietnamese Test accuracy
value: 55.0
- type: accuracy
name: Greek Test accuracy
value: 52.4
- type: accuracy
name: Catalan Test accuracy
value: 51.2
- type: accuracy
name: Czech Test accuracy
value: 64.6
- type: accuracy
name: Erzya Test accuracy
value: 46.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 39.6
- type: accuracy
name: Thai Test accuracy
value: 44.9
- type: accuracy
name: Marathi Test accuracy
value: 70.6
- type: accuracy
name: Basque Test accuracy
value: 63.4
- type: accuracy
name: Slovak Test accuracy
value: 68.4
- type: accuracy
name: Kiche Test accuracy
value: 33.0
- type: accuracy
name: Yoruba Test accuracy
value: 31.1
- type: accuracy
name: Warlpiri Test accuracy
value: 32.0
- type: accuracy
name: Tamil Test accuracy
value: 73.8
- type: accuracy
name: Maltese Test accuracy
value: 94.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 47.8
- type: accuracy
name: Icelandic Test accuracy
value: 51.3
- type: accuracy
name: Mbya Guarani Test accuracy
value: 34.7
- type: accuracy
name: Urdu Test accuracy
value: 45.9
- type: accuracy
name: Romanian Test accuracy
value: 57.9
- type: accuracy
name: Persian Test accuracy
value: 52.9
- type: accuracy
name: Apurina Test accuracy
value: 38.2
- type: accuracy
name: Japanese Test accuracy
value: 37.8
- type: accuracy
name: Hungarian Test accuracy
value: 61.1
- type: accuracy
name: Hindi Test accuracy
value: 45.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 34.5
- type: accuracy
name: Komi Permyak Test accuracy
value: 48.7
- type: accuracy
name: Faroese Test accuracy
value: 55.1
- type: accuracy
name: Sanskrit Test accuracy
value: 28.3
- type: accuracy
name: Livvi Test accuracy
value: 52.1
- type: accuracy
name: Arabic Test accuracy
value: 63.9
- type: accuracy
name: Wolof Test accuracy
value: 36.6
- type: accuracy
name: Bulgarian Test accuracy
value: 59.0
- type: accuracy
name: Akuntsu Test accuracy
value: 29.6
- type: accuracy
name: Makurap Test accuracy
value: 29.5
- type: accuracy
name: Kangri Test accuracy
value: 39.2
- type: accuracy
name: Breton Test accuracy
value: 49.8
- type: accuracy
name: Telugu Test accuracy
value: 64.6
- type: accuracy
name: Cantonese Test accuracy
value: 46.0
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 38.1
- type: accuracy
name: Karelian Test accuracy
value: 57.4
- type: accuracy
name: Upper Sorbian Test accuracy
value: 62.4
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 61.1
- type: accuracy
name: Komi Zyrian Test accuracy
value: 43.0
- type: accuracy
name: Irish Test accuracy
value: 46.8
- type: accuracy
name: Nayini Test accuracy
value: 48.7
- type: accuracy
name: Munduruku Test accuracy
value: 21.6
- type: accuracy
name: Manx Test accuracy
value: 42.0
- type: accuracy
name: Skolt Sami Test accuracy
value: 41.4
- type: accuracy
name: Afrikaans Test accuracy
value: 49.8
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 32.9
- type: accuracy
name: Belarusian Test accuracy
value: 68.2
- type: accuracy
name: Serbian Test accuracy
value: 60.7
- type: accuracy
name: Moksha Test accuracy
value: 43.5
- type: accuracy
name: Western Armenian Test accuracy
value: 60.2
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 41.5
- type: accuracy
name: Khunsari Test accuracy
value: 43.2
- type: accuracy
name: Hebrew Test accuracy
value: 74.0
- type: accuracy
name: Uyghur Test accuracy
value: 61.9
- type: accuracy
name: Chukchi Test accuracy
value: 48.1
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Maltese
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mt")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mt")
```
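The gap between the Maltese in-language score and even the best zero-shot transfer above is large, consistent with Maltese being underrepresented in XLM-R's pretraining data. A sketch computing that gap from a sample of the reported values (sample selection is ours):

```python
# In-language score vs. best zero-shot transfer for the Maltese (mt) model,
# using a sample of accuracies from the model-index above.
in_language = 94.4  # Maltese test accuracy
transfer_sample = {"Hebrew": 74.0, "Tamil": 73.8, "Russian": 64.7}
best_transfer = max(transfer_sample.values())
gap = in_language - best_transfer
print(round(gap, 1))  # 20.4
```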
|
wietsedv/xlm-roberta-base-ft-udpos28-orv | 0d528e05ed934343892ac7101775c463e4794d33 | 2022-02-25T09:59:10.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"orv",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-orv | 1 | null | transformers | 30,585 |
---
language:
- orv
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-orv
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 79.4
- type: accuracy
name: Dutch Test accuracy
value: 77.8
- type: accuracy
name: German Test accuracy
value: 79.3
- type: accuracy
name: Italian Test accuracy
value: 77.5
- type: accuracy
name: French Test accuracy
value: 75.2
- type: accuracy
name: Spanish Test accuracy
value: 77.2
- type: accuracy
name: Russian Test accuracy
value: 87.9
- type: accuracy
name: Swedish Test accuracy
value: 83.0
- type: accuracy
name: Norwegian Test accuracy
value: 78.6
- type: accuracy
name: Danish Test accuracy
value: 82.9
- type: accuracy
name: Low Saxon Test accuracy
value: 58.9
- type: accuracy
name: Akkadian Test accuracy
value: 41.8
- type: accuracy
name: Armenian Test accuracy
value: 82.7
- type: accuracy
name: Welsh Test accuracy
value: 64.3
- type: accuracy
name: Old East Slavic Test accuracy
value: 91.0
- type: accuracy
name: Albanian Test accuracy
value: 73.4
- type: accuracy
name: Slovenian Test accuracy
value: 73.8
- type: accuracy
name: Guajajara Test accuracy
value: 41.7
- type: accuracy
name: Kurmanji Test accuracy
value: 76.7
- type: accuracy
name: Turkish Test accuracy
value: 73.5
- type: accuracy
name: Finnish Test accuracy
value: 83.0
- type: accuracy
name: Indonesian Test accuracy
value: 78.9
- type: accuracy
name: Ukrainian Test accuracy
value: 86.7
- type: accuracy
name: Polish Test accuracy
value: 85.5
- type: accuracy
name: Portuguese Test accuracy
value: 79.5
- type: accuracy
name: Kazakh Test accuracy
value: 79.7
- type: accuracy
name: Latin Test accuracy
value: 80.9
- type: accuracy
name: Old French Test accuracy
value: 60.5
- type: accuracy
name: Buryat Test accuracy
value: 59.8
- type: accuracy
name: Kaapor Test accuracy
value: 27.1
- type: accuracy
name: Korean Test accuracy
value: 61.0
- type: accuracy
name: Estonian Test accuracy
value: 83.9
- type: accuracy
name: Croatian Test accuracy
value: 84.7
- type: accuracy
name: Gothic Test accuracy
value: 33.1
- type: accuracy
name: Swiss German Test accuracy
value: 53.5
- type: accuracy
name: Assyrian Test accuracy
value: 15.7
- type: accuracy
name: North Sami Test accuracy
value: 39.9
- type: accuracy
name: Naija Test accuracy
value: 41.9
- type: accuracy
name: Latvian Test accuracy
value: 85.7
- type: accuracy
name: Chinese Test accuracy
value: 42.7
- type: accuracy
name: Tagalog Test accuracy
value: 73.5
- type: accuracy
name: Bambara Test accuracy
value: 29.5
- type: accuracy
name: Lithuanian Test accuracy
value: 86.1
- type: accuracy
name: Galician Test accuracy
value: 77.7
- type: accuracy
name: Vietnamese Test accuracy
value: 64.8
- type: accuracy
name: Greek Test accuracy
value: 73.8
- type: accuracy
name: Catalan Test accuracy
value: 74.2
- type: accuracy
name: Czech Test accuracy
value: 85.0
- type: accuracy
name: Erzya Test accuracy
value: 46.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 56.8
- type: accuracy
name: Thai Test accuracy
value: 60.6
- type: accuracy
name: Marathi Test accuracy
value: 84.0
- type: accuracy
name: Basque Test accuracy
value: 77.2
- type: accuracy
name: Slovak Test accuracy
value: 84.3
- type: accuracy
name: Kiche Test accuracy
value: 35.3
- type: accuracy
name: Yoruba Test accuracy
value: 29.9
- type: accuracy
name: Warlpiri Test accuracy
value: 33.6
- type: accuracy
name: Tamil Test accuracy
value: 84.3
- type: accuracy
name: Maltese Test accuracy
value: 32.0
- type: accuracy
name: Ancient Greek Test accuracy
value: 65.7
- type: accuracy
name: Icelandic Test accuracy
value: 81.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 33.2
- type: accuracy
name: Urdu Test accuracy
value: 66.2
- type: accuracy
name: Romanian Test accuracy
value: 80.9
- type: accuracy
name: Persian Test accuracy
value: 74.6
- type: accuracy
name: Apurina Test accuracy
value: 44.6
- type: accuracy
name: Japanese Test accuracy
value: 35.7
- type: accuracy
name: Hungarian Test accuracy
value: 73.3
- type: accuracy
name: Hindi Test accuracy
value: 75.3
- type: accuracy
name: Classical Chinese Test accuracy
value: 41.5
- type: accuracy
name: Komi Permyak Test accuracy
value: 49.0
- type: accuracy
name: Faroese Test accuracy
value: 78.3
- type: accuracy
name: Sanskrit Test accuracy
value: 43.3
- type: accuracy
name: Livvi Test accuracy
value: 70.2
- type: accuracy
name: Arabic Test accuracy
value: 79.8
- type: accuracy
name: Wolof Test accuracy
value: 39.8
- type: accuracy
name: Bulgarian Test accuracy
value: 85.8
- type: accuracy
name: Akuntsu Test accuracy
value: 36.5
- type: accuracy
name: Makurap Test accuracy
value: 14.4
- type: accuracy
name: Kangri Test accuracy
value: 52.0
- type: accuracy
name: Breton Test accuracy
value: 58.1
- type: accuracy
name: Telugu Test accuracy
value: 79.9
- type: accuracy
name: Cantonese Test accuracy
value: 50.8
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 78.2
- type: accuracy
name: Karelian Test accuracy
value: 73.5
- type: accuracy
name: Upper Sorbian Test accuracy
value: 76.0
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 70.0
- type: accuracy
name: Komi Zyrian Test accuracy
value: 43.1
- type: accuracy
name: Irish Test accuracy
value: 61.1
- type: accuracy
name: Nayini Test accuracy
value: 53.8
- type: accuracy
name: Munduruku Test accuracy
value: 26.4
- type: accuracy
name: Manx Test accuracy
value: 44.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 45.2
- type: accuracy
name: Afrikaans Test accuracy
value: 76.9
- type: accuracy
name: Old Turkish Test accuracy
value: 2.7
- type: accuracy
name: Tupinamba Test accuracy
value: 39.0
- type: accuracy
name: Belarusian Test accuracy
value: 89.5
- type: accuracy
name: Serbian Test accuracy
value: 85.1
- type: accuracy
name: Moksha Test accuracy
value: 42.8
- type: accuracy
name: Western Armenian Test accuracy
value: 77.0
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 51.6
- type: accuracy
name: Khunsari Test accuracy
value: 54.1
- type: accuracy
name: Hebrew Test accuracy
value: 85.4
- type: accuracy
name: Uyghur Test accuracy
value: 74.4
- type: accuracy
name: Chukchi Test accuracy
value: 34.5
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Old East Slavic
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-orv")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-orv")
```
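XLM-RoBERTa tokenizes words into subwords, so the model emits one prediction per subword and these need to be collapsed back to one tag per word before evaluation. The sketch below illustrates one common strategy (keep the first subword's tag) using dummy data; in real use, `word_ids` would come from the fast tokenizer's `word_ids()` output and `subword_tags` from the model's predictions:

```python
# Hypothetical sketch: collapsing subword predictions to one tag per word.
# Dummy data stands in for tokenizer.word_ids() and per-subword predictions;
# None marks special tokens (<s>, </s>) that carry no word.
word_ids = [None, 0, 0, 1, 2, 2, None]
subword_tags = ["X", "NOUN", "NOUN", "VERB", "ADP", "ADP", "X"]

word_tags = {}
for wid, tag in zip(word_ids, subword_tags):
    if wid is not None and wid not in word_tags:
        word_tags[wid] = tag  # keep the first subword's tag for each word
print([word_tags[i] for i in sorted(word_tags)])  # ['NOUN', 'VERB', 'ADP']
```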
|
wietsedv/xlm-roberta-base-ft-udpos28-pcm | a0e4eddb78b41c3b5c5b8616a7aeb926c5f89b96 | 2022-02-25T09:59:11.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"pcm",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-pcm | 1 | null | transformers | 30,586 |
---
language:
- pcm
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-pcm
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 77.2
- type: accuracy
name: Dutch Test accuracy
value: 75.2
- type: accuracy
name: German Test accuracy
value: 73.2
- type: accuracy
name: Italian Test accuracy
value: 68.9
- type: accuracy
name: French Test accuracy
value: 74.0
- type: accuracy
name: Spanish Test accuracy
value: 75.1
- type: accuracy
name: Russian Test accuracy
value: 70.3
- type: accuracy
name: Swedish Test accuracy
value: 78.9
- type: accuracy
name: Norwegian Test accuracy
value: 74.3
- type: accuracy
name: Danish Test accuracy
value: 73.4
- type: accuracy
name: Low Saxon Test accuracy
value: 37.9
- type: accuracy
name: Akkadian Test accuracy
value: 28.0
- type: accuracy
name: Armenian Test accuracy
value: 65.4
- type: accuracy
name: Welsh Test accuracy
value: 59.7
- type: accuracy
name: Old East Slavic Test accuracy
value: 61.0
- type: accuracy
name: Albanian Test accuracy
value: 66.1
- type: accuracy
name: Slovenian Test accuracy
value: 67.6
- type: accuracy
name: Guajajara Test accuracy
value: 16.1
- type: accuracy
name: Kurmanji Test accuracy
value: 54.8
- type: accuracy
name: Turkish Test accuracy
value: 58.2
- type: accuracy
name: Finnish Test accuracy
value: 67.4
- type: accuracy
name: Indonesian Test accuracy
value: 68.5
- type: accuracy
name: Ukrainian Test accuracy
value: 68.1
- type: accuracy
name: Polish Test accuracy
value: 68.8
- type: accuracy
name: Portuguese Test accuracy
value: 72.9
- type: accuracy
name: Kazakh Test accuracy
value: 60.1
- type: accuracy
name: Latin Test accuracy
value: 64.3
- type: accuracy
name: Old French Test accuracy
value: 51.1
- type: accuracy
name: Buryat Test accuracy
value: 38.9
- type: accuracy
name: Kaapor Test accuracy
value: 16.7
- type: accuracy
name: Korean Test accuracy
value: 52.4
- type: accuracy
name: Estonian Test accuracy
value: 68.3
- type: accuracy
name: Croatian Test accuracy
value: 73.0
- type: accuracy
name: Gothic Test accuracy
value: 21.4
- type: accuracy
name: Swiss German Test accuracy
value: 33.4
- type: accuracy
name: Assyrian Test accuracy
value: 0.0
- type: accuracy
name: North Sami Test accuracy
value: 24.3
- type: accuracy
name: Naija Test accuracy
value: 97.9
- type: accuracy
name: Latvian Test accuracy
value: 66.3
- type: accuracy
name: Chinese Test accuracy
value: 34.3
- type: accuracy
name: Tagalog Test accuracy
value: 49.9
- type: accuracy
name: Bambara Test accuracy
value: 16.7
- type: accuracy
name: Lithuanian Test accuracy
value: 65.7
- type: accuracy
name: Galician Test accuracy
value: 72.4
- type: accuracy
name: Vietnamese Test accuracy
value: 54.3
- type: accuracy
name: Greek Test accuracy
value: 73.3
- type: accuracy
name: Catalan Test accuracy
value: 73.6
- type: accuracy
name: Czech Test accuracy
value: 69.5
- type: accuracy
name: Erzya Test accuracy
value: 22.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 36.6
- type: accuracy
name: Thai Test accuracy
value: 65.4
- type: accuracy
name: Marathi Test accuracy
value: 50.3
- type: accuracy
name: Basque Test accuracy
value: 58.5
- type: accuracy
name: Slovak Test accuracy
value: 70.4
- type: accuracy
name: Kiche Test accuracy
value: 8.0
- type: accuracy
name: Yoruba Test accuracy
value: 6.1
- type: accuracy
name: Warlpiri Test accuracy
value: 15.4
- type: accuracy
name: Tamil Test accuracy
value: 60.1
- type: accuracy
name: Maltese Test accuracy
value: 12.2
- type: accuracy
name: Ancient Greek Test accuracy
value: 45.8
- type: accuracy
name: Icelandic Test accuracy
value: 72.5
- type: accuracy
name: Mbya Guarani Test accuracy
value: 11.4
- type: accuracy
name: Urdu Test accuracy
value: 59.1
- type: accuracy
name: Romanian Test accuracy
value: 64.8
- type: accuracy
name: Persian Test accuracy
value: 67.2
- type: accuracy
name: Apurina Test accuracy
value: 15.5
- type: accuracy
name: Japanese Test accuracy
value: 26.1
- type: accuracy
name: Hungarian Test accuracy
value: 68.6
- type: accuracy
name: Hindi Test accuracy
value: 65.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 30.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 21.2
- type: accuracy
name: Faroese Test accuracy
value: 61.6
- type: accuracy
name: Sanskrit Test accuracy
value: 25.6
- type: accuracy
name: Livvi Test accuracy
value: 39.7
- type: accuracy
name: Arabic Test accuracy
value: 63.5
- type: accuracy
name: Wolof Test accuracy
value: 15.9
- type: accuracy
name: Bulgarian Test accuracy
value: 74.6
- type: accuracy
name: Akuntsu Test accuracy
value: 26.5
- type: accuracy
name: Makurap Test accuracy
value: 11.6
- type: accuracy
name: Kangri Test accuracy
value: 27.8
- type: accuracy
name: Breton Test accuracy
value: 46.6
- type: accuracy
name: Telugu Test accuracy
value: 59.4
- type: accuracy
name: Cantonese Test accuracy
value: 30.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 36.7
- type: accuracy
name: Karelian Test accuracy
value: 45.9
- type: accuracy
name: Upper Sorbian Test accuracy
value: 49.3
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 42.5
- type: accuracy
name: Komi Zyrian Test accuracy
value: 18.4
- type: accuracy
name: Irish Test accuracy
value: 48.3
- type: accuracy
name: Nayini Test accuracy
value: 24.4
- type: accuracy
name: Munduruku Test accuracy
value: 16.1
- type: accuracy
name: Manx Test accuracy
value: 14.7
- type: accuracy
name: Skolt Sami Test accuracy
value: 5.4
- type: accuracy
name: Afrikaans Test accuracy
value: 76.5
- type: accuracy
name: Old Turkish Test accuracy
value: 0.0
- type: accuracy
name: Tupinamba Test accuracy
value: 16.3
- type: accuracy
name: Belarusian Test accuracy
value: 70.7
- type: accuracy
name: Serbian Test accuracy
value: 74.8
- type: accuracy
name: Moksha Test accuracy
value: 24.1
- type: accuracy
name: Western Armenian Test accuracy
value: 59.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 45.4
- type: accuracy
name: Khunsari Test accuracy
value: 21.6
- type: accuracy
name: Hebrew Test accuracy
value: 65.6
- type: accuracy
name: Uyghur Test accuracy
value: 55.0
- type: accuracy
name: Chukchi Test accuracy
value: 12.6
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Naija
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-pcm")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-pcm")
```
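The model returns one logit vector per token, and the predicted tag is the arg-max over the label set. A minimal offline sketch with made-up scores (the real id-to-label mapping lives in `model.config.id2label`):

```python
# Hypothetical sketch: picking the highest-scoring tag per token.
# The scores are dummy values, not actual model output.
id2label = {0: "DET", 1: "NOUN", 2: "VERB"}  # real mapping: model.config.id2label
logits = [
    [2.1, 0.3, 0.1],  # token 1 -> DET
    [0.2, 1.9, 0.4],  # token 2 -> NOUN
    [0.1, 0.2, 2.5],  # token 3 -> VERB
]
preds = [id2label[max(range(len(row)), key=row.__getitem__)] for row in logits]
print(preds)  # ['DET', 'NOUN', 'VERB']
```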
|
wietsedv/xlm-roberta-base-ft-udpos28-sl | 1393c2c2e13e15b8d2feb942f8eb828450e5162f | 2022-02-25T09:59:22.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sl",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-sl | 1 | null | transformers | 30,587 |
---
language:
- sl
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-sl
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 81.7
- type: accuracy
name: Dutch Test accuracy
value: 83.1
- type: accuracy
name: German Test accuracy
value: 81.2
- type: accuracy
name: Italian Test accuracy
value: 81.3
- type: accuracy
name: French Test accuracy
value: 79.9
- type: accuracy
name: Spanish Test accuracy
value: 84.9
- type: accuracy
name: Russian Test accuracy
value: 91.5
- type: accuracy
name: Swedish Test accuracy
value: 86.0
- type: accuracy
name: Norwegian Test accuracy
value: 78.4
- type: accuracy
name: Danish Test accuracy
value: 83.7
- type: accuracy
name: Low Saxon Test accuracy
value: 41.9
- type: accuracy
name: Akkadian Test accuracy
value: 17.3
- type: accuracy
name: Armenian Test accuracy
value: 84.3
- type: accuracy
name: Welsh Test accuracy
value: 65.5
- type: accuracy
name: Old East Slavic Test accuracy
value: 74.1
- type: accuracy
name: Albanian Test accuracy
value: 76.6
- type: accuracy
name: Slovenian Test accuracy
value: 97.6
- type: accuracy
name: Guajajara Test accuracy
value: 22.5
- type: accuracy
name: Kurmanji Test accuracy
value: 75.7
- type: accuracy
name: Turkish Test accuracy
value: 75.4
- type: accuracy
name: Finnish Test accuracy
value: 81.2
- type: accuracy
name: Indonesian Test accuracy
value: 81.8
- type: accuracy
name: Ukrainian Test accuracy
value: 92.6
- type: accuracy
name: Polish Test accuracy
value: 93.2
- type: accuracy
name: Portuguese Test accuracy
value: 84.0
- type: accuracy
name: Kazakh Test accuracy
value: 79.4
- type: accuracy
name: Latin Test accuracy
value: 76.7
- type: accuracy
name: Old French Test accuracy
value: 40.3
- type: accuracy
name: Buryat Test accuracy
value: 53.1
- type: accuracy
name: Kaapor Test accuracy
value: 11.2
- type: accuracy
name: Korean Test accuracy
value: 61.9
- type: accuracy
name: Estonian Test accuracy
value: 82.2
- type: accuracy
name: Croatian Test accuracy
value: 93.1
- type: accuracy
name: Gothic Test accuracy
value: 6.2
- type: accuracy
name: Swiss German Test accuracy
value: 40.7
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 22.5
- type: accuracy
name: Naija Test accuracy
value: 33.9
- type: accuracy
name: Latvian Test accuracy
value: 86.0
- type: accuracy
name: Chinese Test accuracy
value: 39.7
- type: accuracy
name: Tagalog Test accuracy
value: 72.0
- type: accuracy
name: Bambara Test accuracy
value: 23.5
- type: accuracy
name: Lithuanian Test accuracy
value: 87.3
- type: accuracy
name: Galician Test accuracy
value: 82.5
- type: accuracy
name: Vietnamese Test accuracy
value: 67.3
- type: accuracy
name: Greek Test accuracy
value: 79.7
- type: accuracy
name: Catalan Test accuracy
value: 79.0
- type: accuracy
name: Czech Test accuracy
value: 94.1
- type: accuracy
name: Erzya Test accuracy
value: 40.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 46.5
- type: accuracy
name: Thai Test accuracy
value: 53.2
- type: accuracy
name: Marathi Test accuracy
value: 87.7
- type: accuracy
name: Basque Test accuracy
value: 74.6
- type: accuracy
name: Slovak Test accuracy
value: 95.5
- type: accuracy
name: Kiche Test accuracy
value: 24.7
- type: accuracy
name: Yoruba Test accuracy
value: 17.1
- type: accuracy
name: Warlpiri Test accuracy
value: 27.5
- type: accuracy
name: Tamil Test accuracy
value: 83.4
- type: accuracy
name: Maltese Test accuracy
value: 18.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 60.8
- type: accuracy
name: Icelandic Test accuracy
value: 80.0
- type: accuracy
name: Mbya Guarani Test accuracy
value: 23.7
- type: accuracy
name: Urdu Test accuracy
value: 61.6
- type: accuracy
name: Romanian Test accuracy
value: 82.4
- type: accuracy
name: Persian Test accuracy
value: 78.6
- type: accuracy
name: Apurina Test accuracy
value: 29.2
- type: accuracy
name: Japanese Test accuracy
value: 25.5
- type: accuracy
name: Hungarian Test accuracy
value: 74.6
- type: accuracy
name: Hindi Test accuracy
value: 67.4
- type: accuracy
name: Classical Chinese Test accuracy
value: 14.8
- type: accuracy
name: Komi Permyak Test accuracy
value: 40.3
- type: accuracy
name: Faroese Test accuracy
value: 75.0
- type: accuracy
name: Sanskrit Test accuracy
value: 14.3
- type: accuracy
name: Livvi Test accuracy
value: 58.2
- type: accuracy
name: Arabic Test accuracy
value: 79.8
- type: accuracy
name: Wolof Test accuracy
value: 24.7
- type: accuracy
name: Bulgarian Test accuracy
value: 90.4
- type: accuracy
name: Akuntsu Test accuracy
value: 20.6
- type: accuracy
name: Makurap Test accuracy
value: 6.2
- type: accuracy
name: Kangri Test accuracy
value: 44.2
- type: accuracy
name: Breton Test accuracy
value: 53.2
- type: accuracy
name: Telugu Test accuracy
value: 83.4
- type: accuracy
name: Cantonese Test accuracy
value: 48.9
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 41.9
- type: accuracy
name: Karelian Test accuracy
value: 64.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 79.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 67.2
- type: accuracy
name: Komi Zyrian Test accuracy
value: 33.3
- type: accuracy
name: Irish Test accuracy
value: 63.0
- type: accuracy
name: Nayini Test accuracy
value: 32.1
- type: accuracy
name: Munduruku Test accuracy
value: 10.1
- type: accuracy
name: Manx Test accuracy
value: 22.0
- type: accuracy
name: Skolt Sami Test accuracy
value: 27.4
- type: accuracy
name: Afrikaans Test accuracy
value: 74.0
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 22.5
- type: accuracy
name: Belarusian Test accuracy
value: 90.2
- type: accuracy
name: Serbian Test accuracy
value: 94.4
- type: accuracy
name: Moksha Test accuracy
value: 37.6
- type: accuracy
name: Western Armenian Test accuracy
value: 73.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 55.0
- type: accuracy
name: Khunsari Test accuracy
value: 32.4
- type: accuracy
name: Hebrew Test accuracy
value: 81.2
- type: accuracy
name: Uyghur Test accuracy
value: 72.1
- type: accuracy
name: Chukchi Test accuracy
value: 30.2
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Slovenian
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sl")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sl")
```
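The per-language test accuracies listed in the metadata above are plain token-level accuracy: the share of tokens whose predicted tag matches the gold tag. A small illustrative computation with made-up tags:

```python
# Illustrative sketch of token-level accuracy; the tags here are invented.
gold = ["NOUN", "VERB", "DET", "NOUN"]
pred = ["NOUN", "VERB", "NOUN", "NOUN"]  # one mistake out of four tokens
accuracy = 100.0 * sum(g == p for g, p in zip(gold, pred)) / len(gold)
print(round(accuracy, 1))  # 75.0
```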
|
wietsedv/xlm-roberta-base-ft-udpos28-sme | a115d0fea7ebf875e7b2d3d7537b58bfd2a71e43 | 2022-02-25T09:59:24.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sme",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-sme | 1 | null | transformers | 30,588 |
---
language:
- sme
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-sme
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 48.1
- type: accuracy
name: Dutch Test accuracy
value: 49.5
- type: accuracy
name: German Test accuracy
value: 40.4
- type: accuracy
name: Italian Test accuracy
value: 48.9
- type: accuracy
name: French Test accuracy
value: 43.9
- type: accuracy
name: Spanish Test accuracy
value: 47.1
- type: accuracy
name: Russian Test accuracy
value: 57.3
- type: accuracy
name: Swedish Test accuracy
value: 47.9
- type: accuracy
name: Norwegian Test accuracy
value: 45.5
- type: accuracy
name: Danish Test accuracy
value: 50.7
- type: accuracy
name: Low Saxon Test accuracy
value: 38.7
- type: accuracy
name: Akkadian Test accuracy
value: 29.6
- type: accuracy
name: Armenian Test accuracy
value: 63.0
- type: accuracy
name: Welsh Test accuracy
value: 36.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 46.0
- type: accuracy
name: Albanian Test accuracy
value: 47.8
- type: accuracy
name: Slovenian Test accuracy
value: 45.5
- type: accuracy
name: Guajajara Test accuracy
value: 31.8
- type: accuracy
name: Kurmanji Test accuracy
value: 42.5
- type: accuracy
name: Turkish Test accuracy
value: 56.3
- type: accuracy
name: Finnish Test accuracy
value: 64.7
- type: accuracy
name: Indonesian Test accuracy
value: 59.3
- type: accuracy
name: Ukrainian Test accuracy
value: 56.6
- type: accuracy
name: Polish Test accuracy
value: 55.0
- type: accuracy
name: Portuguese Test accuracy
value: 52.0
- type: accuracy
name: Kazakh Test accuracy
value: 62.2
- type: accuracy
name: Latin Test accuracy
value: 50.3
- type: accuracy
name: Old French Test accuracy
value: 30.8
- type: accuracy
name: Buryat Test accuracy
value: 50.6
- type: accuracy
name: Kaapor Test accuracy
value: 18.3
- type: accuracy
name: Korean Test accuracy
value: 51.7
- type: accuracy
name: Estonian Test accuracy
value: 65.2
- type: accuracy
name: Croatian Test accuracy
value: 55.9
- type: accuracy
name: Gothic Test accuracy
value: 31.1
- type: accuracy
name: Swiss German Test accuracy
value: 37.1
- type: accuracy
name: Assyrian Test accuracy
value: 24.1
- type: accuracy
name: North Sami Test accuracy
value: 87.7
- type: accuracy
name: Naija Test accuracy
value: 19.8
- type: accuracy
name: Latvian Test accuracy
value: 64.2
- type: accuracy
name: Chinese Test accuracy
value: 33.9
- type: accuracy
name: Tagalog Test accuracy
value: 46.3
- type: accuracy
name: Bambara Test accuracy
value: 30.2
- type: accuracy
name: Lithuanian Test accuracy
value: 63.5
- type: accuracy
name: Galician Test accuracy
value: 48.5
- type: accuracy
name: Vietnamese Test accuracy
value: 46.0
- type: accuracy
name: Greek Test accuracy
value: 45.6
- type: accuracy
name: Catalan Test accuracy
value: 45.8
- type: accuracy
name: Czech Test accuracy
value: 54.5
- type: accuracy
name: Erzya Test accuracy
value: 45.8
- type: accuracy
name: Bhojpuri Test accuracy
value: 34.3
- type: accuracy
name: Thai Test accuracy
value: 23.9
- type: accuracy
name: Marathi Test accuracy
value: 67.5
- type: accuracy
name: Basque Test accuracy
value: 59.6
- type: accuracy
name: Slovak Test accuracy
value: 57.7
- type: accuracy
name: Kiche Test accuracy
value: 35.6
- type: accuracy
name: Yoruba Test accuracy
value: 31.0
- type: accuracy
name: Warlpiri Test accuracy
value: 43.3
- type: accuracy
name: Tamil Test accuracy
value: 60.4
- type: accuracy
name: Maltese Test accuracy
value: 34.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 41.8
- type: accuracy
name: Icelandic Test accuracy
value: 47.2
- type: accuracy
name: Mbya Guarani Test accuracy
value: 36.0
- type: accuracy
name: Urdu Test accuracy
value: 36.8
- type: accuracy
name: Romanian Test accuracy
value: 50.1
- type: accuracy
name: Persian Test accuracy
value: 45.8
- type: accuracy
name: Apurina Test accuracy
value: 48.4
- type: accuracy
name: Japanese Test accuracy
value: 30.6
- type: accuracy
name: Hungarian Test accuracy
value: 54.7
- type: accuracy
name: Hindi Test accuracy
value: 39.5
- type: accuracy
name: Classical Chinese Test accuracy
value: 18.3
- type: accuracy
name: Komi Permyak Test accuracy
value: 51.1
- type: accuracy
name: Faroese Test accuracy
value: 52.2
- type: accuracy
name: Sanskrit Test accuracy
value: 28.4
- type: accuracy
name: Livvi Test accuracy
value: 57.7
- type: accuracy
name: Arabic Test accuracy
value: 40.5
- type: accuracy
name: Wolof Test accuracy
value: 36.2
- type: accuracy
name: Bulgarian Test accuracy
value: 54.1
- type: accuracy
name: Akuntsu Test accuracy
value: 31.6
- type: accuracy
name: Makurap Test accuracy
value: 17.8
- type: accuracy
name: Kangri Test accuracy
value: 33.8
- type: accuracy
name: Breton Test accuracy
value: 47.0
- type: accuracy
name: Telugu Test accuracy
value: 58.7
- type: accuracy
name: Cantonese Test accuracy
value: 36.0
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 35.1
- type: accuracy
name: Karelian Test accuracy
value: 57.5
- type: accuracy
name: Upper Sorbian Test accuracy
value: 51.1
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 44.5
- type: accuracy
name: Komi Zyrian Test accuracy
value: 42.2
- type: accuracy
name: Irish Test accuracy
value: 34.8
- type: accuracy
name: Nayini Test accuracy
value: 41.0
- type: accuracy
name: Munduruku Test accuracy
value: 21.6
- type: accuracy
name: Manx Test accuracy
value: 28.0
- type: accuracy
name: Skolt Sami Test accuracy
value: 49.2
- type: accuracy
name: Afrikaans Test accuracy
value: 43.2
- type: accuracy
name: Old Turkish Test accuracy
value: 38.9
- type: accuracy
name: Tupinamba Test accuracy
value: 44.2
- type: accuracy
name: Belarusian Test accuracy
value: 58.7
- type: accuracy
name: Serbian Test accuracy
value: 55.9
- type: accuracy
name: Moksha Test accuracy
value: 45.0
- type: accuracy
name: Western Armenian Test accuracy
value: 56.1
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 31.0
- type: accuracy
name: Khunsari Test accuracy
value: 27.0
- type: accuracy
name: Hebrew Test accuracy
value: 61.5
- type: accuracy
name: Uyghur Test accuracy
value: 61.4
- type: accuracy
name: Chukchi Test accuracy
value: 41.5
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: North Sami
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sme")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sme")
```
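The metadata above reports one test accuracy per evaluation language; a simple cross-lingual summary is the mean of those values. An illustrative sketch using a hand-picked subset of the scores listed above:

```python
# Illustrative sketch: averaging a few of the per-language accuracies above.
# Only three of the listed languages are included here, for brevity.
scores = {"English": 48.1, "Finnish": 64.7, "North Sami": 87.7}
mean_accuracy = sum(scores.values()) / len(scores)
print(round(mean_accuracy, 1))  # 66.8
```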
|
wietsedv/xlm-roberta-base-ft-udpos28-sv | 2f9a7219927445d3fb837455a2d9a593fa8d9201 | 2022-02-25T09:59:27.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sv",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-sv | 1 | null | transformers | 30,589 |
---
language:
- sv
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-sv
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 92.3
- type: accuracy
name: Dutch Test accuracy
value: 90.0
- type: accuracy
name: German Test accuracy
value: 91.1
- type: accuracy
name: Italian Test accuracy
value: 88.0
- type: accuracy
name: French Test accuracy
value: 88.2
- type: accuracy
name: Spanish Test accuracy
value: 91.1
- type: accuracy
name: Russian Test accuracy
value: 91.4
- type: accuracy
name: Swedish Test accuracy
value: 97.9
- type: accuracy
name: Norwegian Test accuracy
value: 89.7
- type: accuracy
name: Danish Test accuracy
value: 92.9
- type: accuracy
name: Low Saxon Test accuracy
value: 57.4
- type: accuracy
name: Akkadian Test accuracy
value: 40.4
- type: accuracy
name: Armenian Test accuracy
value: 87.5
- type: accuracy
name: Welsh Test accuracy
value: 69.6
- type: accuracy
name: Old East Slavic Test accuracy
value: 76.2
- type: accuracy
name: Albanian Test accuracy
value: 80.3
- type: accuracy
name: Slovenian Test accuracy
value: 81.0
- type: accuracy
name: Guajajara Test accuracy
value: 35.1
- type: accuracy
name: Kurmanji Test accuracy
value: 77.3
- type: accuracy
name: Turkish Test accuracy
value: 79.2
- type: accuracy
name: Finnish Test accuracy
value: 87.0
- type: accuracy
name: Indonesian Test accuracy
value: 84.2
- type: accuracy
name: Ukrainian Test accuracy
value: 90.4
- type: accuracy
name: Polish Test accuracy
value: 88.9
- type: accuracy
name: Portuguese Test accuracy
value: 90.1
- type: accuracy
name: Kazakh Test accuracy
value: 83.4
- type: accuracy
name: Latin Test accuracy
value: 79.1
- type: accuracy
name: Old French Test accuracy
value: 62.6
- type: accuracy
name: Buryat Test accuracy
value: 63.0
- type: accuracy
name: Kaapor Test accuracy
value: 20.8
- type: accuracy
name: Korean Test accuracy
value: 64.3
- type: accuracy
name: Estonian Test accuracy
value: 89.6
- type: accuracy
name: Croatian Test accuracy
value: 90.8
- type: accuracy
name: Gothic Test accuracy
value: 26.0
- type: accuracy
name: Swiss German Test accuracy
value: 51.8
- type: accuracy
name: Assyrian Test accuracy
value: 17.2
- type: accuracy
name: North Sami Test accuracy
value: 45.4
- type: accuracy
name: Naija Test accuracy
value: 48.1
- type: accuracy
name: Latvian Test accuracy
value: 87.1
- type: accuracy
name: Chinese Test accuracy
value: 48.5
- type: accuracy
name: Tagalog Test accuracy
value: 72.3
- type: accuracy
name: Bambara Test accuracy
value: 31.8
- type: accuracy
name: Lithuanian Test accuracy
value: 86.2
- type: accuracy
name: Galician Test accuracy
value: 88.1
- type: accuracy
name: Vietnamese Test accuracy
value: 66.3
- type: accuracy
name: Greek Test accuracy
value: 88.1
- type: accuracy
name: Catalan Test accuracy
value: 90.1
- type: accuracy
name: Czech Test accuracy
value: 90.1
- type: accuracy
name: Erzya Test accuracy
value: 50.8
- type: accuracy
name: Bhojpuri Test accuracy
value: 51.7
- type: accuracy
name: Thai Test accuracy
value: 66.4
- type: accuracy
name: Marathi Test accuracy
value: 86.5
- type: accuracy
name: Basque Test accuracy
value: 76.4
- type: accuracy
name: Slovak Test accuracy
value: 90.5
- type: accuracy
name: Kiche Test accuracy
value: 42.4
- type: accuracy
name: Yoruba Test accuracy
value: 31.2
- type: accuracy
name: Warlpiri Test accuracy
value: 42.5
- type: accuracy
name: Tamil Test accuracy
value: 85.3
- type: accuracy
name: Maltese Test accuracy
value: 30.6
- type: accuracy
name: Ancient Greek Test accuracy
value: 63.0
- type: accuracy
name: Icelandic Test accuracy
value: 85.3
- type: accuracy
name: Mbya Guarani Test accuracy
value: 32.3
- type: accuracy
name: Urdu Test accuracy
value: 67.6
- type: accuracy
name: Romanian Test accuracy
value: 85.5
- type: accuracy
name: Persian Test accuracy
value: 77.4
- type: accuracy
name: Apurina Test accuracy
value: 47.4
- type: accuracy
name: Japanese Test accuracy
value: 35.5
- type: accuracy
name: Hungarian Test accuracy
value: 87.1
- type: accuracy
name: Hindi Test accuracy
value: 75.1
- type: accuracy
name: Classical Chinese Test accuracy
value: 30.8
- type: accuracy
name: Komi Permyak Test accuracy
value: 52.4
- type: accuracy
name: Faroese Test accuracy
value: 80.3
- type: accuracy
name: Sanskrit Test accuracy
value: 40.7
- type: accuracy
name: Livvi Test accuracy
value: 68.5
- type: accuracy
name: Arabic Test accuracy
value: 82.0
- type: accuracy
name: Wolof Test accuracy
value: 37.4
- type: accuracy
name: Bulgarian Test accuracy
value: 92.9
- type: accuracy
name: Akuntsu Test accuracy
value: 41.1
- type: accuracy
name: Makurap Test accuracy
value: 22.6
- type: accuracy
name: Kangri Test accuracy
value: 47.1
- type: accuracy
name: Breton Test accuracy
value: 64.3
- type: accuracy
name: Telugu Test accuracy
value: 84.9
- type: accuracy
name: Cantonese Test accuracy
value: 48.8
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 51.1
- type: accuracy
name: Karelian Test accuracy
value: 74.1
- type: accuracy
name: Upper Sorbian Test accuracy
value: 77.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 69.6
- type: accuracy
name: Komi Zyrian Test accuracy
value: 44.5
- type: accuracy
name: Irish Test accuracy
value: 70.5
- type: accuracy
name: Nayini Test accuracy
value: 44.9
- type: accuracy
name: Munduruku Test accuracy
value: 24.3
- type: accuracy
name: Manx Test accuracy
value: 34.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 42.0
- type: accuracy
name: Afrikaans Test accuracy
value: 92.1
- type: accuracy
name: Old Turkish Test accuracy
value: 40.3
- type: accuracy
name: Tupinamba Test accuracy
value: 41.4
- type: accuracy
name: Belarusian Test accuracy
value: 89.8
- type: accuracy
name: Serbian Test accuracy
value: 91.5
- type: accuracy
name: Moksha Test accuracy
value: 46.7
- type: accuracy
name: Western Armenian Test accuracy
value: 80.3
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 60.4
- type: accuracy
name: Khunsari Test accuracy
value: 45.9
- type: accuracy
name: Hebrew Test accuracy
value: 87.5
- type: accuracy
name: Uyghur Test accuracy
value: 76.9
- type: accuracy
name: Chukchi Test accuracy
value: 35.9
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Swedish
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sv")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sv")
```
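For inference, the loaded checkpoint can be wrapped in a token-classification pipeline. This is a minimal sketch, not from the original card: the `aggregation_strategy` argument and the example sentence are illustrative assumptions.

```python
from transformers import pipeline

# Build a POS-tagging pipeline from the fine-tuned checkpoint.
# aggregation_strategy="simple" merges subword pieces into whole words.
pos = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-sv",
    aggregation_strategy="simple",
)

# Example sentence (Swedish: "She is reading a book.") -- illustrative only.
for token in pos("Hon läser en bok."):
    print(token["word"], token["entity_group"])
```

Each entry in the pipeline output carries the word, its predicted UPOS tag, and a confidence score.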
|
cammy/t5-base-finetuned-weaksup-1000 | 87d55b016c71958c857870d31ffd7f5fddc3ba95 | 2022-02-24T10:26:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/t5-base-finetuned-weaksup-1000 | 1 | null | transformers | 30,590 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-weaksup-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-weaksup-1000
This model is a fine-tuned version of [cammy/t5-base-finetuned-weaksup-1000](https://huggingface.co/cammy/t5-base-finetuned-weaksup-1000) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6699
- Rouge1: 22.2079
- Rouge2: 9.54
- Rougel: 19.9593
- Rougelsum: 20.2524
- Gen Len: 18.17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.6257 | 1.0 | 1000 | 1.6699 | 22.2079 | 9.54 | 19.9593 | 20.2524 | 18.17 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
lhbit20010120/distilgpt2-finetuned-wikitext2 | 6571f4ca3ee7f706c407895af5ecb8bef3f63c17 | 2022-02-24T10:45:51.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | lhbit20010120 | null | lhbit20010120/distilgpt2-finetuned-wikitext2 | 1 | null | transformers | 30,591 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.633 | 2.0 | 4668 | 3.6455 |
| 3.6078 | 3.0 | 7002 | 3.6423 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
izzy-lazerson/wav2vec2-base-timit-demo-colab | 24a34e63191a7fefcaf4db767ffc3daa60d95dc9 | 2022-02-24T13:44:39.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | izzy-lazerson | null | izzy-lazerson/wav2vec2-base-timit-demo-colab | 1 | null | transformers | 30,592 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4545
- Wer: 0.3450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3801 | 4.0 | 500 | 1.1501 | 0.8820 |
| 0.561 | 8.0 | 1000 | 0.4583 | 0.4211 |
| 0.2198 | 12.0 | 1500 | 0.4467 | 0.3997 |
| 0.1255 | 16.0 | 2000 | 0.4390 | 0.3677 |
| 0.0862 | 20.0 | 2500 | 0.4934 | 0.3603 |
| 0.0617 | 24.0 | 3000 | 0.4641 | 0.3549 |
| 0.0465 | 28.0 | 3500 | 0.4545 | 0.3450 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
andresestevez/bert-finetuned-squad-accelerate | 7338aab55a84d553b9a4a41f9b46f9e20b577333 | 2022-03-02T03:12:15.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | andresestevez | null | andresestevez/bert-finetuned-squad-accelerate | 1 | null | transformers | 30,593 | Entry not found |
Francesco/resnet50 | 60efd8ae5dc9bf423ec7b8e62c61b1b8284536a2 | 2022-03-01T15:04:37.000Z | [
"pytorch",
"resnet",
"image-classification",
"transformers"
] | image-classification | false | Francesco | null | Francesco/resnet50 | 1 | null | transformers | 30,594 | Entry not found |
Francesco/resnet101 | d5b89071222d9a3dccfb174fa5c83dad26c82e7d | 2022-03-01T15:06:55.000Z | [
"pytorch",
"resnet",
"image-classification",
"transformers"
] | image-classification | false | Francesco | null | Francesco/resnet101 | 1 | null | transformers | 30,595 | Entry not found |
mrm8488/biomedtra-base-es | e33448a03f869e96e836aa23d55b8d85b984b1c5 | 2022-03-25T16:58:53.000Z | [
"pytorch",
"tensorboard",
"electra",
"pretraining",
"transformers"
] | null | false | mrm8488 | null | mrm8488/biomedtra-base-es | 1 | null | transformers | 30,596 | Entry not found |
Shakaw/DialoGPT-small-spongebot | 939412916bdb9d179228ee9237b54ee51b611c7a | 2022-02-24T13:34:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Shakaw | null | Shakaw/DialoGPT-small-spongebot | 1 | null | transformers | 30,597 | ---
tags:
- conversational
---
# Spongebob DialoGPT model |
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6 | f418599aa60108afe24eb219cb6567e0d7193c2b | 2022-02-24T21:09:03.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6 | 1 | null | transformers | 30,598 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8 | 99169292576e90b4b3ff8904c0dfd7aae1835b5a | 2022-02-24T21:24:10.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | anas-awadalla | null | anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8 | 1 | null | transformers | 30,599 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|