modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Contrastive-Tension/BERT-Large-CT-STSb | 00c60a5feb749b2d2eb550813d954b0d4308e25d | 2021-05-18T17:56:58.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Large-CT-STSb | 3 | null | transformers | 20,600 | Entry not found |
Culmenus/opus-mt-de-is-finetuned-de-to-is | faa1dd9beea86f2d6fefa3dc52bb4219072ea87a | 2021-11-11T02:12:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Culmenus | null | Culmenus/opus-mt-de-is-finetuned-de-to-is | 3 | null | transformers | 20,601 | Entry not found |
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2 | 1808f19b2b6b032ef95b384e55b07200c6c1839a | 2021-11-11T02:20:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Culmenus | null | Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2 | 3 | null | transformers | 20,602 | Entry not found |
DJSammy/bert-base-swedish-uncased_BotXO-ai | 725478697a1a07f31063cd26c8f955f64538abfe | 2020-10-25T03:42:06.000Z | [
"pytorch",
"transformers"
] | null | false | DJSammy | null | DJSammy/bert-base-swedish-uncased_BotXO-ai | 3 | null | transformers | 20,603 | Entry not found |
Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda | 0e7e407ca4613e493145c52979555f86bfa5b442 | 2021-06-15T20:11:29.000Z | [
"pytorch",
"bert",
"fill-mask",
"rw",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda | 3 | null | transformers | 20,604 | ---
language: rw
datasets:
---
# bert-base-multilingual-cased-finetuned-kinyarwanda
## Model description
**bert-base-multilingual-cased-finetuned-kinyarwanda** is a **Kinyarwanda BERT** model obtained by fine-tuning the **bert-base-multilingual-cased** model on Kinyarwanda language texts. It provides **better performance** than multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on a Kinyarwanda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda')
>>> unmasker("Twabonye ko igihe mu [MASK] hazaba hari ikirango abantu bakunze")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [KIRNEWS](https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus) + [BBC Gahuza](https://www.bbc.com/gahuza)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset | mBERT F1 | rw_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 72.20 | 77.57
### BibTeX entry and citation info
By David Adelani
```
```
|
Declan/Breitbart_model_v2 | 44d6d7eec8b7377b8b4c8d810bae74d32edfe9b3 | 2021-12-12T04:21:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Breitbart_model_v2 | 3 | null | transformers | 20,605 | Entry not found |
Declan/Breitbart_model_v5 | 7facb53fdf822f9e16f287a94034fc16865fb9d9 | 2021-12-15T06:54:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Breitbart_model_v5 | 3 | null | transformers | 20,606 | Entry not found |
Declan/Breitbart_model_v8 | ac019e3afa7b0ecf31a145c57f19af7ef9e7e96b | 2021-12-19T21:00:34.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Breitbart_model_v8 | 3 | null | transformers | 20,607 | Entry not found |
Declan/CNN_model_v4 | f53fb9061be1b9cfa4adfa726a69f4dfbaa623a7 | 2021-12-15T12:30:03.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/CNN_model_v4 | 3 | null | transformers | 20,608 | Entry not found |
Declan/CNN_model_v5 | fcb59084ad3ee2ae125867b5e52ac8dbc0ef28b3 | 2021-12-15T13:11:12.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/CNN_model_v5 | 3 | null | transformers | 20,609 | Entry not found |
Declan/FoxNews_model_v4 | d7af62234c0e9766828facbf96f5f1da5636faad | 2021-12-15T15:17:50.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/FoxNews_model_v4 | 3 | null | transformers | 20,610 | Entry not found |
Declan/FoxNews_model_v6 | ecabd1119f31b7954c6c0e95915b34cdf01f6532 | 2021-12-19T12:03:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/FoxNews_model_v6 | 3 | null | transformers | 20,611 | Entry not found |
Declan/FoxNews_model_v8 | 902408c9312e23a08f208b69a259e792fcfcc54f | 2021-12-19T22:28:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/FoxNews_model_v8 | 3 | null | transformers | 20,612 | Entry not found |
Declan/Politico_model_v2 | bf5c49721e2ad75db458df7e71180a588fd847bb | 2021-12-16T05:05:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Politico_model_v2 | 3 | null | transformers | 20,613 | Entry not found |
Declan/Reuters_model_v1 | 27fee32dbb0f0e4154e93859f1d733dfb10fa676 | 2021-12-14T20:08:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Reuters_model_v1 | 3 | null | transformers | 20,614 | Entry not found |
Declan/WallStreetJournal_model_v2 | 7c67b52f4b474928f0c17f3efbe1ce76bcc9bc59 | 2021-12-17T23:19:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/WallStreetJournal_model_v2 | 3 | null | transformers | 20,615 | Entry not found |
Declan/WallStreetJournal_model_v3 | 5096400b17e35dad4c4daa8c8d8cc25983ca9a5f | 2021-12-18T00:14:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/WallStreetJournal_model_v3 | 3 | null | transformers | 20,616 | Entry not found |
DeltaHub/adapter_t5-3b_cola | ab879b0c2c749255fcf5c864810d82b483a1b3f6 | 2022-02-09T13:46:34.000Z | [
"pytorch",
"transformers"
] | null | false | DeltaHub | null | DeltaHub/adapter_t5-3b_cola | 3 | null | transformers | 20,617 | Entry not found |
DeskDown/MarianMixFT_en-hi | 9e0244109a2836b1b4b58e78d08e93405a6c8da0 | 2022-01-14T23:57:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | DeskDown | null | DeskDown/MarianMixFT_en-hi | 3 | null | transformers | 20,618 | Entry not found |
DeskDown/MarianMixFT_en-vi | 4ea0a16201d5cd31fb0783b3d43164188e3d9f71 | 2022-01-14T22:59:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | DeskDown | null | DeskDown/MarianMixFT_en-vi | 3 | null | transformers | 20,619 | Entry not found |
Dipl0/test_paraphrase_fr | 58c93d2515b9c88705b98d8e73195094da8b1b37 | 2021-08-29T16:12:18.000Z | [
"pytorch"
] | null | false | Dipl0 | null | Dipl0/test_paraphrase_fr | 3 | null | null | 20,620 | Entry not found |
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 | 897581648c0cbd56d18c8b28f8aa5088aecd5935 | 2022-03-24T11:56:50.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hsb",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 | 3 | null | transformers | 20,621 | ---
language:
- hsb
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hsb
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-hsb-v3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hsb
metrics:
- name: Test WER
type: wer
value: 0.4763681592039801
- name: Test CER
type: cer
value: 0.11194945177476305
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hsb
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hsb-v3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6549
- Wer: 0.4827
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Upper Sorbian (hsb) language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00045
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
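These settings correspond roughly to the `transformers.TrainingArguments` sketch below. This is only an illustration of how the listed hyperparameters map onto the Trainer API; the actual training script is not part of this card, and the output directory is a placeholder.
```python
import torch
from transformers import TrainingArguments

# Hedged sketch: the hyperparameters listed above expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hsb-v3",  # placeholder output path
    learning_rate=4.5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size: 32
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=torch.cuda.is_available(),  # Native AMP mixed precision when a GPU is present
)
```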
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.8951 | 3.23 | 100 | 3.6396 | 1.0 |
| 3.314 | 6.45 | 200 | 3.2331 | 1.0 |
| 3.1931 | 9.68 | 300 | 3.0947 | 0.9906 |
| 1.7079 | 12.9 | 400 | 0.8865 | 0.8499 |
| 0.6859 | 16.13 | 500 | 0.7994 | 0.7529 |
| 0.4804 | 19.35 | 600 | 0.7783 | 0.7069 |
| 0.3506 | 22.58 | 700 | 0.6904 | 0.6321 |
| 0.2695 | 25.81 | 800 | 0.6519 | 0.5926 |
| 0.222 | 29.03 | 900 | 0.7041 | 0.5720 |
| 0.1828 | 32.26 | 1000 | 0.6608 | 0.5513 |
| 0.1474 | 35.48 | 1100 | 0.7129 | 0.5319 |
| 0.1269 | 38.71 | 1200 | 0.6664 | 0.5056 |
| 0.1077 | 41.94 | 1300 | 0.6712 | 0.4942 |
| 0.0934 | 45.16 | 1400 | 0.6467 | 0.4879 |
| 0.0819 | 48.39 | 1500 | 0.6549 | 0.4827 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 | 24ad8b97d33c46f2efce28177aa7a384b6470c51 | 2022-03-24T11:54:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 | 3 | null | transformers | 20,622 | ---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- mr
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-mr-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mr
metrics:
- name: Test WER
type: wer
value: 0.49378259125551544
- name: Test CER
type: cer
value: 0.12470799640610962
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mr
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mr-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8729
- Wer: 0.4942
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset mozilla-foundation/common_voice_8_0 --config mr --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset speech-recognition-community-v2/dev_data --config mr --split validation --chunk_length_s 10 --stride_length_s 1
Note: Marathi language not found in speech-recognition-community-v2/dev_data!
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000333
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.4934 | 9.09 | 200 | 3.7326 | 1.0 |
| 3.4234 | 18.18 | 400 | 3.3383 | 0.9996 |
| 3.2628 | 27.27 | 600 | 2.7482 | 0.9992 |
| 1.7743 | 36.36 | 800 | 0.6755 | 0.6787 |
| 1.0346 | 45.45 | 1000 | 0.6067 | 0.6193 |
| 0.8137 | 54.55 | 1200 | 0.6228 | 0.5612 |
| 0.6637 | 63.64 | 1400 | 0.5976 | 0.5495 |
| 0.5563 | 72.73 | 1600 | 0.7009 | 0.5383 |
| 0.4844 | 81.82 | 1800 | 0.6662 | 0.5287 |
| 0.4057 | 90.91 | 2000 | 0.6911 | 0.5303 |
| 0.3582 | 100.0 | 2200 | 0.7207 | 0.5327 |
| 0.3163 | 109.09 | 2400 | 0.7107 | 0.5118 |
| 0.2761 | 118.18 | 2600 | 0.7538 | 0.5118 |
| 0.2415 | 127.27 | 2800 | 0.7850 | 0.5178 |
| 0.2127 | 136.36 | 3000 | 0.8016 | 0.5034 |
| 0.1873 | 145.45 | 3200 | 0.8302 | 0.5187 |
| 0.1723 | 154.55 | 3400 | 0.9085 | 0.5223 |
| 0.1498 | 163.64 | 3600 | 0.8396 | 0.5126 |
| 0.1425 | 172.73 | 3800 | 0.8776 | 0.5094 |
| 0.1258 | 181.82 | 4000 | 0.8651 | 0.5014 |
| 0.117 | 190.91 | 4200 | 0.8772 | 0.4970 |
| 0.1093 | 200.0 | 4400 | 0.8729 | 0.4942 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 | 255273d160eb27ccf5114a81d72916e3588d060b | 2022-03-24T11:57:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pa-IN",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 | 3 | null | transformers | 20,623 | ---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- pa-IN
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-pa-IN-r5
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pa-IN
metrics:
- name: Test WER
type: wer
value: 0.4186593492747942
- name: Test CER
type: cer
value: 0.13301322550753938
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pa-IN
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-pa-IN-r5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8881
- Wer: 0.4175
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
Punjabi language isn't available in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 10.695 | 18.52 | 500 | 3.5681 | 1.0 |
| 3.2718 | 37.04 | 1000 | 2.3081 | 0.9643 |
| 0.8727 | 55.56 | 1500 | 0.7227 | 0.5147 |
| 0.3349 | 74.07 | 2000 | 0.7498 | 0.4959 |
| 0.2134 | 92.59 | 2500 | 0.7779 | 0.4720 |
| 0.1445 | 111.11 | 3000 | 0.8120 | 0.4594 |
| 0.1057 | 129.63 | 3500 | 0.8225 | 0.4610 |
| 0.0826 | 148.15 | 4000 | 0.8307 | 0.4351 |
| 0.0639 | 166.67 | 4500 | 0.8967 | 0.4316 |
| 0.0528 | 185.19 | 5000 | 0.8875 | 0.4238 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
DrishtiSharma/wav2vec2-xls-r-myv-a1 | 59ffc43f72eb1b17da36b326b42937154805dc05 | 2022-03-24T11:57:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"myv",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-xls-r-myv-a1 | 3 | null | transformers | 20,624 | ---
language:
- myv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- myv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-myv-a1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: myv
metrics:
- name: Test WER
type: wer
value: 0.6514672686230248
- name: Test CER
type: cer
value: 0.17226131905088124
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: vot
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-myv-a1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0356
- Wer: 0.6524
### Evaluation Commands
**1. To evaluate on mozilla-foundation/common_voice_8_0 with test split**
python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
**2. To evaluate on speech-recognition-community-v2/dev_data**
Erzya language not found in speech-recognition-community-v2/dev_data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 5.649 | 9.62 | 500 | 3.0038 | 1.0 |
| 1.6272 | 19.23 | 1000 | 0.7362 | 0.7819 |
| 1.1354 | 28.85 | 1500 | 0.6410 | 0.7111 |
| 1.0424 | 38.46 | 2000 | 0.6907 | 0.7431 |
| 0.9293 | 48.08 | 2500 | 0.7249 | 0.7102 |
| 0.8246 | 57.69 | 3000 | 0.7422 | 0.6966 |
| 0.7837 | 67.31 | 3500 | 0.7413 | 0.6813 |
| 0.7147 | 76.92 | 4000 | 0.7873 | 0.6930 |
| 0.6276 | 86.54 | 4500 | 0.8038 | 0.6677 |
| 0.6041 | 96.15 | 5000 | 0.8240 | 0.6831 |
| 0.5336 | 105.77 | 5500 | 0.8748 | 0.6749 |
| 0.4705 | 115.38 | 6000 | 0.9006 | 0.6497 |
| 0.43 | 125.0 | 6500 | 0.8954 | 0.6551 |
| 0.3859 | 134.62 | 7000 | 0.9074 | 0.6614 |
| 0.3342 | 144.23 | 7500 | 0.9693 | 0.6560 |
| 0.3155 | 153.85 | 8000 | 1.0073 | 0.6691 |
| 0.2673 | 163.46 | 8500 | 1.0170 | 0.6632 |
| 0.2409 | 173.08 | 9000 | 1.0304 | 0.6709 |
| 0.2189 | 182.69 | 9500 | 0.9965 | 0.6546 |
| 0.1973 | 192.31 | 10000 | 1.0360 | 0.6551 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Evaluation Command
!python eval.py \
--model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 \
--dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs |
DrishtiSharma/wav2vec2-xls-r-pa-IN-a1 | f12cae881030694a5e0fda141fde53ef17020c59 | 2022-02-05T21:58:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pa-IN",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-xls-r-pa-IN-a1 | 3 | null | transformers | 20,625 | ---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-pa-IN-a1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1508
- Wer: 0.4908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5841 | 9.26 | 500 | 3.2514 | 0.9941 |
| 0.3992 | 18.52 | 1000 | 0.8790 | 0.6107 |
| 0.2409 | 27.78 | 1500 | 1.0012 | 0.6366 |
| 0.1447 | 37.04 | 2000 | 1.0167 | 0.6276 |
| 0.1109 | 46.3 | 2500 | 1.0638 | 0.5653 |
| 0.0797 | 55.56 | 3000 | 1.1447 | 0.5715 |
| 0.0636 | 64.81 | 3500 | 1.1503 | 0.5316 |
| 0.0466 | 74.07 | 4000 | 1.2227 | 0.5386 |
| 0.0372 | 83.33 | 4500 | 1.1214 | 0.5225 |
| 0.0239 | 92.59 | 5000 | 1.1375 | 0.4998 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Duugu/alexia-bot-test | 2f97f44ffa4de52075270de0221f9517492d3d35 | 2021-09-19T13:18:43.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | Duugu | null | Duugu/alexia-bot-test | 3 | null | transformers | 20,626 |
# Alexia Bot Testing |
EEE/DialoGPT-small-yoda | a3ecf50119d57d94b0d93dd18dbacd6474da0a75 | 2021-09-22T11:07:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | EEE | null | EEE/DialoGPT-small-yoda | 3 | null | transformers | 20,627 | ---
tags:
- conversational
---
# Yoda DialoGPT Model |
Ebtihal/AraBertMo_base_V4 | efdc62dbb58b3e4e04c96df0bcdb74b727f251e7 | 2022-03-15T19:13:24.000Z | [
"pytorch",
"bert",
"fill-mask",
"ar",
"dataset:OSCAR",
"transformers",
"Fill-Mask",
"autotrain_compatible"
] | fill-mask | false | Ebtihal | null | Ebtihal/AraBertMo_base_V4 | 3 | null | transformers | 20,628 | ---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
The `AraBertMo_base_V4` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 40032| 4 | 64 | 2500 | 5h 10m 20s | 7.6544 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. You can then use it directly by initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V4")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V4")
```
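For a quick qualitative check, the model can also be used through the `fill-mask` pipeline. The snippet below is a minimal sketch that reuses one of the widget sentences from the metadata above; the resulting predictions are not reported in this card.
```python
from transformers import pipeline

# Minimal sketch: mask filling with one of the widget examples listed in the card metadata.
unmasker = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V4")
print(unmasker("السلام عليكم ورحمة[MASK] وبركاتة"))
```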
## This model was built for master's degree research at the following organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
Ebtihal/AraDiaBERT_V3 | 7a609a1f38bfec4571bd589cb522f3753e5df51f | 2021-10-30T08:44:38.000Z | [
"pytorch",
"bert",
"text-generation",
"transformers"
] | text-generation | false | Ebtihal | null | Ebtihal/AraDiaBERT_V3 | 3 | null | transformers | 20,629 | Entry not found |
Ebtihal/Aurora | 1ec75513bf9344e280091e9667abe575e083628a | 2021-07-11T23:53:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Ebtihal | null | Ebtihal/Aurora | 3 | null | transformers | 20,630 | Entry not found |
Einmalumdiewelt/PegasusXSUM_GNAD | 8e7dc2eeebfb56524a9f007233bb96114a6c4fff | 2022-01-12T21:51:11.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Einmalumdiewelt | null | Einmalumdiewelt/PegasusXSUM_GNAD | 3 | null | transformers | 20,631 | Entry not found |
Eunooeh/test | 6e76518b86568aed96a61927f30e2b83f9d0739c | 2021-09-30T06:36:14.000Z | [
"pytorch",
"bert",
"pretraining",
"transformers"
] | null | false | Eunooeh | null | Eunooeh/test | 3 | null | transformers | 20,632 | Entry not found |
Fidlobabovic/beta-kvantorium-small | acfb6ee8589591d6f5d2937137636b2691afbe7c | 2021-05-20T11:50:54.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Fidlobabovic | null | Fidlobabovic/beta-kvantorium-small | 3 | null | transformers | 20,633 | Beta-kvantorium-small is a RoBERTa transformers model pretrained in a self-supervised fashion on a large corpus of Russian Kvantorium data. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the following objective:
Automate communication with the Kvantorium community and mentors.
https://sun9-49.userapi.com/impg/CIJZKA_r9xoLYd47Lvjv_8jyu6epadPyergP3Q/zw3J_E6IlJo.jpg?size=546x385&quality=96&sign=139fa29b864d36958feab4731cc684dc&type=album |
Filosofas/DialoGPT-medium-PALPATINE2 | 33bea9fb02eb6aa26d0943c20817d5bcbe2f095e | 2022-01-16T16:06:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Filosofas | null | Filosofas/DialoGPT-medium-PALPATINE2 | 3 | null | transformers | 20,634 | ---
tags:
- conversational
---
# PALPATINE DialoGPT Model |
Firat/albert-base-v2-finetuned-squad | b069240bc30bdd0d6d2126fa5274d75d8a4e1f84 | 2022-01-11T09:15:49.000Z | [
"pytorch",
"albert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Firat | null | Firat/albert-base-v2-finetuned-squad | 3 | null | transformers | 20,635 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8584 | 1.0 | 5540 | 0.9056 |
| 0.6473 | 2.0 | 11080 | 0.8975 |
| 0.4801 | 3.0 | 16620 | 0.9901 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Firat/roberta-base-finetuned-squad | 19505872759531fd835455069fa3ae50175907dd | 2022-01-09T22:12:48.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | Firat | null | Firat/roberta-base-finetuned-squad | 3 | null | transformers | 20,636 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8926 | 1.0 | 5536 | 0.8694 |
| 0.6821 | 2.0 | 11072 | 0.8428 |
| 0.5335 | 3.0 | 16608 | 0.8953 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
For/sheldonbot | 8e6de275bbf08d6e8ff7400adb97d9eb2eef21bf | 2021-06-02T15:54:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | For | null | For/sheldonbot | 3 | null | transformers | 20,637 | ---
tags:
- conversational
---
#
|
FranzStrauss/ponet-base-uncased | 3b4adf28ad56c7ac6e866bbf75157d8e09803208 | 2021-12-31T17:14:32.000Z | [
"pytorch",
"ponet",
"transformers"
] | null | false | FranzStrauss | null | FranzStrauss/ponet-base-uncased | 3 | null | transformers | 20,638 | Entry not found |
GKLMIP/bert-khmer-small-uncased-tokenized | c942251fb0d99c8332584818b22d5d592f717cee | 2021-07-31T04:53:16.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/bert-khmer-small-uncased-tokenized | 3 | null | transformers | 20,639 | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` |
GKLMIP/electra-khmer-base-uncased-tokenized | 2b962eb40b590036fde783fad4bb367b89d9d0fd | 2021-07-31T05:22:04.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/electra-khmer-base-uncased-tokenized | 3 | null | transformers | 20,640 | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` |
GKLMIP/electra-laos-base-uncased | acaee663087c8b01837470a659deaa3c61ddabfa | 2021-07-31T06:21:25.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | GKLMIP | null | GKLMIP/electra-laos-base-uncased | 3 | null | transformers | 20,641 | The usage of the tokenizer for Lao is described at https://github.com/GKLMIP/Pretrained-Models-For-Laos.
|
GKLMIP/roberta-tagalog-base | 1a7c598beadeb531a960b9168fffd6eaf4d02a34 | 2021-07-31T02:43:47.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/roberta-tagalog-base | 3 | null | transformers | 20,642 | https://github.com/GKLMIP/Pretrained-Models-For-Tagalog
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Fu, Yingwen
and Lin, Xiaotian
and Lin, Nankai",
title="Pre-trained Language models for Tagalog with Multi-source data",
booktitle="Natural Language Processing and Chinese Computing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` |
GPL/fiqa-msmarco-distilbert-gpl | c1f52f88093115d7246ff6cf79d9308b4bca549b | 2022-04-19T15:17:19.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/fiqa-msmarco-distilbert-gpl | 3 | null | sentence-transformers | 20,643 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
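Beyond raw embeddings, the model can be used directly for semantic search. The sketch below is illustrative only: the model id is the repository this card belongs to (assumed to be the intended value of `{MODEL_NAME}`), and the query and passages are made-up examples.
```python
from sentence_transformers import SentenceTransformer, util

# Hedged sketch: semantic search via cosine similarity between a query and candidate passages.
model = SentenceTransformer("GPL/fiqa-msmarco-distilbert-gpl")  # assumed {MODEL_NAME}
query_embedding = model.encode("How do dividends affect a stock's price?", convert_to_tensor=True)
passage_embeddings = model.encode(
    [
        "Dividends are paid to shareholders out of a company's earnings.",
        "The weather forecast predicts rain tomorrow.",
    ],
    convert_to_tensor=True,
)
print(util.cos_sim(query_embedding, passage_embeddings))
```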
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/robust04-msmarco-distilbert-gpl | c99e7404b2982af4df2640edd83bab5bd576743e | 2022-04-19T15:19:47.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/robust04-msmarco-distilbert-gpl | 3 | null | sentence-transformers | 20,644 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
GPL/scifact-msmarco-distilbert-gpl | 1ddc5b4a5bc7c2f12b74b23d2fab95f77e9be84c | 2022-04-19T15:17:48.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/scifact-msmarco-distilbert-gpl | 3 | 1 | sentence-transformers | 20,645 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Galaxy/DialoGPT-small-hermoine | e24fb272a1335c49b84bae1c63ef2526021038fe | 2021-08-28T07:25:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Galaxy | null | Galaxy/DialoGPT-small-hermoine | 3 | null | transformers | 20,646 | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Gantenbein/ADDI-DE-XLM-R | 33a77f7de1b1ec06bc45b3e1b0c6eea815eddf14 | 2021-06-01T14:31:33.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Gantenbein | null | Gantenbein/ADDI-DE-XLM-R | 3 | null | transformers | 20,647 | Entry not found |
Gantenbein/ADDI-FI-RoBERTa | c6bfbcd92d62f52e40a343338ac687662b1ee48b | 2021-06-01T14:12:02.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Gantenbein | null | Gantenbein/ADDI-FI-RoBERTa | 3 | null | transformers | 20,648 | Entry not found |
Gastron/lp-initial-aed-short | f6bcbf440e8d5fddf6048873c1d86659797b1111 | 2021-12-03T10:00:50.000Z | [
"fi",
"speechbrain",
"automatic-speech-recognition",
"Attention",
"pytorch"
] | automatic-speech-recognition | false | Gastron | null | Gastron/lp-initial-aed-short | 3 | null | speechbrain | 20,649 | ---
language: "fi"
thumbnail:
tags:
- automatic-speech-recognition
- Attention
- pytorch
- speechbrain
metrics:
- wer
- cer
---
# CRDNN with Attention trained on LP
This is an initial model with a partly wrong configuration, provided just to show an initial example.
|
Geotrend/bert-base-da-cased | a68ad651af408fae9349a406d62b820891f2d6bf | 2021-05-18T18:49:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"da",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-da-cased | 3 | null | transformers | 20,650 | ---
language: da
datasets: wikipedia
license: apache-2.0
---
# bert-base-da-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-da-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-da-cased")
```
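Since the model is distributed as a masked language model, a quick way to sanity-check it is the `fill-mask` pipeline. The snippet below is a minimal sketch; the Danish example sentence is an illustration and not part of the original card.
```python
from transformers import pipeline

# Hedged sketch: mask filling with the Danish model; the sentence is an illustrative example.
unmasker = pipeline("fill-mask", model="Geotrend/bert-base-da-cased")
print(unmasker("København er hovedstaden i [MASK]."))
```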
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-es-it-cased | c0d369350481147c96f706d92be0e5700deb6c0b | 2021-05-18T19:10:03.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-es-it-cased | 3 | null | transformers | 20,651 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-es-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-fr-ar-cased | 2926cd5ced09e30f61c7d73b0d3f533698d18321 | 2021-05-18T19:14:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-ar-cased | 3 | null | transformers | 20,652 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-fr-es-pt-it-cased | f32fe24217a830ae9b070749f14a552a6de2da23 | 2021-05-18T19:23:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-es-pt-it-cased | 3 | null | transformers | 20,653 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-es-pt-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-pt-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-pt-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-nl-cased | ad460f8547b40117cb14f8aaaf471f7b51d03ade | 2021-05-18T19:39:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-nl-cased | 3 | null | transformers | 20,654 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-nl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-nl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-nl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-ro-cased | f1e6a3e2aaaf5dc1c52e1e56cf2c67f0ecded727 | 2021-05-18T19:43:58.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-ro-cased | 3 | null | transformers | 20,655 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-ro-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ro-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-lt-cased | 8c670dd895836aa0e09cf1c844a49144d41d242c | 2021-05-18T20:01:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"lt",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-lt-cased | 3 | null | transformers | 20,656 | ---
language: lt
datasets: wikipedia
license: apache-2.0
---
# bert-base-lt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-lt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-lt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-ar-cased | 37c654632d62be99defae7848e3dadfa6ac49e85 | 2021-08-16T14:07:17.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-ar-cased | 3 | null | transformers | 20,657 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-ar-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-ar-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-ar-cased")
```
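As a quick sanity check of the claim above, the following hedged sketch compares the hidden states of this reduced checkpoint with those of the original `distilbert-base-multilingual-cased` (the example sentence is arbitrary; positions can only be compared when both tokenizers produce the same pieces):
```python
import torch
from transformers import AutoTokenizer, AutoModel

sentence = "Geotrend provides smaller multilingual models."  # arbitrary example

small_tok = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-ar-cased")
small_model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-ar-cased")
full_tok = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
full_model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")

with torch.no_grad():
    out_small = small_model(**small_tok(sentence, return_tensors="pt")).last_hidden_state
    out_full = full_model(**full_tok(sentence, return_tensors="pt")).last_hidden_state

if out_small.shape == out_full.shape:
    # Same tokenization, so hidden states can be compared position by position.
    print("max abs difference:", (out_small - out_full).abs().max().item())
else:
    print("Tokenizations differ; choose a sentence fully covered by the reduced vocabulary.")
```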
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-el-cased | eb9a8c94e59a501e1baab2be7f7061bad330de10 | 2021-08-16T14:00:28.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-el-cased | 3 | null | transformers | 20,658 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-el-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-el-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-el-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-el-ru-cased | e92f396137d506cd25294cfc1d4b5941468b6112 | 2021-07-29T13:00:03.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-el-ru-cased | 3 | null | transformers | 20,659 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-el-ru-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-el-ru-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-el-ru-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-fr-de-cased | bfa19a29c7441c40c428fec1e06afe3aed1615ff | 2021-07-28T00:20:23.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-de-cased | 3 | null | transformers | 20,660 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-de-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-de-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-de-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-fr-es-cased | 349356d0687847abc5fcc2bcd6e6f69c9b10dcce | 2021-07-27T23:19:52.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-es-cased | 3 | null | transformers | 20,661 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-es-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-es-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-fr-lt-no-pl-cased | 0f9577d74de6f9aae5f8b704ea573904cc11418c | 2021-07-28T13:08:26.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-lt-no-pl-cased | 3 | null | transformers | 20,662 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-lt-no-pl-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-lt-no-pl-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-lt-no-pl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-fr-nl-ru-ar-cased | 3d70a3cd22fa81ab89cdffe92cac2b3f57a029f6 | 2021-07-28T15:56:18.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-nl-ru-ar-cased | 3 | null | transformers | 20,663 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-nl-ru-ar-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-nl-ru-ar-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-nl-ru-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-ro-cased | 50842e98d825f301fed5c996373e4789ff606570 | 2021-07-28T22:35:06.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"ro",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-ro-cased | 3 | null | transformers | 20,664 | ---
language: ro
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-ro-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ro-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-ro-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-sw-cased | eba526e9f94c737a208fd4389ba615f46229de08 | 2021-08-16T13:29:45.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"sw",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-sw-cased | 3 | null | transformers | 20,665 | ---
language: sw
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-sw-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-sw-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-sw-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-uk-cased | 096f70b5e5880d25f54781543e3187b086854253 | 2021-07-29T16:43:45.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"uk",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-uk-cased | 3 | 1 | transformers | 20,666 | ---
language: uk
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-uk-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-uk-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-uk-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
GusNicho/bert-base-cased-finetuned | cb701b1486822a8b695c90957df6523bad090cce | 2022-01-12T07:38:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GusNicho | null | GusNicho/bert-base-cased-finetuned | 3 | null | transformers | 20,667 | Entry not found |
HarrisDePerceptron/xls-r-1b-ur | 707424029d900f78cb666b82b05dabfe3ca296e3 | 2022-03-24T11:57:20.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | HarrisDePerceptron | null | HarrisDePerceptron/xls-r-1b-ur | 3 | null | transformers | 20,668 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ur
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ur
metrics:
- name: Test WER
type: wer
value: 44.13
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9613
- Wer: 0.5376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3118 | 1.96 | 100 | 2.9093 | 0.9982 |
| 2.2071 | 3.92 | 200 | 1.1737 | 0.7779 |
| 1.6098 | 5.88 | 300 | 0.9984 | 0.7015 |
| 1.4333 | 7.84 | 400 | 0.9800 | 0.6705 |
| 1.2859 | 9.8 | 500 | 0.9582 | 0.6487 |
| 1.2073 | 11.76 | 600 | 0.8841 | 0.6077 |
| 1.1417 | 13.73 | 700 | 0.9118 | 0.6343 |
| 1.0988 | 15.69 | 800 | 0.9217 | 0.6196 |
| 1.0279 | 17.65 | 900 | 0.9165 | 0.5867 |
| 0.9765 | 19.61 | 1000 | 0.9306 | 0.5978 |
| 0.9161 | 21.57 | 1100 | 0.9305 | 0.5768 |
| 0.8395 | 23.53 | 1200 | 0.9828 | 0.5819 |
| 0.8306 | 25.49 | 1300 | 0.9397 | 0.5760 |
| 0.7819 | 27.45 | 1400 | 0.9544 | 0.5742 |
| 0.7509 | 29.41 | 1500 | 0.9278 | 0.5690 |
| 0.7218 | 31.37 | 1600 | 0.9003 | 0.5587 |
| 0.6725 | 33.33 | 1700 | 0.9659 | 0.5554 |
| 0.6287 | 35.29 | 1800 | 0.9522 | 0.5561 |
| 0.6077 | 37.25 | 1900 | 0.9154 | 0.5465 |
| 0.5873 | 39.22 | 2000 | 0.9331 | 0.5469 |
| 0.5621 | 41.18 | 2100 | 0.9335 | 0.5491 |
| 0.5168 | 43.14 | 2200 | 0.9632 | 0.5458 |
| 0.5114 | 45.1 | 2300 | 0.9349 | 0.5387 |
| 0.4986 | 47.06 | 2400 | 0.9364 | 0.5380 |
| 0.4761 | 49.02 | 2500 | 0.9584 | 0.5391 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
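A minimal inference sketch (not part of the original card; the audio path is a placeholder and the model expects 16 kHz mono input):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "HarrisDePerceptron/xls-r-1b-ur"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder file; resampled to the 16 kHz rate the model was trained on.
speech, _ = librosa.load("example_ur.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```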
|
HarrisDePerceptron/xls-r-300m-ur-cv7 | 2ded961021fa15ac18847fc0617416e8ba1207c2 | 2022-02-05T11:21:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | HarrisDePerceptron | null | HarrisDePerceptron/xls-r-300m-ur-cv7 | 3 | null | transformers | 20,669 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2924
- Wer: 0.7201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.2783 | 4.17 | 100 | 4.6409 | 1.0 |
| 3.5578 | 8.33 | 200 | 3.1649 | 1.0 |
| 3.1279 | 12.5 | 300 | 3.0335 | 1.0 |
| 2.9944 | 16.67 | 400 | 2.9526 | 0.9983 |
| 2.9275 | 20.83 | 500 | 2.9291 | 1.0009 |
| 2.8077 | 25.0 | 600 | 2.5633 | 0.9895 |
| 2.4438 | 29.17 | 700 | 1.9045 | 0.9564 |
| 1.9659 | 33.33 | 800 | 1.4114 | 0.7960 |
| 1.7092 | 37.5 | 900 | 1.2584 | 0.7637 |
| 1.517 | 41.67 | 1000 | 1.2040 | 0.7507 |
| 1.3966 | 45.83 | 1100 | 1.1273 | 0.7463 |
| 1.3197 | 50.0 | 1200 | 1.1054 | 0.6957 |
| 1.2476 | 54.17 | 1300 | 1.1035 | 0.7001 |
| 1.1796 | 58.33 | 1400 | 1.0890 | 0.7097 |
| 1.1237 | 62.5 | 1500 | 1.0883 | 0.7167 |
| 1.0777 | 66.67 | 1600 | 1.1067 | 0.7219 |
| 1.0051 | 70.83 | 1700 | 1.1115 | 0.7236 |
| 0.9521 | 75.0 | 1800 | 1.0867 | 0.7132 |
| 0.9147 | 79.17 | 1900 | 1.0852 | 0.7210 |
| 0.8798 | 83.33 | 2000 | 1.1411 | 0.7097 |
| 0.8317 | 87.5 | 2100 | 1.1634 | 0.7018 |
| 0.7946 | 91.67 | 2200 | 1.1621 | 0.7201 |
| 0.7594 | 95.83 | 2300 | 1.1482 | 0.7036 |
| 0.729 | 100.0 | 2400 | 1.1493 | 0.7062 |
| 0.7055 | 104.17 | 2500 | 1.1726 | 0.6931 |
| 0.6622 | 108.33 | 2600 | 1.1938 | 0.7001 |
| 0.6583 | 112.5 | 2700 | 1.1832 | 0.7149 |
| 0.6299 | 116.67 | 2800 | 1.1996 | 0.7175 |
| 0.5903 | 120.83 | 2900 | 1.1986 | 0.7132 |
| 0.5816 | 125.0 | 3000 | 1.1909 | 0.7010 |
| 0.5583 | 129.17 | 3100 | 1.2079 | 0.6870 |
| 0.5392 | 133.33 | 3200 | 1.2109 | 0.7228 |
| 0.5412 | 137.5 | 3300 | 1.2353 | 0.7245 |
| 0.5136 | 141.67 | 3400 | 1.2390 | 0.7254 |
| 0.5007 | 145.83 | 3500 | 1.2273 | 0.7123 |
| 0.4883 | 150.0 | 3600 | 1.2773 | 0.7289 |
| 0.4835 | 154.17 | 3700 | 1.2678 | 0.7289 |
| 0.4568 | 158.33 | 3800 | 1.2592 | 0.7350 |
| 0.4525 | 162.5 | 3900 | 1.2705 | 0.7254 |
| 0.4379 | 166.67 | 4000 | 1.2717 | 0.7306 |
| 0.4198 | 170.83 | 4100 | 1.2618 | 0.7219 |
| 0.4216 | 175.0 | 4200 | 1.2909 | 0.7158 |
| 0.4305 | 179.17 | 4300 | 1.2808 | 0.7167 |
| 0.399 | 183.33 | 4400 | 1.2750 | 0.7193 |
| 0.3937 | 187.5 | 4500 | 1.2719 | 0.7149 |
| 0.3905 | 191.67 | 4600 | 1.2816 | 0.7158 |
| 0.3892 | 195.83 | 4700 | 1.2951 | 0.7210 |
| 0.3932 | 200.0 | 4800 | 1.2924 | 0.7201 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
HarrisDePerceptron/xlsr-large-53-ur | 6091035715b54c219924e845a82cf48491a3b524 | 2022-03-24T11:54:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | HarrisDePerceptron | null | HarrisDePerceptron/xlsr-large-53-ur | 3 | null | transformers | 20,670 | ---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ur
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: ''
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ur
metrics:
- name: Test WER
type: wer
value: 62.47
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8888
- Wer: 0.6642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.1224 | 1.96 | 100 | 3.5429 | 1.0 |
| 3.2411 | 3.92 | 200 | 3.1786 | 1.0 |
| 3.1283 | 5.88 | 300 | 3.0571 | 1.0 |
| 3.0044 | 7.84 | 400 | 2.9560 | 0.9996 |
| 2.9388 | 9.8 | 500 | 2.8977 | 1.0011 |
| 2.86 | 11.76 | 600 | 2.6944 | 0.9952 |
| 2.5538 | 13.73 | 700 | 2.0967 | 0.9435 |
| 2.1214 | 15.69 | 800 | 1.4816 | 0.8428 |
| 1.8136 | 17.65 | 900 | 1.2459 | 0.8048 |
| 1.6795 | 19.61 | 1000 | 1.1232 | 0.7649 |
| 1.5571 | 21.57 | 1100 | 1.0510 | 0.7432 |
| 1.4975 | 23.53 | 1200 | 1.0298 | 0.6963 |
| 1.4485 | 25.49 | 1300 | 0.9775 | 0.7074 |
| 1.3924 | 27.45 | 1400 | 0.9798 | 0.6956 |
| 1.3604 | 29.41 | 1500 | 0.9345 | 0.7092 |
| 1.3224 | 31.37 | 1600 | 0.9535 | 0.6830 |
| 1.2816 | 33.33 | 1700 | 0.9178 | 0.6679 |
| 1.2623 | 35.29 | 1800 | 0.9249 | 0.6679 |
| 1.2421 | 37.25 | 1900 | 0.9124 | 0.6734 |
| 1.2208 | 39.22 | 2000 | 0.8962 | 0.6664 |
| 1.2145 | 41.18 | 2100 | 0.8903 | 0.6734 |
| 1.1888 | 43.14 | 2200 | 0.8883 | 0.6708 |
| 1.1933 | 45.1 | 2300 | 0.8928 | 0.6723 |
| 1.1838 | 47.06 | 2400 | 0.8868 | 0.6679 |
| 1.1634 | 49.02 | 2500 | 0.8886 | 0.6657 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Harveenchadha/vakyansh-wav2vec2-nepali-nem-130 | 002c6200bcdacd8558e696583361f47e8154df67 | 2021-08-02T18:55:50.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-nepali-nem-130 | 3 | null | transformers | 20,671 | Entry not found |
Helsinki-NLP/opus-mt-bi-sv | fa443f611486bd359dee28a2ef896a03ca81e515 | 2021-09-09T21:27:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bi",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bi-sv | 3 | null | transformers | 20,672 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bi-sv
* source languages: bi
* target languages: sv
* OPUS readme: [bi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.sv | 22.7 | 0.403 |
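A hedged usage sketch with the standard MarianMT classes in Transformers (the input string is a placeholder for any Bislama sentence):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bi-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Replace this with a Bislama sentence."]  # placeholder input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```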
|
Helsinki-NLP/opus-mt-csg-es | 9742b7a5ed07cb69c4051567686b2e1ace50b061 | 2021-09-09T21:29:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"csg",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-csg-es | 3 | null | transformers | 20,673 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-csg-es
* source languages: csg
* target languages: es
* OPUS readme: [csg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/csg-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/csg-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/csg-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/csg-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.csg.es | 93.1 | 0.952 |
|
Helsinki-NLP/opus-mt-de-bcl | 628737ef8907e7d2db7989660f413420cfad41f5 | 2021-09-09T21:30:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"bcl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-bcl | 3 | null | transformers | 20,674 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-bcl
* source languages: de
* target languages: bcl
* OPUS readme: [de-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bcl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.bcl | 34.6 | 0.563 |
|
Helsinki-NLP/opus-mt-de-eu | 20b23b953fb829fa7aa146ed7b8026ec476a7ba3 | 2021-01-18T07:59:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"eu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-eu | 3 | 1 | transformers | 20,675 | ---
language:
- de
- eu
tags:
- translation
license: apache-2.0
---
### deu-eus
* source group: German
* target group: Basque
* OPUS readme: [deu-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-eus/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.eus | 31.8 | 0.574 |
### System Info:
- hf_name: deu-eus
- source_languages: deu
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'eu']
- src_constituents: {'deu'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-eus/opus-2020-06-16.test.txt
- src_alpha3: deu
- tgt_alpha3: eus
- short_pair: de-eu
- chrF2_score: 0.574
- bleu: 31.8
- brevity_penalty: 0.9209999999999999
- ref_len: 2829.0
- src_name: German
- tgt_name: Basque
- train_date: 2020-06-16
- src_alpha2: de
- tgt_alpha2: eu
- prefer_old: False
- long_pair: deu-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-el-ar | 5d93e0361f1a6252e203323ad8eb434f7784d3cd | 2021-01-18T08:03:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"el",
"ar",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-el-ar | 3 | null | transformers | 20,676 | ---
language:
- el
- ar
tags:
- translation
license: apache-2.0
---
### ell-ara
* source group: Modern Greek (1453-)
* target group: Arabic
* OPUS readme: [ell-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md)
* model: transformer
* source language(s): ell
* target language(s): ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.eval.txt)
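Because this checkpoint has several target variants (`ara`, `arz`), a usage sketch with the MarianMT classes must prepend the target-language token, e.g. `>>ara<<` for Standard Arabic (sketch only, not from the original card):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-el-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading >>ara<< token selects Standard Arabic as the target variant.
src_text = [">>ara<< Γεια σου κόσμε."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```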
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.ara | 21.9 | 0.485 |
### System Info:
- hf_name: ell-ara
- source_languages: ell
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'ar']
- src_constituents: {'ell'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt
- src_alpha3: ell
- tgt_alpha3: ara
- short_pair: el-ar
- chrF2_score: 0.485
- bleu: 21.9
- brevity_penalty: 0.972
- ref_len: 1686.0
- src_name: Modern Greek (1453-)
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: el
- tgt_alpha2: ar
- prefer_old: False
- long_pair: ell-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-cpf | 60bc6ee533bef83beea6cf16b4aa7e72fbe4fe46 | 2021-01-18T08:06:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ht",
"cpf",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-cpf | 3 | null | transformers | 20,677 | ---
language:
- en
- ht
- cpf
tags:
- translation
license: apache-2.0
---
### eng-cpf
* source group: English
* target group: Creoles and pidgins, French‑based
* OPUS readme: [eng-cpf](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpf/README.md)
* model: transformer
* source language(s): eng
* target language(s): gcf_Latn hat mfe
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-gcf.eng.gcf | 6.2 | 0.262 |
| Tatoeba-test.eng-hat.eng.hat | 25.7 | 0.451 |
| Tatoeba-test.eng-mfe.eng.mfe | 80.1 | 0.900 |
| Tatoeba-test.eng.multi | 15.9 | 0.354 |
### System Info:
- hf_name: eng-cpf
- source_languages: eng
- target_languages: cpf
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpf/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ht', 'cpf']
- src_constituents: {'eng'}
- tgt_constituents: {'gcf_Latn', 'hat', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: cpf
- short_pair: en-cpf
- chrF2_score: 0.354
- bleu: 15.9
- brevity_penalty: 1.0
- ref_len: 1012.0
- src_name: English
- tgt_name: Creoles and pidgins, French‑based
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: cpf
- prefer_old: False
- long_pair: eng-cpf
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-sit | 98e54cb04296640b18d593f1e883afbf75e1f8b7 | 2021-01-18T08:15:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sit",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sit | 3 | null | transformers | 20,678 | ---
language:
- en
- sit
tags:
- translation
license: apache-2.0
---
### eng-sit
* source group: English
* target group: Sino-Tibetan languages
* OPUS readme: [eng-sit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sit/README.md)
* model: transformer
* source language(s): eng
* target language(s): bod brx brx_Latn cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans mya nan wuu yue yue_Hans yue_Hant zho zho_Hans zho_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.eval.txt)
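Since the target group covers many languages and scripts, the `>>id<<` token selects the output variant, e.g. `>>cmn_Hans<<` for Mandarin in Simplified script (a hedged sketch, not from the original card):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-sit"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# >>cmn_Hans<< requests Simplified-script Mandarin; >>mya<< would request Burmese, etc.
src_text = [">>cmn_Hans<< How are you today?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```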
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enzh-engzho.eng.zho | 23.5 | 0.217 |
| newstest2017-enzh-engzho.eng.zho | 23.2 | 0.223 |
| newstest2018-enzh-engzho.eng.zho | 25.0 | 0.230 |
| newstest2019-enzh-engzho.eng.zho | 20.2 | 0.225 |
| Tatoeba-test.eng-bod.eng.bod | 0.4 | 0.147 |
| Tatoeba-test.eng-brx.eng.brx | 0.5 | 0.012 |
| Tatoeba-test.eng.multi | 25.7 | 0.223 |
| Tatoeba-test.eng-mya.eng.mya | 0.2 | 0.222 |
| Tatoeba-test.eng-zho.eng.zho | 29.2 | 0.249 |
### System Info:
- hf_name: eng-sit
- source_languages: eng
- target_languages: sit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sit']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: sit
- short_pair: en-sit
- chrF2_score: 0.223
- bleu: 25.7
- brevity_penalty: 0.907
- ref_len: 109538.0
- src_name: English
- tgt_name: Sino-Tibetan languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: sit
- prefer_old: False
- long_pair: eng-sit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-eo-it | 7b68faa4a3fa2b61cee6b6440827d59cd09ba9c4 | 2021-01-18T08:20:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-it | 3 | null | transformers | 20,679 | ---
language:
- eo
- it
tags:
- translation
license: apache-2.0
---
### epo-ita
* source group: Esperanto
* target group: Italian
* OPUS readme: [epo-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ita | 23.8 | 0.465 |
### System Info:
- hf_name: epo-ita
- source_languages: epo
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'it']
- src_constituents: {'epo'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ita
- short_pair: eo-it
- chrF2_score: 0.465
- bleu: 23.8
- brevity_penalty: 0.9420000000000001
- ref_len: 67118.0
- src_name: Esperanto
- tgt_name: Italian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: it
- prefer_old: False
- long_pair: epo-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fi-st | d58c5159d40d662231c3d4de5318000f2f89ee34 | 2021-09-09T21:51:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"st",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-st | 3 | null | transformers | 20,680 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-st
* source languages: fi
* target languages: st
* OPUS readme: [fi-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-st/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-st/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-st/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.st | 37.1 | 0.570 |
|
Helsinki-NLP/opus-mt-fr-ca | e4851508d5f6cd47566501fb505f330b602098df | 2021-01-18T08:42:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ca",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ca | 3 | null | transformers | 20,681 | ---
language:
- fr
- ca
tags:
- translation
license: apache-2.0
---
### fra-cat
* source group: French
* target group: Catalan
* OPUS readme: [fra-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-cat/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.cat | 43.4 | 0.645 |
### System Info:
- hf_name: fra-cat
- source_languages: fra
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ca']
- src_constituents: {'fra'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.test.txt
- src_alpha3: fra
- tgt_alpha3: cat
- short_pair: fr-ca
- chrF2_score: 0.645
- bleu: 43.4
- brevity_penalty: 0.982
- ref_len: 5214.0
- src_name: French
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: fr
- tgt_alpha2: ca
- prefer_old: False
- long_pair: fra-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fr-pis | 4d07a2598586a8619afae6e84f6fdb2473deee70 | 2021-09-09T21:56:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"pis",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-pis | 3 | null | transformers | 20,682 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-pis
* source languages: fr
* target languages: pis
* OPUS readme: [fr-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pis | 29.0 | 0.486 |
|
Helsinki-NLP/opus-mt-he-sv | 030c52039da6bc829fa4a7a1965c2ee76a8a08ea | 2021-09-09T22:09:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-he-sv | 3 | null | transformers | 20,683 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-he-sv
* source languages: he
* target languages: sv
* OPUS readme: [he-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.he.sv | 28.9 | 0.493 |
|
Helsinki-NLP/opus-mt-he-uk | 437fa60238af6e191fe1773da6a441cc9bb2a5cc | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-he-uk | 3 | null | transformers | 20,684 | ---
language:
- he
- uk
tags:
- translation
license: apache-2.0
---
### heb-ukr
* source group: Hebrew
* target group: Ukrainian
* OPUS readme: [heb-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ukr/README.md)
* model: transformer-align
* source language(s): heb
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ukr | 35.4 | 0.552 |
### System Info:
- hf_name: heb-ukr
- source_languages: heb
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'uk']
- src_constituents: {'heb'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.test.txt
- src_alpha3: heb
- tgt_alpha3: ukr
- short_pair: he-uk
- chrF2_score: 0.552
- bleu: 35.4
- brevity_penalty: 0.971
- ref_len: 5163.0
- src_name: Hebrew
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: he
- tgt_alpha2: uk
- prefer_old: False
- long_pair: heb-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-id-sv | 066633bdaab93489b24820fff325f8a7a2eed437 | 2021-09-09T22:11:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"id",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-id-sv | 3 | null | transformers | 20,685 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-id-sv
* source languages: id
* target languages: sv
* OPUS readme: [id-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.id.sv | 32.7 | 0.527 |
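For reference only (not in the original card): a minimal pipeline sketch with an illustrative Indonesian sentence.

```python
from transformers import pipeline

# Indonesian -> Swedish
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-id-sv")
print(translator("Selamat pagi, apa kabar?"))
```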
|
Helsinki-NLP/opus-mt-ig-es | 0e814965834648c1c738941f9f6378731802ce08 | 2021-09-09T22:11:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ig",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ig-es | 3 | null | transformers | 20,686 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ig-es
* source languages: ig
* target languages: es
* OPUS readme: [ig-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.es | 24.6 | 0.420 |
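A hedged usage sketch (not part of the original card); the input is meant to be Igbo text, and the short greeting below is only a placeholder.

```python
from transformers import pipeline

# Igbo -> Spanish
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ig-es")
print(translator("Kedu ka ị mere?"))  # placeholder Igbo greeting; replace with real input
```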
|
Helsinki-NLP/opus-mt-iso-sv | 0be45fee82b8c1409eb72710943ed7419fcb8413 | 2021-09-10T13:52:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"iso",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-iso-sv | 3 | null | transformers | 20,687 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-iso-sv
* source languages: iso
* target languages: sv
* OPUS readme: [iso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.iso.sv | 25.0 | 0.430 |
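An untested usage sketch (not part of the original card); replace the placeholder string with actual Isoko source text.

```python
from transformers import pipeline

# Isoko -> Swedish
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-iso-sv")
print(translator("..."))  # pass Isoko source text here; the output is Swedish
```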
|
Helsinki-NLP/opus-mt-it-eo | 9de5660471622e2a92c8bfe880d2f509e68dc9ce | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-it-eo | 3 | null | transformers | 20,688 | ---
language:
- it
- eo
tags:
- translation
license: apache-2.0
---
### ita-epo
* source group: Italian
* target group: Esperanto
* OPUS readme: [ita-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-epo/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.epo | 28.2 | 0.500 |
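A minimal sketch of how the checkpoint can be loaded with the MarianMT classes (added here for convenience, not part of the original card); the Italian sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Buongiorno, come stai?"], return_tensors="pt", padding=True)  # Italian source
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))  # Esperanto output
```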
### System Info:
- hf_name: ita-epo
- source_languages: ita
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'eo']
- src_constituents: {'ita'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.test.txt
- src_alpha3: ita
- tgt_alpha3: epo
- short_pair: it-eo
- chrF2_score: 0.5
- bleu: 28.2
- brevity_penalty: 0.9570000000000001
- ref_len: 67846.0
- src_name: Italian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: it
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ita-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-kg-sv | abff720b6ffe340fe58b0bd5ddadf331a21c5cbe | 2021-09-10T13:53:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"kg",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-kg-sv | 3 | null | transformers | 20,689 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-kg-sv
* source languages: kg
* target languages: sv
* OPUS readme: [kg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kg-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kg.sv | 26.3 | 0.440 |
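An illustrative, untested snippet (not part of the original card); substitute real Kongo source text for the placeholder.

```python
from transformers import pipeline

# Kongo -> Swedish
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-kg-sv")
print(translator("..."))  # replace "..." with Kongo input; output is Swedish
```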
|
Helsinki-NLP/opus-mt-kqn-sv | 00c5a65f59fd8f29c692036c3bd9ac288493f2df | 2021-09-10T13:54:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"kqn",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-kqn-sv | 3 | null | transformers | 20,690 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-kqn-sv
* source languages: kqn
* target languages: sv
* OPUS readme: [kqn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kqn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kqn.sv | 23.3 | 0.409 |
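For reference, a hedged usage sketch (not part of the original card); the placeholder should be replaced with Kaonde source text.

```python
from transformers import pipeline

# Kaonde -> Swedish
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-kqn-sv")
print(translator("..."))  # replace "..." with Kaonde input; output is Swedish
```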
|
Helsinki-NLP/opus-mt-lt-eo | 9417c8430e515f180602674d410b49ed99f0a134 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lt",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lt-eo | 3 | null | transformers | 20,691 | ---
language:
- lt
- eo
tags:
- translation
license: apache-2.0
---
### lit-epo
* source group: Lithuanian
* target group: Esperanto
* OPUS readme: [lit-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md)
* model: transformer-align
* source language(s): lit
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lit.epo | 13.0 | 0.313 |
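A minimal pipeline sketch (not part of the original card); the Lithuanian sentence is only an example.

```python
from transformers import pipeline

# Lithuanian -> Esperanto
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-lt-eo")
print(translator("Labas rytas, kaip sekasi?"))
```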
### System Info:
- hf_name: lit-epo
- source_languages: lit
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'eo']
- src_constituents: {'lit'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt
- src_alpha3: lit
- tgt_alpha3: epo
- short_pair: lt-eo
- chrF2_score: 0.313
- bleu: 13.0
- brevity_penalty: 1.0
- ref_len: 70340.0
- src_name: Lithuanian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: lt
- tgt_alpha2: eo
- prefer_old: False
- long_pair: lit-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-lue-sv | fd9371fa847f74869b8504d543cc709c27890330 | 2021-09-10T13:56:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lue",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lue-sv | 3 | null | transformers | 20,692 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lue-sv
* source languages: lue
* target languages: sv
* OPUS readme: [lue-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.sv | 23.7 | 0.412 |
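An untested usage sketch (not part of the original card); replace the placeholder with Luvale source text.

```python
from transformers import pipeline

# Luvale -> Swedish
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-lue-sv")
print(translator("..."))  # pass Luvale input here; output is Swedish
```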
|
Helsinki-NLP/opus-mt-lus-fr | 3819159e58e4e0e7b7eaebe5a4b0084d5abf8ab7 | 2021-09-10T13:57:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lus",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lus-fr | 3 | null | transformers | 20,693 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lus-fr
* source languages: lus
* target languages: fr
* OPUS readme: [lus-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.fr | 25.5 | 0.423 |
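A hedged, untested example (not part of the original card); replace the placeholder with Mizo (Lushai) source text.

```python
from transformers import pipeline

# Mizo (Lushai) -> French
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-lus-fr")
print(translator("..."))  # pass Mizo input here; output is French
```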
|
Helsinki-NLP/opus-mt-mh-es | 848bbc2ceef41fbd20bd37775ee534aef26798c0 | 2021-09-10T13:57:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mh",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mh-es | 3 | null | transformers | 20,694 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mh-es
* source languages: mh
* target languages: es
* OPUS readme: [mh-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mh.es | 23.6 | 0.407 |
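For reference only (not part of the original card), a minimal pipeline sketch; substitute Marshallese text for the placeholder.

```python
from transformers import pipeline

# Marshallese -> Spanish
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-mh-es")
print(translator("..."))  # pass Marshallese input here; output is Spanish
```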
|
Helsinki-NLP/opus-mt-ms-it | 0e002c3293f7bc5e66124490870b36b1c3a5ba15 | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ms",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ms-it | 3 | null | transformers | 20,695 | ---
language:
- ms
- it
tags:
- translation
license: apache-2.0
---
### msa-ita
* source group: Malay (macrolanguage)
* target group: Italian
* OPUS readme: [msa-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-ita/README.md)
* model: transformer-align
* source language(s): ind zsm_Latn
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa.ita | 37.8 | 0.613 |
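A minimal sketch, not part of the original card, assuming the MarianMT classes from Hugging Face Transformers; the Malay sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ms-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Selamat pagi, apa khabar?"], return_tensors="pt", padding=True)  # Malay source
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))  # Italian output
```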
### System Info:
- hf_name: msa-ita
- source_languages: msa
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ms', 'it']
- src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.test.txt
- src_alpha3: msa
- tgt_alpha3: ita
- short_pair: ms-it
- chrF2_score: 0.613
- bleu: 37.8
- brevity_penalty: 0.995
- ref_len: 2758.0
- src_name: Malay (macrolanguage)
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: ms
- tgt_alpha2: it
- prefer_old: False
- long_pair: msa-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-nso-fi | a091d627ace0ba1342fa65d98bd170073c1a7dcf | 2021-09-10T13:59:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nso",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nso-fi | 3 | null | transformers | 20,696 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-fi
* source languages: nso
* target languages: fi
* OPUS readme: [nso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fi | 27.8 | 0.523 |
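A short, untested usage sketch (not part of the original card); replace the placeholder with Northern Sotho source text.

```python
from transformers import pipeline

# Northern Sotho -> Finnish
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-nso-fi")
print(translator("..."))  # pass Northern Sotho input here; output is Finnish
```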
|
Helsinki-NLP/opus-mt-pl-eo | 7f02da4aac0e1292655f5f39c9cb7964e49e068e | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pl",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pl-eo | 3 | null | transformers | 20,697 | ---
language:
- pl
- eo
tags:
- translation
license: apache-2.0
---
### pol-epo
* source group: Polish
* target group: Esperanto
* OPUS readme: [pol-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-epo/README.md)
* model: transformer-align
* source language(s): pol
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.epo | 24.8 | 0.451 |
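An illustrative pipeline sketch (not part of the original card); the Polish sentence is only an example input.

```python
from transformers import pipeline

# Polish -> Esperanto
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-pl-eo")
print(translator("Dzień dobry, jak się masz?"))
```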
### System Info:
- hf_name: pol-epo
- source_languages: pol
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'eo']
- src_constituents: {'pol'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.test.txt
- src_alpha3: pol
- tgt_alpha3: epo
- short_pair: pl-eo
- chrF2_score: 0.451
- bleu: 24.8
- brevity_penalty: 0.9670000000000001
- ref_len: 17191.0
- src_name: Polish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: pl
- tgt_alpha2: eo
- prefer_old: False
- long_pair: pol-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-pt-gl | 36fda1fd0a05c656bd1269e8c09fe26b6654e452 | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pt",
"gl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pt-gl | 3 | null | transformers | 20,698 | ---
language:
- pt
- gl
tags:
- translation
license: apache-2.0
---
### por-glg
* source group: Portuguese
* target group: Galician
* OPUS readme: [por-glg](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-glg/README.md)
* model: transformer-align
* source language(s): por
* target language(s): glg
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.glg | 55.8 | 0.737 |
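A hedged usage sketch with the MarianMT classes (added for reference, not part of the original card); the Portuguese sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pt-gl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bom dia, como você está?"], return_tensors="pt", padding=True)  # Portuguese source
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))  # Galician output
```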
### System Info:
- hf_name: por-glg
- source_languages: por
- target_languages: glg
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-glg/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'gl']
- src_constituents: {'por'}
- tgt_constituents: {'glg'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.test.txt
- src_alpha3: por
- tgt_alpha3: glg
- short_pair: pt-gl
- chrF2_score: 0.737
- bleu: 55.8
- brevity_penalty: 0.996
- ref_len: 2989.0
- src_name: Portuguese
- tgt_name: Galician
- train_date: 2020-06-16
- src_alpha2: pt
- tgt_alpha2: gl
- prefer_old: False
- long_pair: por-glg
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ru-eo | cdde46f679fed727208b0461b52bdbe5496f6091 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-eo | 3 | null | transformers | 20,699 | ---
language:
- ru
- eo
tags:
- translation
license: apache-2.0
---
### rus-epo
* source group: Russian
* target group: Esperanto
* OPUS readme: [rus-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-epo/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.epo | 24.2 | 0.436 |
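For reference (not part of the original card): a minimal pipeline sketch with an illustrative Russian sentence.

```python
from transformers import pipeline

# Russian -> Esperanto
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-eo")
print(translator("Привет, как дела?"))
```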
### System Info:
- hf_name: rus-epo
- source_languages: rus
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'eo']
- src_constituents: {'rus'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: epo
- short_pair: ru-eo
- chrF2_score: 0.436
- bleu: 24.2
- brevity_penalty: 0.925
- ref_len: 77197.0
- src_name: Russian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: eo
- prefer_old: False
- long_pair: rus-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |