modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
BSC-TeMU/roberta-large-bne-capitel-pos | 6ddd4a469f2a48870891d043ed34abe962a9f16a | 2021-10-21T10:31:47.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"pos",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | BSC-TeMU | null | BSC-TeMU/roberta-large-bne-capitel-pos | 26 | 3 | transformers | 7,500 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "pos"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
widget:
- text: "Festival de San Sebastián: Johnny Depp recibirá el premio Donostia en pleno rifirrafe judicial con Amber Heard"
- text: "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto."
- text: "Gracias a los datos de la BNE, se ha podido lograr este modelo del lenguaje."
- text: "El Tribunal Superior de Justicia se pronunció ayer: \"Hay base legal dentro del marco jurídico actual\"."
---
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-pos
# Spanish RoBERTa-large trained on BNE, fine-tuned for the CAPITEL Part-of-Speech (POS) dataset
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
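A minimal usage sketch (not part of the original card), assuming the `transformers` library is installed; the example sentence is adapted from the widget examples above:
```python
from transformers import pipeline

pos_tagger = pipeline(
    "token-classification",
    model="BSC-TeMU/roberta-large-bne-capitel-pos",
)
# Each token is returned with its predicted CAPITEL POS tag and a confidence score.
print(pos_tagger("El alcalde de Vigo ha comenzado a colocar las luces de Navidad en agosto."))
```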
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).
## Evaluation and results
F1 Score: 0.9851 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```bibtex
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28 | 933447601c98ddcc59e5a79fe03d2a8d0e124d89 | 2022-02-03T15:17:21.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | BatuhanYilmaz | null | BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28 | 26 | null | transformers | 7,501 | ---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset, which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
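As a side note not in the original card, here is a minimal sketch of querying the fine-tuned checkpoint with the `transformers` question-answering pipeline; the question and context below are hypothetical:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28",
)
# Hypothetical SQuAD-style question/context pair.
result = qa(
    question="Which dataset was the student fine-tuned on?",
    context="The DistilBERT student was fine-tuned on SQuAD v1.1 with a BERT teacher.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```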
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
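For illustration (not from the original card), computing such scores with the `squad` metric can be sketched as follows; the prediction/reference pair is made up, whereas the numbers above come from the full SQuAD v1.1 dev set:
```python
from datasets import load_metric

squad_metric = load_metric("squad")
# Illustrative inputs only.
predictions = [{"id": "1", "prediction_text": "Denver Broncos"}]
references = [{"id": "1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```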
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-da-ner | 36e7d767d339b7ed97e8861245db2aef8cb4aa03 | 2021-10-17T11:13:27.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-da-ner | 26 | null | transformers | 7,502 | ---
language:
- ar
license: apache-2.0
widget:
- text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"
---
# CAMeLBERT-DA NER Model
## Model description
**CAMeLBERT-DA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-da-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-da-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a da of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Cameron/BERT-mdgender-convai-binary | beb052a4dc3e234ff1dc25d3e28820d69532d722 | 2021-05-18T17:30:21.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Cameron | null | Cameron/BERT-mdgender-convai-binary | 26 | null | transformers | 7,503 | Entry not found |
CouchCat/ma_sa_v7_distil | 43770a92bed3d5bb5a8da1472eadb83dfd365006 | 2021-02-15T23:19:57.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"sentiment-analysis",
"license:mit"
] | text-classification | false | CouchCat | null | CouchCat/ma_sa_v7_distil | 26 | null | transformers | 7,504 | ---
language: en
license: mit
tags:
- sentiment-analysis
widget:
- text: "I am disappointed in the terrible quality of my dress"
---
### Description
A sentiment analysis model trained on customer feedback data using DistilBERT.
Possible sentiments are:
* negative
* neutral
* positive
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_sa_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_sa_v7_distil")
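# --- Illustrative continuation, not part of the original card ---
# The negative/neutral/positive labels are assumed from the description above;
# check model.config.id2label for the checkpoint's actual mapping.
import torch

inputs = tokenizer("I am disappointed in the terrible quality of my dress", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])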
``` |
Davlan/xlm-roberta-large-masakhaner | 36e6b01b4ebd3afc282e0ce198d0a04ddbfd58a8 | 2022-06-27T11:50:50.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"amh",
"hau",
"ibo",
"kin",
"lug",
"luo",
"pcm",
"swa",
"wol",
"yor",
"multilingual",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | false | Davlan | null | Davlan/xlm-roberta-large-masakhaner | 26 | null | transformers | 7,505 |
---
language:
- amh
- hau
- ibo
- kin
- lug
- luo
- pcm
- swa
- wol
- yor
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-large-masakhaner
## Model description
**xlm-roberta-large-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa large model. It achieves **state-of-the-art performance** on the NER task. It has been trained to recognize four types of entities: dates & times (DATE), locations (LOC), organizations (ORG), and persons (PER).
Specifically, this model is an *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
amh |75.76
hau |91.75
ibo |86.26
kin |76.38
lug |84.64
luo |80.65
pcm |89.55
swa |89.48
wol |70.70
yor |82.05
### BibTeX entry and citation info
```bibtex
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
DrMatters/rubert_cased | 58badf1655b5856f08b90eb14313fa4a3405ece9 | 2021-05-19T11:14:32.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | DrMatters | null | DrMatters/rubert_cased | 26 | null | transformers | 7,506 | Entry not found |
EleutherAI/enformer-191k_corr_coef_obj | 5c9d4159c1815c487b206367493033d113fa3eea | 2022-02-23T12:17:55.000Z | [
"pytorch",
"enformer",
"transformers",
"license:apache-2.0"
] | null | false | EleutherAI | null | EleutherAI/enformer-191k_corr_coef_obj | 26 | null | transformers | 7,507 | ---
license: apache-2.0
inference: false
---
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 196,608 base pairs with a target length of 896, with shift augmentation but without reverse complement, using a Poisson loss objective. It reaches a final human Pearson R of ~0.49.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
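As a rough sketch (not from the original card), loading this checkpoint with the `enformer-pytorch` package might look as follows; the `Enformer.from_pretrained` call and the ACGTN integer encoding are assumptions taken from that repository's README, so defer to the README if anything differs:
```python
import torch
from enformer_pytorch import Enformer  # import path assumed from the enformer-pytorch README

# Assumed loading call; check the enformer-pytorch README for the exact API.
model = Enformer.from_pretrained("EleutherAI/enformer-191k_corr_coef_obj")
model.eval()

# Random integer-encoded sequence of 196,608 bp (0-4 assumed to encode A, C, G, T, N).
seq = torch.randint(0, 5, (1, 196_608))
with torch.no_grad():
    output = model(seq)
print(output["human"].shape)  # expected (1, 896, 5313) for the human head
```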
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
``` |
Elron/bleurt-base-128 | 3dabe1a4ba7ca2041f5455262780ab797f0f7d0b | 2021-10-04T13:24:42.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Elron | null | Elron/bleurt-base-128 | 26 | 1 | transformers | 7,508 | ## BLEURT
PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-base-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-base-128")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([0.3598, 0.0723])
```
|
HJK/PickupLineGenerator | 9f62120ac2b28ef67731c4e5d41073d09a02b560 | 2021-05-21T10:05:21.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | HJK | null | HJK/PickupLineGenerator | 26 | null | transformers | 7,509 | Basically, it makes pickup lines.
https://huggingface.co/gpt2
|
Helsinki-NLP/opus-mt-bem-en | 8175ad6e29c44d6aa61a3cc3e0cc6b89432be48e | 2021-09-09T21:27:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bem",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bem-en | 26 | null | transformers | 7,510 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bem-en
* source languages: bem
* target languages: en
* OPUS readme: [bem-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.en | 33.4 | 0.491 |
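A minimal usage sketch (not part of the original card), assuming the `transformers` and `sentencepiece` packages are installed; the Bemba input is a hypothetical example:
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-bem-en")
# The pipeline returns a list with the English translation of the Bemba input.
print(translator("Mwapoleni mukwai.")[0]["translation_text"])
```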
|
Helsinki-NLP/opus-mt-en-niu | f52877dc5488bf560017c19e65a545112d7a8ec8 | 2021-09-09T21:38:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"niu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-niu | 26 | null | transformers | 7,511 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-niu
* source languages: en
* target languages: niu
* OPUS readme: [en-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.niu | 53.0 | 0.698 |
|
Helsinki-NLP/opus-mt-es-hr | 27f3c1660c42cb2fc6267a557debf6cfbeaae583 | 2021-09-09T21:42:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"hr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-hr | 26 | null | transformers | 7,512 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-hr
* source languages: es
* target languages: hr
* OPUS readme: [es-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-hr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-hr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-hr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-hr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.hr | 21.7 | 0.459 |
|
Helsinki-NLP/opus-mt-ine-ine | 82a5f65abdd0e196b05112464ff3dd552d484283 | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"en",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ine-ine | 26 | null | transformers | 7,513 | ---
language:
- ca
- es
- os
- ro
- fy
- cy
- sc
- is
- yi
- lb
- an
- sq
- fr
- ht
- rm
- ps
- af
- uk
- sl
- lt
- bg
- be
- gd
- si
- en
- br
- mk
- or
- mr
- ru
- fo
- co
- oc
- pl
- gl
- nb
- bn
- id
- hy
- da
- gv
- nl
- pt
- hi
- as
- kw
- ga
- sv
- gu
- wa
- lv
- el
- it
- hr
- ur
- nn
- de
- cs
- ine
tags:
- translation
license: apache-2.0
---
### ine-ine
* source group: Indo-European languages
* target group: Indo-European languages
* OPUS readme: [ine-ine](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-ine/README.md)
* model: transformer
* source language(s): afr afr_Arab aln ang_Latn arg asm ast awa bel bel_Latn ben bho bjn bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell eng enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye hye_Latn ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Grek lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pcd pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus rus_Latn san_Deva scn sco sgs sin slv snd_Arab spa sqi srd srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* target language(s): afr afr_Arab aln ang_Latn arg asm ast awa bel bel_Latn ben bho bjn bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell eng enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye hye_Latn ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Grek lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pcd pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus rus_Latn san_Deva scn sco sgs sin slv snd_Arab spa sqi srd srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.eval.txt)
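A minimal usage sketch (not part of the original card) showing the required target-language token, assuming `transformers` and `sentencepiece` are installed; `>>fra<<` (French) is one valid ID from the target language list above:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ine-ine"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial >>id<< token selects the target language (here French).
batch = tokenizer([">>fra<< This is a test sentence."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```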
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| euelections_dev2019.de-fr-deufra.deu.fra | 19.2 | 0.482 |
| euelections_dev2019.fr-de-fradeu.fra.deu | 15.8 | 0.470 |
| newsdev2014-enghin.eng.hin | 4.0 | 0.245 |
| newsdev2014-hineng.hin.eng | 6.8 | 0.301 |
| newsdev2016-enro-engron.eng.ron | 17.3 | 0.470 |
| newsdev2016-enro-roneng.ron.eng | 26.0 | 0.534 |
| newsdev2017-enlv-englav.eng.lav | 12.1 | 0.416 |
| newsdev2017-enlv-laveng.lav.eng | 15.9 | 0.443 |
| newsdev2019-engu-engguj.eng.guj | 2.5 | 0.200 |
| newsdev2019-engu-gujeng.guj.eng | 7.1 | 0.302 |
| newsdev2019-enlt-englit.eng.lit | 10.6 | 0.407 |
| newsdev2019-enlt-liteng.lit.eng | 14.9 | 0.428 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 22.6 | 0.507 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 23.5 | 0.495 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 25.1 | 0.528 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 26.4 | 0.517 |
| newssyscomb2009-cesdeu.ces.deu | 13.1 | 0.432 |
| newssyscomb2009-ceseng.ces.eng | 18.4 | 0.463 |
| newssyscomb2009-cesfra.ces.fra | 15.5 | 0.452 |
| newssyscomb2009-cesita.ces.ita | 14.8 | 0.458 |
| newssyscomb2009-cesspa.ces.spa | 18.4 | 0.462 |
| newssyscomb2009-deuces.deu.ces | 10.5 | 0.381 |
| newssyscomb2009-deueng.deu.eng | 19.5 | 0.467 |
| newssyscomb2009-deufra.deu.fra | 16.4 | 0.459 |
| newssyscomb2009-deuita.deu.ita | 15.5 | 0.456 |
| newssyscomb2009-deuspa.deu.spa | 18.4 | 0.466 |
| newssyscomb2009-engces.eng.ces | 11.9 | 0.394 |
| newssyscomb2009-engdeu.eng.deu | 13.9 | 0.446 |
| newssyscomb2009-engfra.eng.fra | 20.7 | 0.502 |
| newssyscomb2009-engita.eng.ita | 21.3 | 0.516 |
| newssyscomb2009-engspa.eng.spa | 22.3 | 0.506 |
| newssyscomb2009-fraces.fra.ces | 11.5 | 0.390 |
| newssyscomb2009-fradeu.fra.deu | 13.4 | 0.437 |
| newssyscomb2009-fraeng.fra.eng | 22.8 | 0.499 |
| newssyscomb2009-fraita.fra.ita | 22.2 | 0.533 |
| newssyscomb2009-fraspa.fra.spa | 26.2 | 0.539 |
| newssyscomb2009-itaces.ita.ces | 12.3 | 0.397 |
| newssyscomb2009-itadeu.ita.deu | 13.3 | 0.436 |
| newssyscomb2009-itaeng.ita.eng | 24.7 | 0.517 |
| newssyscomb2009-itafra.ita.fra | 24.0 | 0.528 |
| newssyscomb2009-itaspa.ita.spa | 26.3 | 0.537 |
| newssyscomb2009-spaces.spa.ces | 12.0 | 0.400 |
| newssyscomb2009-spadeu.spa.deu | 13.9 | 0.440 |
| newssyscomb2009-spaeng.spa.eng | 22.9 | 0.509 |
| newssyscomb2009-spafra.spa.fra | 24.2 | 0.538 |
| newssyscomb2009-spaita.spa.ita | 24.5 | 0.547 |
| news-test2008-cesdeu.ces.deu | 12.0 | 0.422 |
| news-test2008-cesfra.ces.fra | 15.1 | 0.444 |
| news-test2008-cesspa.ces.spa | 16.4 | 0.451 |
| news-test2008-deuces.deu.ces | 9.9 | 0.369 |
| news-test2008-deueng.deu.eng | 18.0 | 0.456 |
| news-test2008-deufra.deu.fra | 16.4 | 0.453 |
| news-test2008-deuspa.deu.spa | 17.0 | 0.452 |
| news-test2008-engces.eng.ces | 10.5 | 0.375 |
| news-test2008-engdeu.eng.deu | 14.5 | 0.439 |
| news-test2008-engfra.eng.fra | 18.9 | 0.481 |
| news-test2008-engspa.eng.spa | 20.9 | 0.491 |
| news-test2008-fraces.fra.ces | 10.7 | 0.380 |
| news-test2008-fradeu.fra.deu | 13.8 | 0.435 |
| news-test2008-fraeng.fra.eng | 19.8 | 0.479 |
| news-test2008-fraspa.fra.spa | 24.8 | 0.522 |
| news-test2008-spaces.spa.ces | 11.0 | 0.380 |
| news-test2008-spadeu.spa.deu | 14.0 | 0.433 |
| news-test2008-spaeng.spa.eng | 20.6 | 0.488 |
| news-test2008-spafra.spa.fra | 23.3 | 0.518 |
| newstest2009-cesdeu.ces.deu | 12.9 | 0.427 |
| newstest2009-ceseng.ces.eng | 17.0 | 0.456 |
| newstest2009-cesfra.ces.fra | 15.4 | 0.447 |
| newstest2009-cesita.ces.ita | 14.9 | 0.454 |
| newstest2009-cesspa.ces.spa | 17.1 | 0.458 |
| newstest2009-deuces.deu.ces | 10.3 | 0.370 |
| newstest2009-deueng.deu.eng | 17.7 | 0.458 |
| newstest2009-deufra.deu.fra | 15.9 | 0.447 |
| newstest2009-deuita.deu.ita | 14.7 | 0.446 |
| newstest2009-deuspa.deu.spa | 17.2 | 0.453 |
| newstest2009-engces.eng.ces | 11.0 | 0.387 |
| newstest2009-engdeu.eng.deu | 13.6 | 0.440 |
| newstest2009-engfra.eng.fra | 20.3 | 0.496 |
| newstest2009-engita.eng.ita | 20.8 | 0.509 |
| newstest2009-engspa.eng.spa | 21.9 | 0.503 |
| newstest2009-fraces.fra.ces | 11.3 | 0.385 |
| newstest2009-fradeu.fra.deu | 14.0 | 0.436 |
| newstest2009-fraeng.fra.eng | 21.8 | 0.496 |
| newstest2009-fraita.fra.ita | 22.1 | 0.526 |
| newstest2009-fraspa.fra.spa | 24.8 | 0.525 |
| newstest2009-itaces.ita.ces | 11.5 | 0.382 |
| newstest2009-itadeu.ita.deu | 13.3 | 0.430 |
| newstest2009-itaeng.ita.eng | 23.6 | 0.508 |
| newstest2009-itafra.ita.fra | 22.9 | 0.516 |
| newstest2009-itaspa.ita.spa | 25.4 | 0.529 |
| newstest2009-spaces.spa.ces | 11.3 | 0.386 |
| newstest2009-spadeu.spa.deu | 13.5 | 0.434 |
| newstest2009-spaeng.spa.eng | 22.4 | 0.500 |
| newstest2009-spafra.spa.fra | 23.2 | 0.520 |
| newstest2009-spaita.spa.ita | 24.0 | 0.538 |
| newstest2010-cesdeu.ces.deu | 13.1 | 0.431 |
| newstest2010-ceseng.ces.eng | 16.9 | 0.459 |
| newstest2010-cesfra.ces.fra | 15.6 | 0.450 |
| newstest2010-cesspa.ces.spa | 18.5 | 0.467 |
| newstest2010-deuces.deu.ces | 11.4 | 0.387 |
| newstest2010-deueng.deu.eng | 19.6 | 0.481 |
| newstest2010-deufra.deu.fra | 17.7 | 0.471 |
| newstest2010-deuspa.deu.spa | 20.0 | 0.478 |
| newstest2010-engces.eng.ces | 11.4 | 0.393 |
| newstest2010-engdeu.eng.deu | 15.1 | 0.448 |
| newstest2010-engfra.eng.fra | 21.4 | 0.506 |
| newstest2010-engspa.eng.spa | 25.0 | 0.525 |
| newstest2010-fraces.fra.ces | 11.1 | 0.386 |
| newstest2010-fradeu.fra.deu | 14.2 | 0.442 |
| newstest2010-fraeng.fra.eng | 22.6 | 0.507 |
| newstest2010-fraspa.fra.spa | 26.6 | 0.542 |
| newstest2010-spaces.spa.ces | 12.2 | 0.396 |
| newstest2010-spadeu.spa.deu | 15.1 | 0.445 |
| newstest2010-spaeng.spa.eng | 24.3 | 0.521 |
| newstest2010-spafra.spa.fra | 24.8 | 0.536 |
| newstest2011-cesdeu.ces.deu | 13.1 | 0.423 |
| newstest2011-ceseng.ces.eng | 18.2 | 0.463 |
| newstest2011-cesfra.ces.fra | 17.4 | 0.458 |
| newstest2011-cesspa.ces.spa | 18.9 | 0.464 |
| newstest2011-deuces.deu.ces | 11.2 | 0.376 |
| newstest2011-deueng.deu.eng | 18.3 | 0.464 |
| newstest2011-deufra.deu.fra | 17.0 | 0.457 |
| newstest2011-deuspa.deu.spa | 19.2 | 0.464 |
| newstest2011-engces.eng.ces | 12.4 | 0.395 |
| newstest2011-engdeu.eng.deu | 14.5 | 0.437 |
| newstest2011-engfra.eng.fra | 23.6 | 0.522 |
| newstest2011-engspa.eng.spa | 26.6 | 0.530 |
| newstest2011-fraces.fra.ces | 12.5 | 0.394 |
| newstest2011-fradeu.fra.deu | 14.2 | 0.433 |
| newstest2011-fraeng.fra.eng | 24.3 | 0.521 |
| newstest2011-fraspa.fra.spa | 29.1 | 0.551 |
| newstest2011-spaces.spa.ces | 12.3 | 0.390 |
| newstest2011-spadeu.spa.deu | 14.4 | 0.435 |
| newstest2011-spaeng.spa.eng | 25.0 | 0.521 |
| newstest2011-spafra.spa.fra | 25.6 | 0.537 |
| newstest2012-cesdeu.ces.deu | 13.1 | 0.420 |
| newstest2012-ceseng.ces.eng | 17.5 | 0.457 |
| newstest2012-cesfra.ces.fra | 16.8 | 0.452 |
| newstest2012-cesrus.ces.rus | 11.2 | 0.379 |
| newstest2012-cesspa.ces.spa | 18.1 | 0.457 |
| newstest2012-deuces.deu.ces | 11.2 | 0.368 |
| newstest2012-deueng.deu.eng | 19.4 | 0.472 |
| newstest2012-deufra.deu.fra | 17.7 | 0.464 |
| newstest2012-deurus.deu.rus | 10.3 | 0.370 |
| newstest2012-deuspa.deu.spa | 19.6 | 0.467 |
| newstest2012-engces.eng.ces | 11.1 | 0.375 |
| newstest2012-engdeu.eng.deu | 14.6 | 0.440 |
| newstest2012-engfra.eng.fra | 22.4 | 0.512 |
| newstest2012-engrus.eng.rus | 17.6 | 0.452 |
| newstest2012-engspa.eng.spa | 26.5 | 0.527 |
| newstest2012-fraces.fra.ces | 11.9 | 0.383 |
| newstest2012-fradeu.fra.deu | 14.6 | 0.437 |
| newstest2012-fraeng.fra.eng | 24.3 | 0.516 |
| newstest2012-frarus.fra.rus | 11.9 | 0.393 |
| newstest2012-fraspa.fra.spa | 28.3 | 0.545 |
| newstest2012-rusces.rus.ces | 9.0 | 0.340 |
| newstest2012-rusdeu.rus.deu | 10.0 | 0.383 |
| newstest2012-ruseng.rus.eng | 22.4 | 0.492 |
| newstest2012-rusfra.rus.fra | 13.3 | 0.427 |
| newstest2012-russpa.rus.spa | 16.6 | 0.437 |
| newstest2012-spaces.spa.ces | 11.9 | 0.381 |
| newstest2012-spadeu.spa.deu | 14.8 | 0.440 |
| newstest2012-spaeng.spa.eng | 26.5 | 0.534 |
| newstest2012-spafra.spa.fra | 25.0 | 0.539 |
| newstest2012-sparus.spa.rus | 12.4 | 0.401 |
| newstest2013-cesdeu.ces.deu | 14.3 | 0.434 |
| newstest2013-ceseng.ces.eng | 18.5 | 0.463 |
| newstest2013-cesfra.ces.fra | 16.6 | 0.444 |
| newstest2013-cesrus.ces.rus | 13.6 | 0.406 |
| newstest2013-cesspa.ces.spa | 18.2 | 0.455 |
| newstest2013-deuces.deu.ces | 11.7 | 0.380 |
| newstest2013-deueng.deu.eng | 20.9 | 0.481 |
| newstest2013-deufra.deu.fra | 18.1 | 0.460 |
| newstest2013-deurus.deu.rus | 11.7 | 0.384 |
| newstest2013-deuspa.deu.spa | 19.4 | 0.463 |
| newstest2013-engces.eng.ces | 12.7 | 0.394 |
| newstest2013-engdeu.eng.deu | 16.7 | 0.455 |
| newstest2013-engfra.eng.fra | 22.7 | 0.499 |
| newstest2013-engrus.eng.rus | 13.3 | 0.408 |
| newstest2013-engspa.eng.spa | 23.6 | 0.506 |
| newstest2013-fraces.fra.ces | 11.8 | 0.379 |
| newstest2013-fradeu.fra.deu | 15.6 | 0.446 |
| newstest2013-fraeng.fra.eng | 23.6 | 0.506 |
| newstest2013-frarus.fra.rus | 12.9 | 0.399 |
| newstest2013-fraspa.fra.spa | 25.3 | 0.519 |
| newstest2013-rusces.rus.ces | 11.6 | 0.376 |
| newstest2013-rusdeu.rus.deu | 12.4 | 0.410 |
| newstest2013-ruseng.rus.eng | 17.8 | 0.448 |
| newstest2013-rusfra.rus.fra | 14.8 | 0.434 |
| newstest2013-russpa.rus.spa | 17.9 | 0.446 |
| newstest2013-spaces.spa.ces | 12.5 | 0.391 |
| newstest2013-spadeu.spa.deu | 15.9 | 0.449 |
| newstest2013-spaeng.spa.eng | 24.0 | 0.518 |
| newstest2013-spafra.spa.fra | 24.3 | 0.522 |
| newstest2013-sparus.spa.rus | 13.9 | 0.411 |
| newstest2014-csen-ceseng.ces.eng | 19.0 | 0.475 |
| newstest2014-deen-deueng.deu.eng | 19.2 | 0.468 |
| newstest2014-fren-fraeng.fra.eng | 23.9 | 0.521 |
| newstest2014-hien-enghin.eng.hin | 5.9 | 0.268 |
| newstest2014-hien-hineng.hin.eng | 8.8 | 0.348 |
| newstest2014-ruen-ruseng.rus.eng | 19.1 | 0.475 |
| newstest2015-encs-ceseng.ces.eng | 17.9 | 0.450 |
| newstest2015-encs-engces.eng.ces | 12.1 | 0.392 |
| newstest2015-ende-deueng.deu.eng | 21.1 | 0.480 |
| newstest2015-ende-engdeu.eng.deu | 18.7 | 0.475 |
| newstest2015-enru-engrus.eng.rus | 15.4 | 0.431 |
| newstest2015-enru-ruseng.rus.eng | 18.1 | 0.454 |
| newstest2016-encs-ceseng.ces.eng | 18.6 | 0.465 |
| newstest2016-encs-engces.eng.ces | 13.3 | 0.403 |
| newstest2016-ende-deueng.deu.eng | 24.0 | 0.508 |
| newstest2016-ende-engdeu.eng.deu | 21.4 | 0.494 |
| newstest2016-enro-engron.eng.ron | 16.8 | 0.457 |
| newstest2016-enro-roneng.ron.eng | 24.9 | 0.522 |
| newstest2016-enru-engrus.eng.rus | 13.7 | 0.417 |
| newstest2016-enru-ruseng.rus.eng | 17.3 | 0.453 |
| newstest2017-encs-ceseng.ces.eng | 16.7 | 0.444 |
| newstest2017-encs-engces.eng.ces | 10.9 | 0.375 |
| newstest2017-ende-deueng.deu.eng | 21.5 | 0.484 |
| newstest2017-ende-engdeu.eng.deu | 17.5 | 0.464 |
| newstest2017-enlv-englav.eng.lav | 9.1 | 0.388 |
| newstest2017-enlv-laveng.lav.eng | 11.5 | 0.404 |
| newstest2017-enru-engrus.eng.rus | 14.8 | 0.432 |
| newstest2017-enru-ruseng.rus.eng | 19.3 | 0.467 |
| newstest2018-encs-ceseng.ces.eng | 17.1 | 0.450 |
| newstest2018-encs-engces.eng.ces | 10.9 | 0.380 |
| newstest2018-ende-deueng.deu.eng | 26.0 | 0.518 |
| newstest2018-ende-engdeu.eng.deu | 24.3 | 0.514 |
| newstest2018-enru-engrus.eng.rus | 12.5 | 0.417 |
| newstest2018-enru-ruseng.rus.eng | 16.4 | 0.443 |
| newstest2019-csde-cesdeu.ces.deu | 13.9 | 0.432 |
| newstest2019-decs-deuces.deu.ces | 11.7 | 0.383 |
| newstest2019-deen-deueng.deu.eng | 22.2 | 0.483 |
| newstest2019-defr-deufra.deu.fra | 20.1 | 0.496 |
| newstest2019-encs-engces.eng.ces | 12.3 | 0.389 |
| newstest2019-ende-engdeu.eng.deu | 22.0 | 0.497 |
| newstest2019-engu-engguj.eng.guj | 3.1 | 0.208 |
| newstest2019-enlt-englit.eng.lit | 7.8 | 0.369 |
| newstest2019-enru-engrus.eng.rus | 14.6 | 0.408 |
| newstest2019-frde-fradeu.fra.deu | 16.4 | 0.483 |
| newstest2019-guen-gujeng.guj.eng | 6.1 | 0.288 |
| newstest2019-lten-liteng.lit.eng | 16.9 | 0.456 |
| newstest2019-ruen-ruseng.rus.eng | 20.2 | 0.468 |
| Tatoeba-test.afr-ang.afr.ang | 16.0 | 0.152 |
| Tatoeba-test.afr-ces.afr.ces | 10.2 | 0.333 |
| Tatoeba-test.afr-dan.afr.dan | 32.6 | 0.651 |
| Tatoeba-test.afr-deu.afr.deu | 34.5 | 0.556 |
| Tatoeba-test.afr-eng.afr.eng | 48.1 | 0.638 |
| Tatoeba-test.afr-enm.afr.enm | 10.2 | 0.416 |
| Tatoeba-test.afr-fra.afr.fra | 41.9 | 0.612 |
| Tatoeba-test.afr-fry.afr.fry | 0.0 | 0.112 |
| Tatoeba-test.afr-gos.afr.gos | 0.3 | 0.068 |
| Tatoeba-test.afr-isl.afr.isl | 12.2 | 0.419 |
| Tatoeba-test.afr-ita.afr.ita | 48.7 | 0.637 |
| Tatoeba-test.afr-lat.afr.lat | 8.4 | 0.407 |
| Tatoeba-test.afr-ltz.afr.ltz | 19.0 | 0.357 |
| Tatoeba-test.afr-mkd.afr.mkd | 0.0 | 0.238 |
| Tatoeba-test.afr-msa.afr.msa | 1.4 | 0.080 |
| Tatoeba-test.afr-nld.afr.nld | 45.7 | 0.643 |
| Tatoeba-test.afr-nor.afr.nor | 55.3 | 0.687 |
| Tatoeba-test.afr-pol.afr.pol | 39.3 | 0.563 |
| Tatoeba-test.afr-por.afr.por | 33.9 | 0.586 |
| Tatoeba-test.afr-ron.afr.ron | 22.6 | 0.475 |
| Tatoeba-test.afr-rus.afr.rus | 32.1 | 0.525 |
| Tatoeba-test.afr-spa.afr.spa | 44.1 | 0.611 |
| Tatoeba-test.afr-swe.afr.swe | 71.6 | 0.814 |
| Tatoeba-test.afr-ukr.afr.ukr | 31.0 | 0.481 |
| Tatoeba-test.afr-yid.afr.yid | 100.0 | 1.000 |
| Tatoeba-test.ang-afr.ang.afr | 0.0 | 0.133 |
| Tatoeba-test.ang-ces.ang.ces | 5.5 | 0.129 |
| Tatoeba-test.ang-dan.ang.dan | 22.2 | 0.345 |
| Tatoeba-test.ang-deu.ang.deu | 6.3 | 0.251 |
| Tatoeba-test.ang-eng.ang.eng | 7.9 | 0.255 |
| Tatoeba-test.ang-enm.ang.enm | 0.8 | 0.133 |
| Tatoeba-test.ang-fao.ang.fao | 16.0 | 0.086 |
| Tatoeba-test.ang-fra.ang.fra | 6.0 | 0.185 |
| Tatoeba-test.ang-gos.ang.gos | 0.6 | 0.000 |
| Tatoeba-test.ang-isl.ang.isl | 16.0 | 0.102 |
| Tatoeba-test.ang-ita.ang.ita | 13.2 | 0.301 |
| Tatoeba-test.ang-kur.ang.kur | 7.6 | 0.062 |
| Tatoeba-test.ang-lad.ang.lad | 0.2 | 0.025 |
| Tatoeba-test.ang-lat.ang.lat | 6.6 | 0.198 |
| Tatoeba-test.ang-ltz.ang.ltz | 5.5 | 0.121 |
| Tatoeba-test.ang-por.ang.por | 11.4 | 0.498 |
| Tatoeba-test.ang-rus.ang.rus | 2.4 | 0.103 |
| Tatoeba-test.ang-spa.ang.spa | 8.1 | 0.249 |
| Tatoeba-test.ang-ukr.ang.ukr | 16.4 | 0.195 |
| Tatoeba-test.ang-yid.ang.yid | 1.1 | 0.117 |
| Tatoeba-test.arg-eng.arg.eng | 28.2 | 0.394 |
| Tatoeba-test.arg-fra.arg.fra | 39.8 | 0.445 |
| Tatoeba-test.arg-spa.arg.spa | 52.3 | 0.608 |
| Tatoeba-test.asm-dan.asm.dan | 8.6 | 0.261 |
| Tatoeba-test.asm-deu.asm.deu | 19.2 | 0.629 |
| Tatoeba-test.asm-eng.asm.eng | 18.2 | 0.369 |
| Tatoeba-test.asm-fra.asm.fra | 4.3 | 0.145 |
| Tatoeba-test.asm-hin.asm.hin | 4.5 | 0.366 |
| Tatoeba-test.asm-ita.asm.ita | 12.1 | 0.310 |
| Tatoeba-test.asm-zza.asm.zza | 8.1 | 0.050 |
| Tatoeba-test.ast-deu.ast.deu | 30.1 | 0.463 |
| Tatoeba-test.ast-eng.ast.eng | 27.6 | 0.441 |
| Tatoeba-test.ast-fra.ast.fra | 29.4 | 0.501 |
| Tatoeba-test.ast-gos.ast.gos | 2.6 | 0.030 |
| Tatoeba-test.ast-nds.ast.nds | 10.0 | 0.280 |
| Tatoeba-test.ast-nld.ast.nld | 100.0 | 1.000 |
| Tatoeba-test.ast-por.ast.por | 100.0 | 1.000 |
| Tatoeba-test.ast-rus.ast.rus | 35.9 | 0.682 |
| Tatoeba-test.ast-spa.ast.spa | 41.7 | 0.601 |
| Tatoeba-test.awa-eng.awa.eng | 2.4 | 0.201 |
| Tatoeba-test.bel-bul.bel.bul | 53.7 | 0.808 |
| Tatoeba-test.bel-ces.bel.ces | 27.6 | 0.483 |
| Tatoeba-test.bel-cym.bel.cym | 32.6 | 0.449 |
| Tatoeba-test.bel-dan.bel.dan | 29.1 | 0.506 |
| Tatoeba-test.bel-deu.bel.deu | 29.5 | 0.522 |
| Tatoeba-test.bel-eng.bel.eng | 31.8 | 0.512 |
| Tatoeba-test.bel-fra.bel.fra | 30.9 | 0.527 |
| Tatoeba-test.bel-hbs.bel.hbs | 39.3 | 0.608 |
| Tatoeba-test.bel-ita.bel.ita | 32.8 | 0.540 |
| Tatoeba-test.bel-kur.bel.kur | 12.7 | 0.178 |
| Tatoeba-test.bel-lad.bel.lad | 4.5 | 0.185 |
| Tatoeba-test.bel-lat.bel.lat | 3.7 | 0.251 |
| Tatoeba-test.bel-mkd.bel.mkd | 19.3 | 0.531 |
| Tatoeba-test.bel-msa.bel.msa | 1.0 | 0.147 |
| Tatoeba-test.bel-nld.bel.nld | 27.1 | 0.481 |
| Tatoeba-test.bel-nor.bel.nor | 37.0 | 0.494 |
| Tatoeba-test.bel-pol.bel.pol | 34.8 | 0.565 |
| Tatoeba-test.bel-por.bel.por | 21.7 | 0.401 |
| Tatoeba-test.bel-rus.bel.rus | 42.3 | 0.643 |
| Tatoeba-test.bel-spa.bel.spa | 28.2 | 0.534 |
| Tatoeba-test.bel-ukr.bel.ukr | 41.6 | 0.643 |
| Tatoeba-test.bel-yid.bel.yid | 2.9 | 0.254 |
| Tatoeba-test.ben-deu.ben.deu | 34.6 | 0.408 |
| Tatoeba-test.ben-eng.ben.eng | 26.5 | 0.430 |
| Tatoeba-test.ben-fra.ben.fra | 21.6 | 0.466 |
| Tatoeba-test.ben-ita.ben.ita | 26.8 | 0.424 |
| Tatoeba-test.ben-spa.ben.spa | 28.9 | 0.473 |
| Tatoeba-test.bho-eng.bho.eng | 21.0 | 0.384 |
| Tatoeba-test.bho-fra.bho.fra | 100.0 | 1.000 |
| Tatoeba-test.bre-ces.bre.ces | 2.2 | 0.178 |
| Tatoeba-test.bre-deu.bre.deu | 7.7 | 0.296 |
| Tatoeba-test.bre-eng.bre.eng | 13.6 | 0.309 |
| Tatoeba-test.bre-fra.bre.fra | 8.6 | 0.251 |
| Tatoeba-test.bre-ita.bre.ita | 12.2 | 0.272 |
| Tatoeba-test.bre-msa.bre.msa | 0.9 | 0.081 |
| Tatoeba-test.bre-nld.bre.nld | 3.0 | 0.217 |
| Tatoeba-test.bre-nor.bre.nor | 1.4 | 0.158 |
| Tatoeba-test.bul-bel.bul.bel | 14.1 | 0.582 |
| Tatoeba-test.bul-ces.bul.ces | 52.8 | 0.725 |
| Tatoeba-test.bul-dan.bul.dan | 66.9 | 0.951 |
| Tatoeba-test.bul-deu.bul.deu | 31.2 | 0.530 |
| Tatoeba-test.bul-ell.bul.ell | 29.1 | 0.497 |
| Tatoeba-test.bul-eng.bul.eng | 36.5 | 0.547 |
| Tatoeba-test.bul-enm.bul.enm | 5.3 | 0.299 |
| Tatoeba-test.bul-fas.bul.fas | 8.9 | 0.511 |
| Tatoeba-test.bul-fra.bul.fra | 36.1 | 0.558 |
| Tatoeba-test.bul-hbs.bul.hbs | 100.0 | 1.000 |
| Tatoeba-test.bul-ita.bul.ita | 24.5 | 0.479 |
| Tatoeba-test.bul-lad.bul.lad | 8.1 | 0.302 |
| Tatoeba-test.bul-lat.bul.lat | 13.4 | 0.337 |
| Tatoeba-test.bul-mkd.bul.mkd | 38.2 | 0.811 |
| Tatoeba-test.bul-msa.bul.msa | 15.0 | 0.431 |
| Tatoeba-test.bul-nld.bul.nld | 31.8 | 0.505 |
| Tatoeba-test.bul-nor.bul.nor | 66.9 | 0.951 |
| Tatoeba-test.bul-pol.bul.pol | 24.4 | 0.461 |
| Tatoeba-test.bul-por.bul.por | 29.2 | 0.484 |
| Tatoeba-test.bul-ron.bul.ron | 42.7 | 0.776 |
| Tatoeba-test.bul-rus.bul.rus | 28.7 | 0.522 |
| Tatoeba-test.bul-spa.bul.spa | 32.1 | 0.520 |
| Tatoeba-test.bul-swe.bul.swe | 66.9 | 0.611 |
| Tatoeba-test.bul-ukr.bul.ukr | 34.3 | 0.567 |
| Tatoeba-test.bul-yid.bul.yid | 13.7 | 0.163 |
| Tatoeba-test.cat-deu.cat.deu | 31.0 | 0.523 |
| Tatoeba-test.cat-ell.cat.ell | 17.0 | 0.423 |
| Tatoeba-test.cat-eng.cat.eng | 39.4 | 0.582 |
| Tatoeba-test.cat-enm.cat.enm | 5.3 | 0.370 |
| Tatoeba-test.cat-fao.cat.fao | 16.0 | 0.301 |
| Tatoeba-test.cat-fra.cat.fra | 41.0 | 0.606 |
| Tatoeba-test.cat-ita.cat.ita | 39.8 | 0.626 |
| Tatoeba-test.cat-nld.cat.nld | 35.9 | 0.555 |
| Tatoeba-test.cat-pol.cat.pol | 23.0 | 0.456 |
| Tatoeba-test.cat-por.cat.por | 38.9 | 0.618 |
| Tatoeba-test.cat-ron.cat.ron | 16.0 | 0.311 |
| Tatoeba-test.cat-rus.cat.rus | 28.8 | 0.507 |
| Tatoeba-test.cat-spa.cat.spa | 55.2 | 0.731 |
| Tatoeba-test.cat-swe.cat.swe | 100.0 | 1.000 |
| Tatoeba-test.cat-ukr.cat.ukr | 30.8 | 0.512 |
| Tatoeba-test.cat-yid.cat.yid | 100.0 | 1.000 |
| Tatoeba-test.ces-afr.ces.afr | 17.0 | 0.426 |
| Tatoeba-test.ces-ang.ces.ang | 3.3 | 0.165 |
| Tatoeba-test.ces-bel.ces.bel | 23.3 | 0.466 |
| Tatoeba-test.ces-bre.ces.bre | 0.7 | 0.126 |
| Tatoeba-test.ces-bul.ces.bul | 45.2 | 0.690 |
| Tatoeba-test.ces-cor.ces.cor | 3.4 | 0.072 |
| Tatoeba-test.ces-dan.ces.dan | 12.7 | 0.706 |
| Tatoeba-test.ces-deu.ces.deu | 32.2 | 0.526 |
| Tatoeba-test.ces-ell.ces.ell | 24.4 | 0.422 |
| Tatoeba-test.ces-eng.ces.eng | 33.8 | 0.529 |
| Tatoeba-test.ces-enm.ces.enm | 1.7 | 0.157 |
| Tatoeba-test.ces-fao.ces.fao | 3.7 | 0.252 |
| Tatoeba-test.ces-fas.ces.fas | 20.1 | 0.229 |
| Tatoeba-test.ces-fra.ces.fra | 36.9 | 0.564 |
| Tatoeba-test.ces-fry.ces.fry | 7.7 | 0.338 |
| Tatoeba-test.ces-grc.ces.grc | 0.6 | 0.011 |
| Tatoeba-test.ces-hbs.ces.hbs | 39.7 | 0.580 |
| Tatoeba-test.ces-hsb.ces.hsb | 7.0 | 0.230 |
| Tatoeba-test.ces-ita.ces.ita | 28.2 | 0.516 |
| Tatoeba-test.ces-lad.ces.lad | 1.7 | 0.303 |
| Tatoeba-test.ces-lat.ces.lat | 6.5 | 0.304 |
| Tatoeba-test.ces-ltz.ces.ltz | 6.6 | 0.202 |
| Tatoeba-test.ces-mkd.ces.mkd | 31.4 | 0.586 |
| Tatoeba-test.ces-msa.ces.msa | 6.4 | 0.312 |
| Tatoeba-test.ces-nds.ces.nds | 19.9 | 0.468 |
| Tatoeba-test.ces-nld.ces.nld | 35.1 | 0.535 |
| Tatoeba-test.ces-nor.ces.nor | 41.7 | 0.610 |
| Tatoeba-test.ces-pol.ces.pol | 30.5 | 0.530 |
| Tatoeba-test.ces-por.ces.por | 33.0 | 0.533 |
| Tatoeba-test.ces-ron.ces.ron | 9.9 | 0.406 |
| Tatoeba-test.ces-rus.ces.rus | 36.9 | 0.564 |
| Tatoeba-test.ces-slv.ces.slv | 4.1 | 0.236 |
| Tatoeba-test.ces-spa.ces.spa | 33.3 | 0.531 |
| Tatoeba-test.ces-swe.ces.swe | 51.4 | 0.586 |
| Tatoeba-test.ces-swg.ces.swg | 4.8 | 0.118 |
| Tatoeba-test.ces-ukr.ces.ukr | 34.6 | 0.522 |
| Tatoeba-test.ces-yid.ces.yid | 2.1 | 0.252 |
| Tatoeba-test.cor-ces.cor.ces | 8.9 | 0.233 |
| Tatoeba-test.cor-cym.cor.cym | 6.7 | 0.205 |
| Tatoeba-test.cor-deu.cor.deu | 4.8 | 0.211 |
| Tatoeba-test.cor-ell.cor.ell | 3.4 | 0.182 |
| Tatoeba-test.cor-eng.cor.eng | 4.4 | 0.193 |
| Tatoeba-test.cor-fra.cor.fra | 5.0 | 0.221 |
| Tatoeba-test.cor-ita.cor.ita | 6.6 | 0.211 |
| Tatoeba-test.cor-nld.cor.nld | 9.3 | 0.221 |
| Tatoeba-test.cor-nor.cor.nor | 19.6 | 0.282 |
| Tatoeba-test.cor-pol.cor.pol | 2.9 | 0.171 |
| Tatoeba-test.cor-por.cor.por | 4.3 | 0.187 |
| Tatoeba-test.cor-rus.cor.rus | 2.4 | 0.154 |
| Tatoeba-test.cor-spa.cor.spa | 3.6 | 0.187 |
| Tatoeba-test.cos-deu.cos.deu | 0.0 | 0.877 |
| Tatoeba-test.cos-eng.cos.eng | 39.2 | 0.473 |
| Tatoeba-test.cos-fra.cos.fra | 19.0 | 0.352 |
| Tatoeba-test.cos-pms.cos.pms | 1.6 | 0.066 |
| Tatoeba-test.csb-deu.csb.deu | 17.5 | 0.336 |
| Tatoeba-test.csb-eng.csb.eng | 14.0 | 0.347 |
| Tatoeba-test.csb-spa.csb.spa | 3.8 | 0.278 |
| Tatoeba-test.cym-bel.cym.bel | 100.0 | 1.000 |
| Tatoeba-test.cym-cor.cym.cor | 0.0 | 0.014 |
| Tatoeba-test.cym-deu.cym.deu | 32.6 | 0.507 |
| Tatoeba-test.cym-eng.cym.eng | 33.1 | 0.496 |
| Tatoeba-test.cym-fra.cym.fra | 27.0 | 0.447 |
| Tatoeba-test.cym-gla.cym.gla | 5.7 | 0.223 |
| Tatoeba-test.cym-gle.cym.gle | 13.1 | 0.380 |
| Tatoeba-test.cym-glv.cym.glv | 5.3 | 0.186 |
| Tatoeba-test.cym-ita.cym.ita | 28.3 | 0.498 |
| Tatoeba-test.cym-lat.cym.lat | 3.7 | 0.185 |
| Tatoeba-test.cym-msa.cym.msa | 8.0 | 0.067 |
| Tatoeba-test.cym-nor.cym.nor | 37.5 | 0.603 |
| Tatoeba-test.cym-pol.cym.pol | 37.8 | 0.488 |
| Tatoeba-test.cym-rus.cym.rus | 32.1 | 0.480 |
| Tatoeba-test.cym-spa.cym.spa | 31.6 | 0.523 |
| Tatoeba-test.cym-yid.cym.yid | 4.8 | 0.072 |
| Tatoeba-test.dan-afr.dan.afr | 40.5 | 0.774 |
| Tatoeba-test.dan-ang.dan.ang | 1.2 | 0.066 |
| Tatoeba-test.dan-asm.dan.asm | 13.1 | 0.156 |
| Tatoeba-test.dan-bel.dan.bel | 27.2 | 0.746 |
| Tatoeba-test.dan-bul.dan.bul | 35.4 | 0.529 |
| Tatoeba-test.dan-ces.dan.ces | 19.0 | 0.349 |
| Tatoeba-test.dan-deu.dan.deu | 35.8 | 0.582 |
| Tatoeba-test.dan-ell.dan.ell | 19.0 | 0.337 |
| Tatoeba-test.dan-eng.dan.eng | 43.4 | 0.609 |
| Tatoeba-test.dan-enm.dan.enm | 18.1 | 0.515 |
| Tatoeba-test.dan-fao.dan.fao | 9.7 | 0.162 |
| Tatoeba-test.dan-fas.dan.fas | 14.1 | 0.410 |
| Tatoeba-test.dan-fra.dan.fra | 47.0 | 0.640 |
| Tatoeba-test.dan-gos.dan.gos | 2.6 | 0.195 |
| Tatoeba-test.dan-isl.dan.isl | 12.2 | 0.344 |
| Tatoeba-test.dan-ita.dan.ita | 36.3 | 0.589 |
| Tatoeba-test.dan-kur.dan.kur | 3.5 | 0.270 |
| Tatoeba-test.dan-lad.dan.lad | 0.4 | 0.096 |
| Tatoeba-test.dan-lat.dan.lat | 3.9 | 0.376 |
| Tatoeba-test.dan-lav.dan.lav | 68.7 | 0.786 |
| Tatoeba-test.dan-ltz.dan.ltz | 71.4 | 0.554 |
| Tatoeba-test.dan-mar.dan.mar | 3.7 | 0.220 |
| Tatoeba-test.dan-nds.dan.nds | 4.9 | 0.219 |
| Tatoeba-test.dan-nld.dan.nld | 47.2 | 0.650 |
| Tatoeba-test.dan-nor.dan.nor | 58.8 | 0.749 |
| Tatoeba-test.dan-pol.dan.pol | 27.1 | 0.527 |
| Tatoeba-test.dan-por.dan.por | 41.5 | 0.616 |
| Tatoeba-test.dan-ron.dan.ron | 100.0 | 1.000 |
| Tatoeba-test.dan-rus.dan.rus | 30.8 | 0.518 |
| Tatoeba-test.dan-spa.dan.spa | 36.6 | 0.578 |
| Tatoeba-test.dan-swe.dan.swe | 53.8 | 0.696 |
| Tatoeba-test.dan-swg.dan.swg | 4.8 | 0.184 |
| Tatoeba-test.dan-ukr.dan.ukr | 15.9 | 0.489 |
| Tatoeba-test.dan-urd.dan.urd | 21.7 | 0.544 |
| Tatoeba-test.dan-yid.dan.yid | 13.0 | 0.252 |
| Tatoeba-test.deu-afr.deu.afr | 37.5 | 0.566 |
| Tatoeba-test.deu-ang.deu.ang | 0.6 | 0.131 |
| Tatoeba-test.deu-asm.deu.asm | 20.0 | 0.580 |
| Tatoeba-test.deu-ast.deu.ast | 16.5 | 0.389 |
| Tatoeba-test.deu-bel.deu.bel | 19.6 | 0.450 |
| Tatoeba-test.deu-ben.deu.ben | 34.5 | 0.319 |
| Tatoeba-test.deu-bre.deu.bre | 3.2 | 0.196 |
| Tatoeba-test.deu-bul.deu.bul | 32.6 | 0.517 |
| Tatoeba-test.deu-cat.deu.cat | 28.4 | 0.503 |
| Tatoeba-test.deu-ces.deu.ces | 24.3 | 0.465 |
| Tatoeba-test.deu-cor.deu.cor | 0.2 | 0.043 |
| Tatoeba-test.deu-cos.deu.cos | 2.4 | 0.020 |
| Tatoeba-test.deu-csb.deu.csb | 4.4 | 0.178 |
| Tatoeba-test.deu-cym.deu.cym | 11.3 | 0.378 |
| Tatoeba-test.deu-dan.deu.dan | 37.8 | 0.579 |
| Tatoeba-test.deu-dsb.deu.dsb | 0.1 | 0.082 |
| Tatoeba-test.deu-egl.deu.egl | 3.3 | 0.050 |
| Tatoeba-test.deu-ell.deu.ell | 27.1 | 0.485 |
| Tatoeba-test.deu-eng.deu.eng | 34.7 | 0.539 |
| Tatoeba-test.deu-enm.deu.enm | 6.7 | 0.331 |
| Tatoeba-test.deu-fas.deu.fas | 4.5 | 0.235 |
| Tatoeba-test.deu-fra.deu.fra | 31.9 | 0.527 |
| Tatoeba-test.deu-frr.deu.frr | 0.2 | 0.101 |
| Tatoeba-test.deu-fry.deu.fry | 13.7 | 0.358 |
| Tatoeba-test.deu-gla.deu.gla | 7.2 | 0.304 |
| Tatoeba-test.deu-gle.deu.gle | 8.9 | 0.349 |
| Tatoeba-test.deu-glg.deu.glg | 28.9 | 0.513 |
| Tatoeba-test.deu-gos.deu.gos | 0.7 | 0.157 |
| Tatoeba-test.deu-got.deu.got | 0.2 | 0.010 |
| Tatoeba-test.deu-grc.deu.grc | 0.1 | 0.005 |
| Tatoeba-test.deu-gsw.deu.gsw | 0.2 | 0.073 |
| Tatoeba-test.deu-hbs.deu.hbs | 23.2 | 0.470 |
| Tatoeba-test.deu-hin.deu.hin | 12.5 | 0.367 |
| Tatoeba-test.deu-hsb.deu.hsb | 5.4 | 0.249 |
| Tatoeba-test.deu-hye.deu.hye | 12.9 | 0.263 |
| Tatoeba-test.deu-isl.deu.isl | 16.5 | 0.395 |
| Tatoeba-test.deu-ita.deu.ita | 29.2 | 0.536 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.6 | 0.092 |
| Tatoeba-test.deu-kur.deu.kur | 11.2 | 0.183 |
| Tatoeba-test.deu-lad.deu.lad | 0.3 | 0.112 |
| Tatoeba-test.deu-lat.deu.lat | 6.4 | 0.301 |
| Tatoeba-test.deu-lav.deu.lav | 29.6 | 0.502 |
| Tatoeba-test.deu-lit.deu.lit | 17.4 | 0.445 |
| Tatoeba-test.deu-ltz.deu.ltz | 18.5 | 0.380 |
| Tatoeba-test.deu-mar.deu.mar | 7.9 | 0.245 |
| Tatoeba-test.deu-mkd.deu.mkd | 21.9 | 0.449 |
| Tatoeba-test.deu-msa.deu.msa | 21.9 | 0.478 |
| Tatoeba-test.deu-nds.deu.nds | 13.6 | 0.391 |
| Tatoeba-test.deu-nld.deu.nld | 37.2 | 0.574 |
| Tatoeba-test.deu-nor.deu.nor | 34.5 | 0.562 |
| Tatoeba-test.deu-oci.deu.oci | 4.7 | 0.261 |
| Tatoeba-test.deu-orv.deu.orv | 0.2 | 0.006 |
| Tatoeba-test.deu-pdc.deu.pdc | 0.6 | 0.064 |
| Tatoeba-test.deu-pms.deu.pms | 0.2 | 0.064 |
| Tatoeba-test.deu-pol.deu.pol | 23.6 | 0.477 |
| Tatoeba-test.deu-por.deu.por | 25.1 | 0.480 |
| Tatoeba-test.deu-prg.deu.prg | 0.2 | 0.070 |
| Tatoeba-test.deu-roh.deu.roh | 0.2 | 0.059 |
| Tatoeba-test.deu-rom.deu.rom | 5.2 | 0.179 |
| Tatoeba-test.deu-ron.deu.ron | 25.7 | 0.484 |
| Tatoeba-test.deu-rus.deu.rus | 27.1 | 0.494 |
| Tatoeba-test.deu-scn.deu.scn | 1.6 | 0.076 |
| Tatoeba-test.deu-sco.deu.sco | 10.8 | 0.281 |
| Tatoeba-test.deu-slv.deu.slv | 8.1 | 0.251 |
| Tatoeba-test.deu-spa.deu.spa | 31.5 | 0.534 |
| Tatoeba-test.deu-stq.deu.stq | 0.6 | 0.144 |
| Tatoeba-test.deu-swe.deu.swe | 39.1 | 0.572 |
| Tatoeba-test.deu-swg.deu.swg | 0.1 | 0.088 |
| Tatoeba-test.deu-tgk.deu.tgk | 13.1 | 0.406 |
| Tatoeba-test.deu-ukr.deu.ukr | 27.2 | 0.489 |
| Tatoeba-test.deu-urd.deu.urd | 13.4 | 0.350 |
| Tatoeba-test.deu-yid.deu.yid | 6.0 | 0.262 |
| Tatoeba-test.dsb-deu.dsb.deu | 14.1 | 0.366 |
| Tatoeba-test.dsb-eng.dsb.eng | 19.0 | 0.424 |
| Tatoeba-test.dsb-nld.dsb.nld | 15.4 | 0.342 |
| Tatoeba-test.dsb-pol.dsb.pol | 15.2 | 0.315 |
| Tatoeba-test.dsb-rus.dsb.rus | 35.4 | 0.394 |
| Tatoeba-test.dsb-spa.dsb.spa | 12.6 | 0.401 |
| Tatoeba-test.egl-deu.egl.deu | 2.9 | 0.168 |
| Tatoeba-test.egl-eng.egl.eng | 5.2 | 0.207 |
| Tatoeba-test.egl-fra.egl.fra | 6.4 | 0.215 |
| Tatoeba-test.egl-ita.egl.ita | 1.6 | 0.180 |
| Tatoeba-test.egl-spa.egl.spa | 3.9 | 0.199 |
| Tatoeba-test.ell-bul.ell.bul | 26.6 | 0.483 |
| Tatoeba-test.ell-cat.ell.cat | 20.2 | 0.398 |
| Tatoeba-test.ell-ces.ell.ces | 12.1 | 0.380 |
| Tatoeba-test.ell-cor.ell.cor | 0.7 | 0.039 |
| Tatoeba-test.ell-dan.ell.dan | 53.7 | 0.513 |
| Tatoeba-test.ell-deu.ell.deu | 30.5 | 0.503 |
| Tatoeba-test.ell-eng.ell.eng | 43.1 | 0.589 |
| Tatoeba-test.ell-enm.ell.enm | 12.7 | 0.541 |
| Tatoeba-test.ell-fas.ell.fas | 5.3 | 0.210 |
| Tatoeba-test.ell-fra.ell.fra | 39.5 | 0.563 |
| Tatoeba-test.ell-glg.ell.glg | 11.6 | 0.343 |
| Tatoeba-test.ell-ita.ell.ita | 30.9 | 0.524 |
| Tatoeba-test.ell-msa.ell.msa | 57.6 | 0.572 |
| Tatoeba-test.ell-nds.ell.nds | 4.9 | 0.244 |
| Tatoeba-test.ell-nld.ell.nld | 38.0 | 0.562 |
| Tatoeba-test.ell-nor.ell.nor | 40.8 | 0.615 |
| Tatoeba-test.ell-pap.ell.pap | 72.6 | 0.846 |
| Tatoeba-test.ell-pol.ell.pol | 26.8 | 0.514 |
| Tatoeba-test.ell-por.ell.por | 27.1 | 0.493 |
| Tatoeba-test.ell-rus.ell.rus | 30.8 | 0.512 |
| Tatoeba-test.ell-spa.ell.spa | 30.8 | 0.475 |
| Tatoeba-test.ell-swe.ell.swe | 36.0 | 0.521 |
| Tatoeba-test.ell-ukr.ell.ukr | 12.6 | 0.364 |
| Tatoeba-test.ell-yid.ell.yid | 100.0 | 1.000 |
| Tatoeba-test.eng-afr.eng.afr | 46.1 | 0.633 |
| Tatoeba-test.eng-ang.eng.ang | 5.1 | 0.136 |
| Tatoeba-test.eng-arg.eng.arg | 5.1 | 0.199 |
| Tatoeba-test.eng-asm.eng.asm | 0.8 | 0.208 |
| Tatoeba-test.eng-ast.eng.ast | 16.8 | 0.380 |
| Tatoeba-test.eng-awa.eng.awa | 0.2 | 0.002 |
| Tatoeba-test.eng-bel.eng.bel | 16.6 | 0.415 |
| Tatoeba-test.eng-ben.eng.ben | 7.0 | 0.321 |
| Tatoeba-test.eng-bho.eng.bho | 0.2 | 0.003 |
| Tatoeba-test.eng-bre.eng.bre | 6.6 | 0.251 |
| Tatoeba-test.eng-bul.eng.bul | 31.5 | 0.513 |
| Tatoeba-test.eng-cat.eng.cat | 33.5 | 0.550 |
| Tatoeba-test.eng-ces.eng.ces | 25.6 | 0.466 |
| Tatoeba-test.eng-cor.eng.cor | 0.1 | 0.035 |
| Tatoeba-test.eng-cos.eng.cos | 0.8 | 0.135 |
| Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.194 |
| Tatoeba-test.eng-cym.eng.cym | 18.8 | 0.422 |
| Tatoeba-test.eng-dan.eng.dan | 41.2 | 0.591 |
| Tatoeba-test.eng-deu.eng.deu | 27.9 | 0.503 |
| Tatoeba-test.eng-dsb.eng.dsb | 0.7 | 0.125 |
| Tatoeba-test.eng-egl.eng.egl | 0.1 | 0.062 |
| Tatoeba-test.eng-ell.eng.ell | 30.7 | 0.540 |
| Tatoeba-test.eng-enm.eng.enm | 4.9 | 0.283 |
| Tatoeba-test.eng-ext.eng.ext | 3.9 | 0.217 |
| Tatoeba-test.eng-fao.eng.fao | 5.9 | 0.276 |
| Tatoeba-test.eng-fas.eng.fas | 4.8 | 0.239 |
| Tatoeba-test.eng-fra.eng.fra | 34.6 | 0.551 |
| Tatoeba-test.eng-frm.eng.frm | 0.2 | 0.099 |
| Tatoeba-test.eng-frr.eng.frr | 5.5 | 0.040 |
| Tatoeba-test.eng-fry.eng.fry | 13.1 | 0.357 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.4 | 0.085 |
| Tatoeba-test.eng-gla.eng.gla | 7.4 | 0.293 |
| Tatoeba-test.eng-gle.eng.gle | 20.0 | 0.415 |
| Tatoeba-test.eng-glg.eng.glg | 29.9 | 0.528 |
| Tatoeba-test.eng-glv.eng.glv | 5.9 | 0.220 |
| Tatoeba-test.eng-gos.eng.gos | 0.5 | 0.137 |
| Tatoeba-test.eng-got.eng.got | 0.1 | 0.009 |
| Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.5 | 0.103 |
| Tatoeba-test.eng-guj.eng.guj | 6.4 | 0.241 |
| Tatoeba-test.eng-hat.eng.hat | 28.2 | 0.460 |
| Tatoeba-test.eng-hbs.eng.hbs | 26.0 | 0.485 |
| Tatoeba-test.eng-hif.eng.hif | 0.8 | 0.228 |
| Tatoeba-test.eng-hin.eng.hin | 11.2 | 0.364 |
| Tatoeba-test.eng-hsb.eng.hsb | 10.6 | 0.277 |
| Tatoeba-test.eng-hye.eng.hye | 10.9 | 0.307 |
| Tatoeba-test.eng-isl.eng.isl | 13.8 | 0.368 |
| Tatoeba-test.eng-ita.eng.ita | 33.8 | 0.571 |
| Tatoeba-test.eng-jdt.eng.jdt | 3.0 | 0.007 |
| Tatoeba-test.eng-kok.eng.kok | 4.8 | 0.005 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.4 | 0.092 |
| Tatoeba-test.eng-kur.eng.kur | 9.0 | 0.174 |
| Tatoeba-test.eng-lad.eng.lad | 0.5 | 0.144 |
| Tatoeba-test.eng-lah.eng.lah | 0.1 | 0.000 |
| Tatoeba-test.eng-lat.eng.lat | 7.7 | 0.333 |
| Tatoeba-test.eng-lav.eng.lav | 25.1 | 0.480 |
| Tatoeba-test.eng-lij.eng.lij | 0.4 | 0.101 |
| Tatoeba-test.eng-lit.eng.lit | 21.0 | 0.492 |
| Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.143 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.5 | 0.135 |
| Tatoeba-test.eng-ltz.eng.ltz | 15.6 | 0.345 |
| Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.251 |
| Tatoeba-test.eng-mar.eng.mar | 9.5 | 0.326 |
| Tatoeba-test.eng-mfe.eng.mfe | 54.1 | 0.747 |
| Tatoeba-test.eng-mkd.eng.mkd | 29.8 | 0.503 |
| Tatoeba-test.eng-msa.eng.msa | 20.0 | 0.449 |
| Tatoeba-test.eng-mwl.eng.mwl | 9.3 | 0.231 |
| Tatoeba-test.eng-nds.eng.nds | 12.2 | 0.357 |
| Tatoeba-test.eng-nep.eng.nep | 0.2 | 0.003 |
| Tatoeba-test.eng-nld.eng.nld | 37.1 | 0.570 |
| Tatoeba-test.eng-non.eng.non | 0.5 | 0.078 |
| Tatoeba-test.eng-nor.eng.nor | 38.4 | 0.575 |
| Tatoeba-test.eng-oci.eng.oci | 4.8 | 0.249 |
| Tatoeba-test.eng-ori.eng.ori | 2.8 | 0.185 |
| Tatoeba-test.eng-orv.eng.orv | 0.1 | 0.011 |
| Tatoeba-test.eng-oss.eng.oss | 2.6 | 0.166 |
| Tatoeba-test.eng-pan.eng.pan | 2.6 | 0.214 |
| Tatoeba-test.eng-pap.eng.pap | 39.8 | 0.566 |
| Tatoeba-test.eng-pdc.eng.pdc | 1.0 | 0.131 |
| Tatoeba-test.eng-pms.eng.pms | 0.9 | 0.124 |
| Tatoeba-test.eng-pol.eng.pol | 26.2 | 0.500 |
| Tatoeba-test.eng-por.eng.por | 31.5 | 0.545 |
| Tatoeba-test.eng-prg.eng.prg | 0.2 | 0.088 |
| Tatoeba-test.eng-pus.eng.pus | 0.4 | 0.108 |
| Tatoeba-test.eng-roh.eng.roh | 1.8 | 0.192 |
| Tatoeba-test.eng-rom.eng.rom | 7.6 | 0.313 |
| Tatoeba-test.eng-ron.eng.ron | 27.6 | 0.508 |
| Tatoeba-test.eng-rue.eng.rue | 0.1 | 0.011 |
| Tatoeba-test.eng-rus.eng.rus | 28.6 | 0.496 |
| Tatoeba-test.eng-san.eng.san | 2.0 | 0.098 |
| Tatoeba-test.eng-scn.eng.scn | 0.9 | 0.080 |
| Tatoeba-test.eng-sco.eng.sco | 24.5 | 0.501 |
| Tatoeba-test.eng-sgs.eng.sgs | 1.3 | 0.105 |
| Tatoeba-test.eng-sin.eng.sin | 3.0 | 0.178 |
| Tatoeba-test.eng-slv.eng.slv | 12.5 | 0.298 |
| Tatoeba-test.eng-snd.eng.snd | 1.7 | 0.214 |
| Tatoeba-test.eng-spa.eng.spa | 36.3 | 0.575 |
| Tatoeba-test.eng-sqi.eng.sqi | 22.1 | 0.459 |
| Tatoeba-test.eng-stq.eng.stq | 5.2 | 0.316 |
| Tatoeba-test.eng-swe.eng.swe | 42.4 | 0.591 |
| Tatoeba-test.eng-swg.eng.swg | 0.6 | 0.145 |
| Tatoeba-test.eng-tgk.eng.tgk | 1.9 | 0.255 |
| Tatoeba-test.eng-tly.eng.tly | 0.3 | 0.054 |
| Tatoeba-test.eng-ukr.eng.ukr | 27.3 | 0.478 |
| Tatoeba-test.eng-urd.eng.urd | 7.0 | 0.310 |
| Tatoeba-test.eng-vec.eng.vec | 0.9 | 0.116 |
| Tatoeba-test.eng-wln.eng.wln | 4.0 | 0.164 |
| Tatoeba-test.eng-yid.eng.yid | 5.9 | 0.260 |
| Tatoeba-test.eng-zza.eng.zza | 0.4 | 0.071 |
| Tatoeba-test.enm-afr.enm.afr | 20.1 | 0.420 |
| Tatoeba-test.enm-ang.enm.ang | 0.6 | 0.057 |
| Tatoeba-test.enm-bul.enm.bul | 22.8 | 0.278 |
| Tatoeba-test.enm-cat.enm.cat | 9.0 | 0.360 |
| Tatoeba-test.enm-ces.enm.ces | 19.0 | 0.324 |
| Tatoeba-test.enm-dan.enm.dan | 35.8 | 0.523 |
| Tatoeba-test.enm-deu.enm.deu | 35.7 | 0.495 |
| Tatoeba-test.enm-ell.enm.ell | 42.7 | 0.644 |
| Tatoeba-test.enm-eng.enm.eng | 22.4 | 0.477 |
| Tatoeba-test.enm-fas.enm.fas | 4.3 | 0.141 |
| Tatoeba-test.enm-fra.enm.fra | 9.0 | 0.345 |
| Tatoeba-test.enm-fry.enm.fry | 16.0 | 0.289 |
| Tatoeba-test.enm-gle.enm.gle | 4.1 | 0.143 |
| Tatoeba-test.enm-gos.enm.gos | 3.0 | 0.247 |
| Tatoeba-test.enm-hbs.enm.hbs | 11.6 | 0.294 |
| Tatoeba-test.enm-isl.enm.isl | 19.0 | 0.220 |
| Tatoeba-test.enm-ita.enm.ita | 4.8 | 0.188 |
| Tatoeba-test.enm-ksh.enm.ksh | 6.1 | 0.136 |
| Tatoeba-test.enm-kur.enm.kur | 16.0 | 0.054 |
| Tatoeba-test.enm-lad.enm.lad | 0.7 | 0.124 |
| Tatoeba-test.enm-lat.enm.lat | 5.4 | 0.238 |
| Tatoeba-test.enm-mwl.enm.mwl | 10.5 | 0.155 |
| Tatoeba-test.enm-nds.enm.nds | 18.6 | 0.427 |
| Tatoeba-test.enm-nld.enm.nld | 38.9 | 0.611 |
| Tatoeba-test.enm-nor.enm.nor | 6.8 | 0.276 |
| Tatoeba-test.enm-oci.enm.oci | 10.5 | 0.138 |
| Tatoeba-test.enm-por.enm.por | 12.7 | 0.088 |
| Tatoeba-test.enm-ron.enm.ron | 7.6 | 0.109 |
| Tatoeba-test.enm-rus.enm.rus | 18.8 | 0.254 |
| Tatoeba-test.enm-spa.enm.spa | 21.4 | 0.339 |
| Tatoeba-test.enm-ukr.enm.ukr | 4.0 | 0.440 |
| Tatoeba-test.enm-yid.enm.yid | 5.3 | 0.231 |
| Tatoeba-test.ext-eng.ext.eng | 24.9 | 0.420 |
| Tatoeba-test.fao-ang.fao.ang | 0.0 | 0.056 |
| Tatoeba-test.fao-cat.fao.cat | 16.0 | 0.171 |
| Tatoeba-test.fao-ces.fao.ces | 2.1 | 0.258 |
| Tatoeba-test.fao-dan.fao.dan | 43.5 | 0.557 |
| Tatoeba-test.fao-eng.fao.eng | 21.3 | 0.402 |
| Tatoeba-test.fao-fra.fao.fra | 3.0 | 0.164 |
| Tatoeba-test.fao-gos.fao.gos | 12.7 | 0.142 |
| Tatoeba-test.fao-isl.fao.isl | 10.5 | 0.131 |
| Tatoeba-test.fao-msa.fao.msa | 0.6 | 0.087 |
| Tatoeba-test.fao-nor.fao.nor | 26.2 | 0.443 |
| Tatoeba-test.fao-pol.fao.pol | 3.6 | 0.176 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 0.632 |
| Tatoeba-test.fas-bul.fas.bul | 5.8 | 0.163 |
| Tatoeba-test.fas-ces.fas.ces | 14.5 | 0.104 |
| Tatoeba-test.fas-dan.fas.dan | 53.7 | 0.504 |
| Tatoeba-test.fas-deu.fas.deu | 8.5 | 0.311 |
| Tatoeba-test.fas-ell.fas.ell | 8.7 | 0.259 |
| Tatoeba-test.fas-eng.fas.eng | 10.3 | 0.303 |
| Tatoeba-test.fas-enm.fas.enm | 1.3 | 0.006 |
| Tatoeba-test.fas-fra.fas.fra | 8.6 | 0.331 |
| Tatoeba-test.fas-ita.fas.ita | 7.2 | 0.301 |
| Tatoeba-test.fas-lad.fas.lad | 0.4 | 0.074 |
| Tatoeba-test.fas-lat.fas.lat | 14.4 | 0.256 |
| Tatoeba-test.fas-msa.fas.msa | 9.8 | 0.325 |
| Tatoeba-test.fas-nds.fas.nds | 6.6 | 0.127 |
| Tatoeba-test.fas-nld.fas.nld | 50.0 | 0.657 |
| Tatoeba-test.fas-pol.fas.pol | 4.5 | 0.223 |
| Tatoeba-test.fas-por.fas.por | 8.6 | 0.316 |
| Tatoeba-test.fas-ron.fas.ron | 19.1 | 0.445 |
| Tatoeba-test.fas-rus.fas.rus | 9.8 | 0.313 |
| Tatoeba-test.fas-spa.fas.spa | 9.1 | 0.318 |
| Tatoeba-test.fas-ukr.fas.ukr | 4.8 | 0.213 |
| Tatoeba-test.fas-yid.fas.yid | 2.0 | 0.138 |
| Tatoeba-test.fra-afr.fra.afr | 49.7 | 0.630 |
| Tatoeba-test.fra-ang.fra.ang | 1.0 | 0.105 |
| Tatoeba-test.fra-arg.fra.arg | 0.0 | 0.011 |
| Tatoeba-test.fra-asm.fra.asm | 4.1 | 0.194 |
| Tatoeba-test.fra-ast.fra.ast | 23.0 | 0.410 |
| Tatoeba-test.fra-bel.fra.bel | 22.2 | 0.448 |
| Tatoeba-test.fra-ben.fra.ben | 6.4 | 0.341 |
| Tatoeba-test.fra-bho.fra.bho | 1.2 | 0.035 |
| Tatoeba-test.fra-bre.fra.bre | 3.4 | 0.204 |
| Tatoeba-test.fra-bul.fra.bul | 31.2 | 0.528 |
| Tatoeba-test.fra-cat.fra.cat | 33.9 | 0.570 |
| Tatoeba-test.fra-ces.fra.ces | 26.9 | 0.490 |
| Tatoeba-test.fra-cor.fra.cor | 0.2 | 0.039 |
| Tatoeba-test.fra-cos.fra.cos | 0.3 | 0.061 |
| Tatoeba-test.fra-cym.fra.cym | 17.3 | 0.455 |
| Tatoeba-test.fra-dan.fra.dan | 47.1 | 0.634 |
| Tatoeba-test.fra-deu.fra.deu | 31.1 | 0.530 |
| Tatoeba-test.fra-egl.fra.egl | 0.7 | 0.061 |
| Tatoeba-test.fra-ell.fra.ell | 32.4 | 0.544 |
| Tatoeba-test.fra-eng.fra.eng | 40.1 | 0.583 |
| Tatoeba-test.fra-enm.fra.enm | 5.1 | 0.207 |
| Tatoeba-test.fra-fao.fra.fao | 1.8 | 0.304 |
| Tatoeba-test.fra-fas.fra.fas | 5.6 | 0.233 |
| Tatoeba-test.fra-frm.fra.frm | 0.3 | 0.149 |
| Tatoeba-test.fra-frr.fra.frr | 6.4 | 0.412 |
| Tatoeba-test.fra-fry.fra.fry | 11.4 | 0.357 |
| Tatoeba-test.fra-gcf.fra.gcf | 0.1 | 0.067 |
| Tatoeba-test.fra-gla.fra.gla | 9.1 | 0.316 |
| Tatoeba-test.fra-gle.fra.gle | 16.8 | 0.416 |
| Tatoeba-test.fra-glg.fra.glg | 34.5 | 0.562 |
| Tatoeba-test.fra-gos.fra.gos | 5.5 | 0.204 |
| Tatoeba-test.fra-got.fra.got | 0.2 | 0.001 |
| Tatoeba-test.fra-grc.fra.grc | 0.1 | 0.006 |
| Tatoeba-test.fra-hat.fra.hat | 20.8 | 0.424 |
| Tatoeba-test.fra-hbs.fra.hbs | 28.9 | 0.511 |
| Tatoeba-test.fra-hin.fra.hin | 5.1 | 0.336 |
| Tatoeba-test.fra-hye.fra.hye | 11.5 | 0.401 |
| Tatoeba-test.fra-isl.fra.isl | 17.2 | 0.362 |
| Tatoeba-test.fra-ita.fra.ita | 37.7 | 0.606 |
| Tatoeba-test.fra-ksh.fra.ksh | 2.8 | 0.148 |
| Tatoeba-test.fra-kur.fra.kur | 14.3 | 0.188 |
| Tatoeba-test.fra-lad.fra.lad | 0.4 | 0.129 |
| Tatoeba-test.fra-lat.fra.lat | 2.8 | 0.258 |
| Tatoeba-test.fra-lav.fra.lav | 30.3 | 0.490 |
| Tatoeba-test.fra-lij.fra.lij | 0.3 | 0.099 |
| Tatoeba-test.fra-lit.fra.lit | 18.3 | 0.461 |
| Tatoeba-test.fra-lld.fra.lld | 0.6 | 0.185 |
| Tatoeba-test.fra-lmo.fra.lmo | 1.2 | 0.163 |
| Tatoeba-test.fra-ltz.fra.ltz | 15.3 | 0.385 |
| Tatoeba-test.fra-mar.fra.mar | 45.7 | 0.393 |
| Tatoeba-test.fra-mkd.fra.mkd | 29.5 | 0.498 |
| Tatoeba-test.fra-msa.fra.msa | 19.4 | 0.456 |
| Tatoeba-test.fra-nds.fra.nds | 12.9 | 0.356 |
| Tatoeba-test.fra-nld.fra.nld | 33.0 | 0.532 |
| Tatoeba-test.fra-non.fra.non | 1.2 | 0.072 |
| Tatoeba-test.fra-nor.fra.nor | 35.1 | 0.553 |
| Tatoeba-test.fra-oci.fra.oci | 6.8 | 0.313 |
| Tatoeba-test.fra-orv.fra.orv | 0.2 | 0.004 |
| Tatoeba-test.fra-oss.fra.oss | 3.6 | 0.112 |
| Tatoeba-test.fra-pap.fra.pap | 78.3 | 0.917 |
| Tatoeba-test.fra-pcd.fra.pcd | 0.1 | 0.084 |
| Tatoeba-test.fra-pms.fra.pms | 0.3 | 0.117 |
| Tatoeba-test.fra-pol.fra.pol | 22.4 | 0.468 |
| Tatoeba-test.fra-por.fra.por | 33.0 | 0.559 |
| Tatoeba-test.fra-prg.fra.prg | 0.6 | 0.084 |
| Tatoeba-test.fra-roh.fra.roh | 5.9 | 0.278 |
| Tatoeba-test.fra-rom.fra.rom | 4.2 | 0.257 |
| Tatoeba-test.fra-ron.fra.ron | 29.7 | 0.531 |
| Tatoeba-test.fra-rus.fra.rus | 28.8 | 0.498 |
| Tatoeba-test.fra-scn.fra.scn | 0.4 | 0.056 |
| Tatoeba-test.fra-sco.fra.sco | 1.7 | 0.222 |
| Tatoeba-test.fra-slv.fra.slv | 2.4 | 0.207 |
| Tatoeba-test.fra-spa.fra.spa | 38.6 | 0.598 |
| Tatoeba-test.fra-sqi.fra.sqi | 23.9 | 0.455 |
| Tatoeba-test.fra-srd.fra.srd | 1.2 | 0.159 |
| Tatoeba-test.fra-swe.fra.swe | 44.2 | 0.609 |
| Tatoeba-test.fra-swg.fra.swg | 2.4 | 0.123 |
| Tatoeba-test.fra-tgk.fra.tgk | 2.8 | 0.244 |
| Tatoeba-test.fra-tly.fra.tly | 0.5 | 0.034 |
| Tatoeba-test.fra-ukr.fra.ukr | 26.7 | 0.474 |
| Tatoeba-test.fra-urd.fra.urd | 2.3 | 0.333 |
| Tatoeba-test.fra-vec.fra.vec | 0.6 | 0.088 |
| Tatoeba-test.fra-wln.fra.wln | 5.3 | 0.178 |
| Tatoeba-test.fra-yid.fra.yid | 8.7 | 0.271 |
| Tatoeba-test.frm-eng.frm.eng | 19.2 | 0.394 |
| Tatoeba-test.frm-fra.frm.fra | 12.3 | 0.482 |
| Tatoeba-test.frr-deu.frr.deu | 8.3 | 0.286 |
| Tatoeba-test.frr-eng.frr.eng | 6.1 | 0.181 |
| Tatoeba-test.frr-fra.frr.fra | 12.7 | 0.535 |
| Tatoeba-test.frr-fry.frr.fry | 4.1 | 0.144 |
| Tatoeba-test.frr-gos.frr.gos | 0.5 | 0.033 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.127 |
| Tatoeba-test.frr-nld.frr.nld | 6.9 | 0.233 |
| Tatoeba-test.frr-stq.frr.stq | 0.5 | 0.045 |
| Tatoeba-test.fry-afr.fry.afr | 0.0 | 0.244 |
| Tatoeba-test.fry-ces.fry.ces | 4.2 | 0.280 |
| Tatoeba-test.fry-deu.fry.deu | 21.7 | 0.448 |
| Tatoeba-test.fry-eng.fry.eng | 22.9 | 0.431 |
| Tatoeba-test.fry-enm.fry.enm | 10.7 | 0.140 |
| Tatoeba-test.fry-fra.fry.fra | 31.8 | 0.455 |
| Tatoeba-test.fry-frr.fry.frr | 0.5 | 0.040 |
| Tatoeba-test.fry-gos.fry.gos | 0.7 | 0.204 |
| Tatoeba-test.fry-ita.fry.ita | 34.8 | 0.528 |
| Tatoeba-test.fry-lat.fry.lat | 8.1 | 0.318 |
| Tatoeba-test.fry-ltz.fry.ltz | 21.4 | 0.324 |
| Tatoeba-test.fry-msa.fry.msa | 0.1 | 0.000 |
| Tatoeba-test.fry-nds.fry.nds | 6.6 | 0.127 |
| Tatoeba-test.fry-nld.fry.nld | 35.7 | 0.576 |
| Tatoeba-test.fry-nor.fry.nor | 32.6 | 0.511 |
| Tatoeba-test.fry-pol.fry.pol | 17.7 | 0.342 |
| Tatoeba-test.fry-por.fry.por | 12.1 | 0.304 |
| Tatoeba-test.fry-rus.fry.rus | 31.7 | 0.438 |
| Tatoeba-test.fry-spa.fry.spa | 30.6 | 0.479 |
| Tatoeba-test.fry-stq.fry.stq | 0.5 | 0.156 |
| Tatoeba-test.fry-swe.fry.swe | 27.5 | 0.247 |
| Tatoeba-test.fry-ukr.fry.ukr | 16.1 | 0.330 |
| Tatoeba-test.fry-yid.fry.yid | 4.0 | 0.167 |
| Tatoeba-test.gcf-eng.gcf.eng | 13.2 | 0.257 |
| Tatoeba-test.gcf-fra.gcf.fra | 6.0 | 0.241 |
| Tatoeba-test.gcf-lad.gcf.lad | 0.0 | 0.170 |
| Tatoeba-test.gcf-por.gcf.por | 0.0 | 0.427 |
| Tatoeba-test.gcf-rus.gcf.rus | 0.0 | 1.000 |
| Tatoeba-test.gcf-spa.gcf.spa | 31.8 | 0.374 |
| Tatoeba-test.gla-cym.gla.cym | 11.5 | 0.416 |
| Tatoeba-test.gla-deu.gla.deu | 15.1 | 0.348 |
| Tatoeba-test.gla-eng.gla.eng | 17.5 | 0.329 |
| Tatoeba-test.gla-fra.gla.fra | 13.1 | 0.346 |
| Tatoeba-test.gla-ita.gla.ita | 12.1 | 0.306 |
| Tatoeba-test.gla-ksh.gla.ksh | 8.0 | 0.035 |
| Tatoeba-test.gla-pol.gla.pol | 20.8 | 0.299 |
| Tatoeba-test.gla-por.gla.por | 13.7 | 0.355 |
| Tatoeba-test.gla-rus.gla.rus | 24.7 | 0.423 |
| Tatoeba-test.gla-spa.gla.spa | 12.7 | 0.322 |
| Tatoeba-test.gle-cym.gle.cym | 7.8 | 0.288 |
| Tatoeba-test.gle-deu.gle.deu | 13.5 | 0.390 |
| Tatoeba-test.gle-eng.gle.eng | 32.0 | 0.490 |
| Tatoeba-test.gle-enm.gle.enm | 5.0 | 0.135 |
| Tatoeba-test.gle-fra.gle.fra | 18.0 | 0.403 |
| Tatoeba-test.gle-glv.gle.glv | 16.9 | 0.377 |
| Tatoeba-test.gle-kur.gle.kur | 0.0 | 0.077 |
| Tatoeba-test.gle-lad.gle.lad | 2.4 | 0.328 |
| Tatoeba-test.gle-ron.gle.ron | 0.0 | 0.673 |
| Tatoeba-test.gle-rus.gle.rus | 2.5 | 0.139 |
| Tatoeba-test.gle-spa.gle.spa | 24.5 | 0.458 |
| Tatoeba-test.gle-yid.gle.yid | 13.3 | 0.324 |
| Tatoeba-test.glg-deu.glg.deu | 30.4 | 0.539 |
| Tatoeba-test.glg-ell.glg.ell | 30.2 | 0.448 |
| Tatoeba-test.glg-eng.glg.eng | 37.9 | 0.571 |
| Tatoeba-test.glg-fra.glg.fra | 45.8 | 0.627 |
| Tatoeba-test.glg-ita.glg.ita | 31.1 | 0.561 |
| Tatoeba-test.glg-nld.glg.nld | 36.2 | 0.573 |
| Tatoeba-test.glg-pol.glg.pol | 22.7 | 0.524 |
| Tatoeba-test.glg-por.glg.por | 47.4 | 0.674 |
| Tatoeba-test.glg-rus.glg.rus | 28.4 | 0.465 |
| Tatoeba-test.glg-spa.glg.spa | 53.2 | 0.704 |
| Tatoeba-test.glv-cym.glv.cym | 1.4 | 0.140 |
| Tatoeba-test.glv-eng.glv.eng | 3.2 | 0.104 |
| Tatoeba-test.glv-gle.glv.gle | 9.9 | 0.243 |
| Tatoeba-test.gos-afr.gos.afr | 6.2 | 0.269 |
| Tatoeba-test.gos-ang.gos.ang | 0.0 | 0.056 |
| Tatoeba-test.gos-ast.gos.ast | 6.6 | 0.107 |
| Tatoeba-test.gos-dan.gos.dan | 12.0 | 0.356 |
| Tatoeba-test.gos-deu.gos.deu | 15.7 | 0.384 |
| Tatoeba-test.gos-eng.gos.eng | 14.8 | 0.320 |
| Tatoeba-test.gos-enm.gos.enm | 4.1 | 0.292 |
| Tatoeba-test.gos-fao.gos.fao | 19.0 | 0.111 |
| Tatoeba-test.gos-fra.gos.fra | 8.4 | 0.321 |
| Tatoeba-test.gos-frr.gos.frr | 0.9 | 0.064 |
| Tatoeba-test.gos-fry.gos.fry | 13.5 | 0.361 |
| Tatoeba-test.gos-isl.gos.isl | 8.2 | 0.228 |
| Tatoeba-test.gos-ita.gos.ita | 31.9 | 0.610 |
| Tatoeba-test.gos-kur.gos.kur | 0.0 | 0.050 |
| Tatoeba-test.gos-lad.gos.lad | 0.5 | 0.010 |
| Tatoeba-test.gos-lat.gos.lat | 4.5 | 0.206 |
| Tatoeba-test.gos-ltz.gos.ltz | 4.2 | 0.220 |
| Tatoeba-test.gos-nds.gos.nds | 3.9 | 0.202 |
| Tatoeba-test.gos-nld.gos.nld | 16.8 | 0.389 |
| Tatoeba-test.gos-rus.gos.rus | 5.2 | 0.298 |
| Tatoeba-test.gos-spa.gos.spa | 24.7 | 0.406 |
| Tatoeba-test.gos-stq.gos.stq | 0.4 | 0.137 |
| Tatoeba-test.gos-swe.gos.swe | 16.8 | 0.310 |
| Tatoeba-test.gos-ukr.gos.ukr | 5.4 | 0.370 |
| Tatoeba-test.gos-yid.gos.yid | 4.3 | 0.170 |
| Tatoeba-test.got-deu.got.deu | 0.6 | 0.044 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.050 |
| Tatoeba-test.got-fra.got.fra | 0.2 | 0.064 |
| Tatoeba-test.got-nor.got.nor | 3.1 | 0.013 |
| Tatoeba-test.got-spa.got.spa | 0.2 | 0.050 |
| Tatoeba-test.grc-ces.grc.ces | 2.7 | 0.155 |
| Tatoeba-test.grc-deu.grc.deu | 4.7 | 0.198 |
| Tatoeba-test.grc-eng.grc.eng | 1.9 | 0.146 |
| Tatoeba-test.grc-fra.grc.fra | 12.8 | 0.234 |
| Tatoeba-test.grc-lat.grc.lat | 0.5 | 0.114 |
| Tatoeba-test.grc-por.grc.por | 0.8 | 0.163 |
| Tatoeba-test.grc-spa.grc.spa | 2.4 | 0.141 |
| Tatoeba-test.gsw-deu.gsw.deu | 12.6 | 0.393 |
| Tatoeba-test.gsw-eng.gsw.eng | 15.9 | 0.322 |
| Tatoeba-test.gsw-spa.gsw.spa | 19.0 | 0.308 |
| Tatoeba-test.guj-eng.guj.eng | 15.9 | 0.301 |
| Tatoeba-test.guj-spa.guj.spa | 14.7 | 0.250 |
| Tatoeba-test.hat-eng.hat.eng | 38.5 | 0.522 |
| Tatoeba-test.hat-fra.hat.fra | 17.6 | 0.424 |
| Tatoeba-test.hat-nld.hat.nld | 32.0 | 0.472 |
| Tatoeba-test.hat-spa.hat.spa | 31.2 | 0.496 |
| Tatoeba-test.hbs-bel.hbs.bel | 40.1 | 0.579 |
| Tatoeba-test.hbs-bul.hbs.bul | 100.0 | 1.000 |
| Tatoeba-test.hbs-ces.hbs.ces | 27.8 | 0.543 |
| Tatoeba-test.hbs-deu.hbs.deu | 32.9 | 0.545 |
| Tatoeba-test.hbs-eng.hbs.eng | 38.6 | 0.563 |
| Tatoeba-test.hbs-enm.hbs.enm | 2.3 | 0.299 |
| Tatoeba-test.hbs-fra.hbs.fra | 33.3 | 0.548 |
| Tatoeba-test.hbs-ita.hbs.ita | 37.9 | 0.602 |
| Tatoeba-test.hbs-lat.hbs.lat | 9.8 | 0.289 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 38.0 | 0.718 |
| Tatoeba-test.hbs-nor.hbs.nor | 31.8 | 0.528 |
| Tatoeba-test.hbs-pol.hbs.pol | 31.7 | 0.548 |
| Tatoeba-test.hbs-por.hbs.por | 28.1 | 0.484 |
| Tatoeba-test.hbs-rus.hbs.rus | 38.9 | 0.596 |
| Tatoeba-test.hbs-spa.hbs.spa | 38.6 | 0.589 |
| Tatoeba-test.hbs-swe.hbs.swe | 100.0 | 1.000 |
| Tatoeba-test.hbs-ukr.hbs.ukr | 36.0 | 0.557 |
| Tatoeba-test.hbs-urd.hbs.urd | 8.1 | 0.441 |
| Tatoeba-test.hif-eng.hif.eng | 8.9 | 0.439 |
| Tatoeba-test.hin-asm.hin.asm | 8.8 | 0.288 |
| Tatoeba-test.hin-deu.hin.deu | 26.1 | 0.414 |
| Tatoeba-test.hin-eng.hin.eng | 25.5 | 0.440 |
| Tatoeba-test.hin-fra.hin.fra | 30.1 | 0.449 |
| Tatoeba-test.hin-mar.hin.mar | 12.6 | 0.412 |
| Tatoeba-test.hin-nor.hin.nor | 9.9 | 0.416 |
| Tatoeba-test.hin-pol.hin.pol | 8.4 | 0.289 |
| Tatoeba-test.hin-rus.hin.rus | 21.2 | 0.395 |
| Tatoeba-test.hin-spa.hin.spa | 25.9 | 0.384 |
| Tatoeba-test.hin-swe.hin.swe | 100.0 | 1.000 |
| Tatoeba-test.hin-urd.hin.urd | 10.4 | 0.376 |
| Tatoeba-test.hsb-ces.hsb.ces | 18.1 | 0.373 |
| Tatoeba-test.hsb-deu.hsb.deu | 24.4 | 0.467 |
| Tatoeba-test.hsb-eng.hsb.eng | 42.9 | 0.583 |
| Tatoeba-test.hsb-spa.hsb.spa | 19.5 | 0.444 |
| Tatoeba-test.hye-deu.hye.deu | 11.6 | 0.323 |
| Tatoeba-test.hye-eng.hye.eng | 22.1 | 0.398 |
| Tatoeba-test.hye-fra.hye.fra | 32.1 | 0.386 |
| Tatoeba-test.hye-rus.hye.rus | 21.9 | 0.407 |
| Tatoeba-test.hye-spa.hye.spa | 29.3 | 0.476 |
| Tatoeba-test.isl-afr.isl.afr | 40.5 | 0.708 |
| Tatoeba-test.isl-ang.isl.ang | 0.0 | 0.034 |
| Tatoeba-test.isl-dan.isl.dan | 38.1 | 0.582 |
| Tatoeba-test.isl-deu.isl.deu | 31.8 | 0.511 |
| Tatoeba-test.isl-eng.isl.eng | 29.8 | 0.483 |
| Tatoeba-test.isl-enm.isl.enm | 39.8 | 0.336 |
| Tatoeba-test.isl-fao.isl.fao | 26.3 | 0.441 |
| Tatoeba-test.isl-fra.isl.fra | 27.3 | 0.469 |
| Tatoeba-test.isl-gos.isl.gos | 1.9 | 0.047 |
| Tatoeba-test.isl-ita.isl.ita | 28.9 | 0.501 |
| Tatoeba-test.isl-lat.isl.lat | 2.6 | 0.135 |
| Tatoeba-test.isl-lav.isl.lav | 59.6 | 0.740 |
| Tatoeba-test.isl-msa.isl.msa | 0.1 | 0.012 |
| Tatoeba-test.isl-nor.isl.nor | 40.2 | 0.566 |
| Tatoeba-test.isl-pol.isl.pol | 19.7 | 0.358 |
| Tatoeba-test.isl-por.isl.por | 17.4 | 0.465 |
| Tatoeba-test.isl-rus.isl.rus | 18.0 | 0.386 |
| Tatoeba-test.isl-spa.isl.spa | 30.7 | 0.496 |
| Tatoeba-test.isl-stq.isl.stq | 10.7 | 0.133 |
| Tatoeba-test.isl-swe.isl.swe | 38.1 | 0.539 |
| Tatoeba-test.ita-afr.ita.afr | 53.2 | 0.676 |
| Tatoeba-test.ita-ang.ita.ang | 3.8 | 0.125 |
| Tatoeba-test.ita-asm.ita.asm | 3.4 | 0.252 |
| Tatoeba-test.ita-bel.ita.bel | 24.2 | 0.460 |
| Tatoeba-test.ita-ben.ita.ben | 12.1 | 0.427 |
| Tatoeba-test.ita-bre.ita.bre | 4.7 | 0.287 |
| Tatoeba-test.ita-bul.ita.bul | 27.8 | 0.482 |
| Tatoeba-test.ita-cat.ita.cat | 40.6 | 0.608 |
| Tatoeba-test.ita-ces.ita.ces | 23.1 | 0.450 |
| Tatoeba-test.ita-cor.ita.cor | 0.8 | 0.060 |
| Tatoeba-test.ita-cym.ita.cym | 10.1 | 0.375 |
| Tatoeba-test.ita-dan.ita.dan | 38.9 | 0.577 |
| Tatoeba-test.ita-deu.ita.deu | 31.7 | 0.539 |
| Tatoeba-test.ita-egl.ita.egl | 0.2 | 0.061 |
| Tatoeba-test.ita-ell.ita.ell | 31.5 | 0.539 |
| Tatoeba-test.ita-eng.ita.eng | 47.4 | 0.633 |
| Tatoeba-test.ita-enm.ita.enm | 6.4 | 0.247 |
| Tatoeba-test.ita-fas.ita.fas | 4.2 | 0.236 |
| Tatoeba-test.ita-fra.ita.fra | 46.6 | 0.642 |
| Tatoeba-test.ita-fry.ita.fry | 20.0 | 0.409 |
| Tatoeba-test.ita-gla.ita.gla | 7.8 | 0.312 |
| Tatoeba-test.ita-glg.ita.glg | 36.3 | 0.577 |
| Tatoeba-test.ita-gos.ita.gos | 1.1 | 0.030 |
| Tatoeba-test.ita-hbs.ita.hbs | 39.4 | 0.595 |
| Tatoeba-test.ita-isl.ita.isl | 18.5 | 0.408 |
| Tatoeba-test.ita-kur.ita.kur | 1.9 | 0.160 |
| Tatoeba-test.ita-lad.ita.lad | 1.0 | 0.178 |
| Tatoeba-test.ita-lat.ita.lat | 7.1 | 0.320 |
| Tatoeba-test.ita-lav.ita.lav | 29.0 | 0.511 |
| Tatoeba-test.ita-lij.ita.lij | 0.2 | 0.107 |
| Tatoeba-test.ita-lit.ita.lit | 20.7 | 0.475 |
| Tatoeba-test.ita-ltz.ita.ltz | 20.6 | 0.373 |
| Tatoeba-test.ita-msa.ita.msa | 14.3 | 0.409 |
| Tatoeba-test.ita-nds.ita.nds | 13.3 | 0.378 |
| Tatoeba-test.ita-nld.ita.nld | 37.8 | 0.578 |
| Tatoeba-test.ita-nor.ita.nor | 35.7 | 0.578 |
| Tatoeba-test.ita-oci.ita.oci | 11.0 | 0.369 |
| Tatoeba-test.ita-orv.ita.orv | 1.2 | 0.010 |
| Tatoeba-test.ita-pms.ita.pms | 0.2 | 0.110 |
| Tatoeba-test.ita-pol.ita.pol | 25.9 | 0.507 |
| Tatoeba-test.ita-por.ita.por | 36.8 | 0.597 |
| Tatoeba-test.ita-ron.ita.ron | 34.3 | 0.574 |
| Tatoeba-test.ita-rus.ita.rus | 28.5 | 0.494 |
| Tatoeba-test.ita-slv.ita.slv | 11.7 | 0.364 |
| Tatoeba-test.ita-spa.ita.spa | 46.3 | 0.653 |
| Tatoeba-test.ita-sqi.ita.sqi | 21.9 | 0.418 |
| Tatoeba-test.ita-swe.ita.swe | 37.7 | 0.562 |
| Tatoeba-test.ita-ukr.ita.ukr | 33.1 | 0.538 |
| Tatoeba-test.ita-vec.ita.vec | 0.8 | 0.095 |
| Tatoeba-test.ita-yid.ita.yid | 10.3 | 0.280 |
| Tatoeba-test.jdt-eng.jdt.eng | 3.9 | 0.098 |
| Tatoeba-test.kok-eng.kok.eng | 5.0 | 0.217 |
| Tatoeba-test.ksh-deu.ksh.deu | 12.2 | 0.357 |
| Tatoeba-test.ksh-eng.ksh.eng | 4.1 | 0.237 |
| Tatoeba-test.ksh-enm.ksh.enm | 5.3 | 0.299 |
| Tatoeba-test.ksh-fra.ksh.fra | 15.3 | 0.322 |
| Tatoeba-test.ksh-gla.ksh.gla | 0.0 | 0.095 |
| Tatoeba-test.ksh-spa.ksh.spa | 11.3 | 0.272 |
| Tatoeba-test.kur-ang.kur.ang | 0.0 | 0.069 |
| Tatoeba-test.kur-bel.kur.bel | 35.4 | 0.540 |
| Tatoeba-test.kur-dan.kur.dan | 24.3 | 0.509 |
| Tatoeba-test.kur-deu.kur.deu | 12.0 | 0.226 |
| Tatoeba-test.kur-eng.kur.eng | 10.0 | 0.205 |
| Tatoeba-test.kur-enm.kur.enm | 5.5 | 0.048 |
| Tatoeba-test.kur-fra.kur.fra | 16.5 | 0.236 |
| Tatoeba-test.kur-gle.kur.gle | 7.6 | 0.081 |
| Tatoeba-test.kur-gos.kur.gos | 1.6 | 0.013 |
| Tatoeba-test.kur-ita.kur.ita | 11.4 | 0.362 |
| Tatoeba-test.kur-lad.kur.lad | 0.2 | 0.067 |
| Tatoeba-test.kur-lat.kur.lat | 6.1 | 0.240 |
| Tatoeba-test.kur-lld.kur.lld | 1.9 | 0.161 |
| Tatoeba-test.kur-nld.kur.nld | 3.3 | 0.155 |
| Tatoeba-test.kur-nor.kur.nor | 31.9 | 0.184 |
| Tatoeba-test.kur-pol.kur.pol | 5.0 | 0.230 |
| Tatoeba-test.kur-por.kur.por | 37.0 | 0.295 |
| Tatoeba-test.kur-rus.kur.rus | 1.3 | 0.184 |
| Tatoeba-test.kur-spa.kur.spa | 39.1 | 0.426 |
| Tatoeba-test.kur-swe.kur.swe | 4.3 | 0.206 |
| Tatoeba-test.kur-yid.kur.yid | 2.1 | 0.164 |
| Tatoeba-test.lad-ang.lad.ang | 1.4 | 0.046 |
| Tatoeba-test.lad-bel.lad.bel | 9.7 | 0.330 |
| Tatoeba-test.lad-bul.lad.bul | 35.4 | 0.529 |
| Tatoeba-test.lad-ces.lad.ces | 33.1 | 0.604 |
| Tatoeba-test.lad-dan.lad.dan | 15.4 | 0.325 |
| Tatoeba-test.lad-deu.lad.deu | 19.3 | 0.405 |
| Tatoeba-test.lad-eng.lad.eng | 23.1 | 0.421 |
| Tatoeba-test.lad-enm.lad.enm | 2.2 | 0.173 |
| Tatoeba-test.lad-fas.lad.fas | 5.2 | 0.194 |
| Tatoeba-test.lad-fra.lad.fra | 26.3 | 0.405 |
| Tatoeba-test.lad-gcf.lad.gcf | 0.0 | 0.170 |
| Tatoeba-test.lad-gle.lad.gle | 21.4 | 0.347 |
| Tatoeba-test.lad-gos.lad.gos | 1.2 | 0.058 |
| Tatoeba-test.lad-ita.lad.ita | 22.7 | 0.479 |
| Tatoeba-test.lad-kur.lad.kur | 2.4 | 0.190 |
| Tatoeba-test.lad-lat.lad.lat | 3.4 | 0.239 |
| Tatoeba-test.lad-ltz.lad.ltz | 45.5 | 0.580 |
| Tatoeba-test.lad-nds.lad.nds | 23.0 | 0.690 |
| Tatoeba-test.lad-nld.lad.nld | 33.5 | 0.449 |
| Tatoeba-test.lad-nor.lad.nor | 66.9 | 0.951 |
| Tatoeba-test.lad-pol.lad.pol | 0.0 | 0.076 |
| Tatoeba-test.lad-por.lad.por | 27.5 | 0.448 |
| Tatoeba-test.lad-ron.lad.ron | 78.3 | 0.693 |
| Tatoeba-test.lad-rus.lad.rus | 6.5 | 0.308 |
| Tatoeba-test.lad-sco.lad.sco | 0.0 | 0.179 |
| Tatoeba-test.lad-slv.lad.slv | 59.5 | 0.602 |
| Tatoeba-test.lad-spa.lad.spa | 37.0 | 0.553 |
| Tatoeba-test.lad-swe.lad.swe | 66.9 | 0.783 |
| Tatoeba-test.lad-ukr.lad.ukr | 8.1 | 0.282 |
| Tatoeba-test.lad-yid.lad.yid | 4.8 | 0.212 |
| Tatoeba-test.lah-eng.lah.eng | 5.0 | 0.237 |
| Tatoeba-test.lat-afr.lat.afr | 100.0 | 1.000 |
| Tatoeba-test.lat-ang.lat.ang | 0.9 | 0.068 |
| Tatoeba-test.lat-bel.lat.bel | 10.6 | 0.284 |
| Tatoeba-test.lat-bul.lat.bul | 27.5 | 0.481 |
| Tatoeba-test.lat-ces.lat.ces | 15.6 | 0.331 |
| Tatoeba-test.lat-cym.lat.cym | 2.9 | 0.203 |
| Tatoeba-test.lat-dan.lat.dan | 29.4 | 0.479 |
| Tatoeba-test.lat-deu.lat.deu | 19.9 | 0.391 |
| Tatoeba-test.lat-eng.lat.eng | 20.5 | 0.396 |
| Tatoeba-test.lat-enm.lat.enm | 1.0 | 0.082 |
| Tatoeba-test.lat-fas.lat.fas | 7.9 | 0.407 |
| Tatoeba-test.lat-fra.lat.fra | 9.3 | 0.286 |
| Tatoeba-test.lat-fry.lat.fry | 7.1 | 0.192 |
| Tatoeba-test.lat-gos.lat.gos | 3.6 | 0.150 |
| Tatoeba-test.lat-grc.lat.grc | 0.2 | 0.001 |
| Tatoeba-test.lat-hbs.lat.hbs | 15.1 | 0.322 |
| Tatoeba-test.lat-isl.lat.isl | 8.3 | 0.108 |
| Tatoeba-test.lat-ita.lat.ita | 20.7 | 0.415 |
| Tatoeba-test.lat-kur.lat.kur | 7.9 | 0.260 |
| Tatoeba-test.lat-lad.lat.lad | 0.2 | 0.087 |
| Tatoeba-test.lat-lit.lat.lit | 5.6 | 0.301 |
| Tatoeba-test.lat-ltz.lat.ltz | 10.2 | 0.352 |
| Tatoeba-test.lat-nld.lat.nld | 24.3 | 0.444 |
| Tatoeba-test.lat-nor.lat.nor | 14.5 | 0.338 |
| Tatoeba-test.lat-orv.lat.orv | 0.1 | 0.006 |
| Tatoeba-test.lat-pol.lat.pol | 21.8 | 0.412 |
| Tatoeba-test.lat-por.lat.por | 12.2 | 0.336 |
| Tatoeba-test.lat-ron.lat.ron | 12.7 | 0.343 |
| Tatoeba-test.lat-rus.lat.rus | 16.6 | 0.362 |
| Tatoeba-test.lat-sco.lat.sco | 3.2 | 0.215 |
| Tatoeba-test.lat-spa.lat.spa | 18.9 | 0.414 |
| Tatoeba-test.lat-swe.lat.swe | 53.4 | 0.708 |
| Tatoeba-test.lat-ukr.lat.ukr | 14.0 | 0.343 |
| Tatoeba-test.lat-yid.lat.yid | 2.1 | 0.182 |
| Tatoeba-test.lav-dan.lav.dan | 100.0 | 1.000 |
| Tatoeba-test.lav-deu.lav.deu | 34.5 | 0.540 |
| Tatoeba-test.lav-eng.lav.eng | 33.6 | 0.520 |
| Tatoeba-test.lav-fra.lav.fra | 40.5 | 0.598 |
| Tatoeba-test.lav-isl.lav.isl | 72.7 | 0.770 |
| Tatoeba-test.lav-ita.lav.ita | 30.5 | 0.570 |
| Tatoeba-test.lav-lav.lav.lav | 5.7 | 0.362 |
| Tatoeba-test.lav-lit.lav.lit | 23.5 | 0.504 |
| Tatoeba-test.lav-mkd.lav.mkd | 13.7 | 0.550 |
| Tatoeba-test.lav-pol.lav.pol | 37.6 | 0.551 |
| Tatoeba-test.lav-rus.lav.rus | 32.5 | 0.517 |
| Tatoeba-test.lav-slv.lav.slv | 8.6 | 0.483 |
| Tatoeba-test.lav-spa.lav.spa | 26.6 | 0.511 |
| Tatoeba-test.lav-swe.lav.swe | 95.1 | 0.958 |
| Tatoeba-test.lav-ukr.lav.ukr | 9.0 | 0.488 |
| Tatoeba-test.lij-eng.lij.eng | 6.8 | 0.251 |
| Tatoeba-test.lij-fra.lij.fra | 12.2 | 0.329 |
| Tatoeba-test.lij-ita.lij.ita | 10.4 | 0.366 |
| Tatoeba-test.lit-deu.lit.deu | 25.7 | 0.472 |
| Tatoeba-test.lit-eng.lit.eng | 37.5 | 0.551 |
| Tatoeba-test.lit-fra.lit.fra | 32.1 | 0.489 |
| Tatoeba-test.lit-ita.lit.ita | 22.3 | 0.460 |
| Tatoeba-test.lit-lat.lit.lat | 7.4 | 0.195 |
| Tatoeba-test.lit-lav.lit.lav | 22.6 | 0.378 |
| Tatoeba-test.lit-mkd.lit.mkd | 9.7 | 0.282 |
| Tatoeba-test.lit-msa.lit.msa | 7.2 | 0.374 |
| Tatoeba-test.lit-pol.lit.pol | 30.9 | 0.529 |
| Tatoeba-test.lit-por.lit.por | 25.0 | 0.439 |
| Tatoeba-test.lit-rus.lit.rus | 30.6 | 0.504 |
| Tatoeba-test.lit-slv.lit.slv | 8.6 | 0.331 |
| Tatoeba-test.lit-spa.lit.spa | 32.9 | 0.516 |
| Tatoeba-test.lit-ukr.lit.ukr | 19.6 | 0.371 |
| Tatoeba-test.lit-yid.lit.yid | 6.5 | 0.360 |
| Tatoeba-test.lld-eng.lld.eng | 13.7 | 0.310 |
| Tatoeba-test.lld-fra.lld.fra | 13.1 | 0.368 |
| Tatoeba-test.lld-kur.lld.kur | 3.4 | 0.064 |
| Tatoeba-test.lld-spa.lld.spa | 9.3 | 0.351 |
| Tatoeba-test.lmo-eng.lmo.eng | 22.3 | 0.323 |
| Tatoeba-test.lmo-fra.lmo.fra | 10.9 | 0.333 |
| Tatoeba-test.ltz-afr.ltz.afr | 49.5 | 0.589 |
| Tatoeba-test.ltz-ang.ltz.ang | 0.0 | 0.051 |
| Tatoeba-test.ltz-ces.ltz.ces | 9.7 | 0.353 |
| Tatoeba-test.ltz-dan.ltz.dan | 65.1 | 0.463 |
| Tatoeba-test.ltz-deu.ltz.deu | 35.6 | 0.533 |
| Tatoeba-test.ltz-eng.ltz.eng | 33.7 | 0.448 |
| Tatoeba-test.ltz-fra.ltz.fra | 24.3 | 0.451 |
| Tatoeba-test.ltz-fry.ltz.fry | 23.4 | 0.621 |
| Tatoeba-test.ltz-gos.ltz.gos | 0.5 | 0.104 |
| Tatoeba-test.ltz-ita.ltz.ita | 14.2 | 0.412 |
| Tatoeba-test.ltz-lad.ltz.lad | 7.8 | 0.179 |
| Tatoeba-test.ltz-lat.ltz.lat | 7.6 | 0.106 |
| Tatoeba-test.ltz-nld.ltz.nld | 32.4 | 0.488 |
| Tatoeba-test.ltz-nor.ltz.nor | 27.8 | 0.599 |
| Tatoeba-test.ltz-por.ltz.por | 12.7 | 0.319 |
| Tatoeba-test.ltz-rus.ltz.rus | 18.0 | 0.392 |
| Tatoeba-test.ltz-spa.ltz.spa | 15.6 | 0.458 |
| Tatoeba-test.ltz-stq.ltz.stq | 0.6 | 0.065 |
| Tatoeba-test.ltz-swe.ltz.swe | 32.5 | 0.403 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.4 | 0.236 |
| Tatoeba-test.mai-eng.mai.eng | 49.8 | 0.429 |
| Tatoeba-test.mai-spa.mai.spa | 18.6 | 0.460 |
| Tatoeba-test.mar-dan.mar.dan | 5.1 | 0.230 |
| Tatoeba-test.mar-deu.mar.deu | 14.2 | 0.379 |
| Tatoeba-test.mar-eng.mar.eng | 20.0 | 0.422 |
| Tatoeba-test.mar-fra.mar.fra | 40.7 | 0.470 |
| Tatoeba-test.mar-hin.mar.hin | 7.3 | 0.407 |
| Tatoeba-test.mar-rus.mar.rus | 35.4 | 0.638 |
| Tatoeba-test.mfe-eng.mfe.eng | 49.0 | 0.615 |
| Tatoeba-test.mkd-afr.mkd.afr | 42.7 | 0.655 |
| Tatoeba-test.mkd-bel.mkd.bel | 9.7 | 0.362 |
| Tatoeba-test.mkd-bul.mkd.bul | 61.6 | 0.819 |
| Tatoeba-test.mkd-ces.mkd.ces | 15.0 | 0.506 |
| Tatoeba-test.mkd-deu.mkd.deu | 31.0 | 0.548 |
| Tatoeba-test.mkd-eng.mkd.eng | 35.8 | 0.524 |
| Tatoeba-test.mkd-fra.mkd.fra | 30.2 | 0.486 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 32.5 | 0.589 |
| Tatoeba-test.mkd-lav.mkd.lav | 16.6 | 0.557 |
| Tatoeba-test.mkd-lit.mkd.lit | 11.6 | 0.395 |
| Tatoeba-test.mkd-nld.mkd.nld | 42.7 | 0.680 |
| Tatoeba-test.mkd-pol.mkd.pol | 53.7 | 0.833 |
| Tatoeba-test.mkd-por.mkd.por | 10.1 | 0.492 |
| Tatoeba-test.mkd-ron.mkd.ron | 9.7 | 0.196 |
| Tatoeba-test.mkd-rus.mkd.rus | 24.7 | 0.727 |
| Tatoeba-test.mkd-spa.mkd.spa | 43.2 | 0.601 |
| Tatoeba-test.mkd-swe.mkd.swe | 23.6 | 0.361 |
| Tatoeba-test.mkd-ukr.mkd.ukr | 42.7 | 0.864 |
| Tatoeba-test.msa-afr.msa.afr | 3.4 | 0.323 |
| Tatoeba-test.msa-bel.msa.bel | 17.1 | 0.418 |
| Tatoeba-test.msa-bre.msa.bre | 1.8 | 0.199 |
| Tatoeba-test.msa-bul.msa.bul | 11.9 | 0.258 |
| Tatoeba-test.msa-ces.msa.ces | 3.4 | 0.115 |
| Tatoeba-test.msa-cym.msa.cym | 0.0 | 0.000 |
| Tatoeba-test.msa-deu.msa.deu | 23.5 | 0.470 |
| Tatoeba-test.msa-ell.msa.ell | 19.7 | 0.490 |
| Tatoeba-test.msa-eng.msa.eng | 27.8 | 0.472 |
| Tatoeba-test.msa-fao.msa.fao | 2.0 | 0.232 |
| Tatoeba-test.msa-fas.msa.fas | 5.9 | 0.241 |
| Tatoeba-test.msa-fra.msa.fra | 25.9 | 0.465 |
| Tatoeba-test.msa-fry.msa.fry | 1.7 | 0.195 |
| Tatoeba-test.msa-isl.msa.isl | 3.4 | 0.228 |
| Tatoeba-test.msa-ita.msa.ita | 23.4 | 0.481 |
| Tatoeba-test.msa-lit.msa.lit | 11.5 | 0.304 |
| Tatoeba-test.msa-msa.msa.msa | 5.8 | 0.243 |
| Tatoeba-test.msa-nld.msa.nld | 20.9 | 0.442 |
| Tatoeba-test.msa-nor.msa.nor | 14.8 | 0.431 |
| Tatoeba-test.msa-pap.msa.pap | 83.8 | 0.946 |
| Tatoeba-test.msa-pol.msa.pol | 9.1 | 0.349 |
| Tatoeba-test.msa-por.msa.por | 15.4 | 0.385 |
| Tatoeba-test.msa-ron.msa.ron | 3.4 | 0.195 |
| Tatoeba-test.msa-rus.msa.rus | 18.8 | 0.401 |
| Tatoeba-test.msa-san.msa.san | 0.0 | 0.056 |
| Tatoeba-test.msa-spa.msa.spa | 22.6 | 0.451 |
| Tatoeba-test.msa-ukr.msa.ukr | 5.7 | 0.267 |
| Tatoeba-test.msa-urd.msa.urd | 8.0 | 0.102 |
| Tatoeba-test.multi.multi | 30.8 | 0.509 |
| Tatoeba-test.mwl-eng.mwl.eng | 22.8 | 0.416 |
| Tatoeba-test.mwl-enm.mwl.enm | 7.0 | 0.321 |
| Tatoeba-test.mwl-por.mwl.por | 35.4 | 0.561 |
| Tatoeba-test.nds-ast.nds.ast | 42.7 | 0.835 |
| Tatoeba-test.nds-ces.nds.ces | 38.3 | 0.491 |
| Tatoeba-test.nds-dan.nds.dan | 18.5 | 0.399 |
| Tatoeba-test.nds-deu.nds.deu | 32.6 | 0.552 |
| Tatoeba-test.nds-ell.nds.ell | 18.1 | 0.426 |
| Tatoeba-test.nds-eng.nds.eng | 28.9 | 0.480 |
| Tatoeba-test.nds-enm.nds.enm | 6.9 | 0.198 |
| Tatoeba-test.nds-fas.nds.fas | 6.6 | 0.187 |
| Tatoeba-test.nds-fra.nds.fra | 31.9 | 0.498 |
| Tatoeba-test.nds-frr.nds.frr | 0.5 | 0.000 |
| Tatoeba-test.nds-fry.nds.fry | 0.0 | 0.023 |
| Tatoeba-test.nds-gos.nds.gos | 1.2 | 0.148 |
| Tatoeba-test.nds-ita.nds.ita | 28.5 | 0.505 |
| Tatoeba-test.nds-lad.nds.lad | 7.8 | 0.164 |
| Tatoeba-test.nds-nld.nds.nld | 38.2 | 0.584 |
| Tatoeba-test.nds-nor.nds.nor | 42.8 | 0.612 |
| Tatoeba-test.nds-pol.nds.pol | 15.3 | 0.405 |
| Tatoeba-test.nds-por.nds.por | 26.0 | 0.447 |
| Tatoeba-test.nds-ron.nds.ron | 0.0 | 0.353 |
| Tatoeba-test.nds-rus.nds.rus | 24.3 | 0.440 |
| Tatoeba-test.nds-spa.nds.spa | 31.7 | 0.527 |
| Tatoeba-test.nds-swg.nds.swg | 0.1 | 0.080 |
| Tatoeba-test.nds-ukr.nds.ukr | 20.1 | 0.464 |
| Tatoeba-test.nds-yid.nds.yid | 42.8 | 0.365 |
| Tatoeba-test.nep-eng.nep.eng | 2.1 | 0.161 |
| Tatoeba-test.nld-afr.nld.afr | 50.1 | 0.670 |
| Tatoeba-test.nld-ast.nld.ast | 42.7 | 0.835 |
| Tatoeba-test.nld-bel.nld.bel | 17.5 | 0.410 |
| Tatoeba-test.nld-bre.nld.bre | 3.2 | 0.189 |
| Tatoeba-test.nld-bul.nld.bul | 28.7 | 0.468 |
| Tatoeba-test.nld-cat.nld.cat | 31.9 | 0.546 |
| Tatoeba-test.nld-ces.nld.ces | 24.4 | 0.504 |
| Tatoeba-test.nld-cor.nld.cor | 0.6 | 0.048 |
| Tatoeba-test.nld-dan.nld.dan | 49.1 | 0.660 |
| Tatoeba-test.nld-deu.nld.deu | 38.3 | 0.589 |
| Tatoeba-test.nld-dsb.nld.dsb | 0.2 | 0.084 |
| Tatoeba-test.nld-ell.nld.ell | 35.3 | 0.528 |
| Tatoeba-test.nld-eng.nld.eng | 42.4 | 0.602 |
| Tatoeba-test.nld-enm.nld.enm | 6.1 | 0.269 |
| Tatoeba-test.nld-fas.nld.fas | 18.6 | 0.459 |
| Tatoeba-test.nld-fra.nld.fra | 35.7 | 0.549 |
| Tatoeba-test.nld-frr.nld.frr | 2.8 | 0.099 |
| Tatoeba-test.nld-fry.nld.fry | 19.2 | 0.438 |
| Tatoeba-test.nld-glg.nld.glg | 35.0 | 0.576 |
| Tatoeba-test.nld-gos.nld.gos | 0.5 | 0.129 |
| Tatoeba-test.nld-hat.nld.hat | 26.8 | 0.418 |
| Tatoeba-test.nld-ita.nld.ita | 35.3 | 0.580 |
| Tatoeba-test.nld-kur.nld.kur | 4.2 | 0.147 |
| Tatoeba-test.nld-lad.nld.lad | 0.7 | 0.101 |
| Tatoeba-test.nld-lat.nld.lat | 6.7 | 0.314 |
| Tatoeba-test.nld-ltz.nld.ltz | 17.6 | 0.384 |
| Tatoeba-test.nld-mkd.nld.mkd | 0.0 | 0.238 |
| Tatoeba-test.nld-msa.nld.msa | 3.6 | 0.210 |
| Tatoeba-test.nld-nds.nld.nds | 15.9 | 0.405 |
| Tatoeba-test.nld-nor.nld.nor | 42.4 | 0.618 |
| Tatoeba-test.nld-oci.nld.oci | 9.0 | 0.306 |
| Tatoeba-test.nld-pap.nld.pap | 38.9 | 0.531 |
| Tatoeba-test.nld-pol.nld.pol | 25.8 | 0.498 |
| Tatoeba-test.nld-por.nld.por | 31.7 | 0.535 |
| Tatoeba-test.nld-ron.nld.ron | 26.6 | 0.495 |
| Tatoeba-test.nld-rus.nld.rus | 30.0 | 0.512 |
| Tatoeba-test.nld-sco.nld.sco | 4.3 | 0.299 |
| Tatoeba-test.nld-spa.nld.spa | 35.0 | 0.560 |
| Tatoeba-test.nld-stq.nld.stq | 1.6 | 0.201 |
| Tatoeba-test.nld-swe.nld.swe | 72.2 | 0.801 |
| Tatoeba-test.nld-swg.nld.swg | 5.0 | 0.129 |
| Tatoeba-test.nld-ukr.nld.ukr | 26.2 | 0.481 |
| Tatoeba-test.nld-wln.nld.wln | 3.5 | 0.133 |
| Tatoeba-test.nld-yid.nld.yid | 11.5 | 0.293 |
| Tatoeba-test.non-eng.non.eng | 30.3 | 0.471 |
| Tatoeba-test.non-fra.non.fra | 90.1 | 0.839 |
| Tatoeba-test.nor-afr.nor.afr | 50.0 | 0.638 |
| Tatoeba-test.nor-bel.nor.bel | 42.2 | 0.467 |
| Tatoeba-test.nor-bre.nor.bre | 3.2 | 0.188 |
| Tatoeba-test.nor-bul.nor.bul | 35.4 | 0.529 |
| Tatoeba-test.nor-ces.nor.ces | 38.0 | 0.627 |
| Tatoeba-test.nor-cor.nor.cor | 3.2 | 0.072 |
| Tatoeba-test.nor-cym.nor.cym | 14.7 | 0.465 |
| Tatoeba-test.nor-dan.nor.dan | 59.0 | 0.757 |
| Tatoeba-test.nor-deu.nor.deu | 32.4 | 0.560 |
| Tatoeba-test.nor-ell.nor.ell | 29.9 | 0.507 |
| Tatoeba-test.nor-eng.nor.eng | 40.8 | 0.585 |
| Tatoeba-test.nor-enm.nor.enm | 4.2 | 0.303 |
| Tatoeba-test.nor-fao.nor.fao | 10.0 | 0.345 |
| Tatoeba-test.nor-fra.nor.fra | 38.4 | 0.572 |
| Tatoeba-test.nor-fry.nor.fry | 18.7 | 0.375 |
| Tatoeba-test.nor-got.nor.got | 10.7 | 0.015 |
| Tatoeba-test.nor-hbs.nor.hbs | 21.7 | 0.465 |
| Tatoeba-test.nor-hin.nor.hin | 14.8 | 0.307 |
| Tatoeba-test.nor-isl.nor.isl | 23.2 | 0.445 |
| Tatoeba-test.nor-ita.nor.ita | 35.2 | 0.594 |
| Tatoeba-test.nor-kur.nor.kur | 10.7 | 0.037 |
| Tatoeba-test.nor-lad.nor.lad | 6.6 | 0.370 |
| Tatoeba-test.nor-lat.nor.lat | 3.6 | 0.261 |
| Tatoeba-test.nor-ltz.nor.ltz | 12.2 | 0.404 |
| Tatoeba-test.nor-msa.nor.msa | 8.0 | 0.442 |
| Tatoeba-test.nor-nds.nor.nds | 20.3 | 0.466 |
| Tatoeba-test.nor-nld.nor.nld | 39.1 | 0.598 |
| Tatoeba-test.nor-nor.nor.nor | 49.0 | 0.698 |
| Tatoeba-test.nor-pol.nor.pol | 26.3 | 0.515 |
| Tatoeba-test.nor-por.nor.por | 31.0 | 0.543 |
| Tatoeba-test.nor-ron.nor.ron | 28.0 | 0.475 |
| Tatoeba-test.nor-rus.nor.rus | 28.1 | 0.513 |
| Tatoeba-test.nor-slv.nor.slv | 1.2 | 0.193 |
| Tatoeba-test.nor-spa.nor.spa | 38.2 | 0.598 |
| Tatoeba-test.nor-swe.nor.swe | 58.8 | 0.741 |
| Tatoeba-test.nor-ukr.nor.ukr | 29.1 | 0.515 |
| Tatoeba-test.nor-yid.nor.yid | 42.6 | 0.473 |
| Tatoeba-test.oci-deu.oci.deu | 11.2 | 0.346 |
| Tatoeba-test.oci-eng.oci.eng | 13.4 | 0.331 |
| Tatoeba-test.oci-enm.oci.enm | 5.3 | 0.206 |
| Tatoeba-test.oci-fra.oci.fra | 19.6 | 0.423 |
| Tatoeba-test.oci-ita.oci.ita | 24.5 | 0.493 |
| Tatoeba-test.oci-nld.oci.nld | 22.5 | 0.408 |
| Tatoeba-test.oci-pol.oci.pol | 8.8 | 0.322 |
| Tatoeba-test.oci-rus.oci.rus | 16.4 | 0.387 |
| Tatoeba-test.oci-spa.oci.spa | 20.4 | 0.442 |
| Tatoeba-test.oci-yid.oci.yid | 66.9 | 0.968 |
| Tatoeba-test.ori-eng.ori.eng | 3.9 | 0.168 |
| Tatoeba-test.ori-rus.ori.rus | 9.1 | 0.175 |
| Tatoeba-test.orv-deu.orv.deu | 5.8 | 0.256 |
| Tatoeba-test.orv-eng.orv.eng | 8.4 | 0.243 |
| Tatoeba-test.orv-fra.orv.fra | 8.9 | 0.244 |
| Tatoeba-test.orv-ita.orv.ita | 8.1 | 0.297 |
| Tatoeba-test.orv-lat.orv.lat | 1.2 | 0.207 |
| Tatoeba-test.orv-pol.orv.pol | 11.6 | 0.338 |
| Tatoeba-test.orv-rus.orv.rus | 8.2 | 0.234 |
| Tatoeba-test.orv-spa.orv.spa | 7.8 | 0.331 |
| Tatoeba-test.orv-ukr.orv.ukr | 6.4 | 0.217 |
| Tatoeba-test.oss-eng.oss.eng | 5.8 | 0.230 |
| Tatoeba-test.oss-fra.oss.fra | 10.8 | 0.279 |
| Tatoeba-test.oss-rus.oss.rus | 6.0 | 0.225 |
| Tatoeba-test.pan-eng.pan.eng | 6.1 | 0.256 |
| Tatoeba-test.pap-ell.pap.ell | 0.0 | 0.626 |
| Tatoeba-test.pap-eng.pap.eng | 45.7 | 0.586 |
| Tatoeba-test.pap-fra.pap.fra | 43.9 | 0.589 |
| Tatoeba-test.pap-msa.pap.msa | 0.0 | 0.347 |
| Tatoeba-test.pap-nld.pap.nld | 41.9 | 0.587 |
| Tatoeba-test.pcd-fra.pcd.fra | 14.4 | 0.365 |
| Tatoeba-test.pcd-spa.pcd.spa | 5.8 | 0.274 |
| Tatoeba-test.pdc-deu.pdc.deu | 33.0 | 0.474 |
| Tatoeba-test.pdc-eng.pdc.eng | 36.1 | 0.479 |
| Tatoeba-test.pms-cos.pms.cos | 0.7 | 0.026 |
| Tatoeba-test.pms-deu.pms.deu | 13.1 | 0.310 |
| Tatoeba-test.pms-eng.pms.eng | 8.8 | 0.296 |
| Tatoeba-test.pms-fra.pms.fra | 13.0 | 0.309 |
| Tatoeba-test.pms-ita.pms.ita | 10.0 | 0.327 |
| Tatoeba-test.pms-pol.pms.pol | 15.2 | 0.304 |
| Tatoeba-test.pms-spa.pms.spa | 10.4 | 0.352 |
| Tatoeba-test.pol-afr.pol.afr | 40.2 | 0.589 |
| Tatoeba-test.pol-bel.pol.bel | 24.8 | 0.503 |
| Tatoeba-test.pol-bul.pol.bul | 29.4 | 0.508 |
| Tatoeba-test.pol-cat.pol.cat | 20.3 | 0.416 |
| Tatoeba-test.pol-ces.pol.ces | 28.0 | 0.489 |
| Tatoeba-test.pol-cor.pol.cor | 1.3 | 0.052 |
| Tatoeba-test.pol-cym.pol.cym | 7.0 | 0.347 |
| Tatoeba-test.pol-dan.pol.dan | 37.0 | 0.551 |
| Tatoeba-test.pol-deu.pol.deu | 29.1 | 0.508 |
| Tatoeba-test.pol-dsb.pol.dsb | 0.8 | 0.070 |
| Tatoeba-test.pol-ell.pol.ell | 32.3 | 0.519 |
| Tatoeba-test.pol-eng.pol.eng | 34.1 | 0.531 |
| Tatoeba-test.pol-fao.pol.fao | 1.2 | 0.234 |
| Tatoeba-test.pol-fas.pol.fas | 6.5 | 0.208 |
| Tatoeba-test.pol-fra.pol.fra | 30.8 | 0.510 |
| Tatoeba-test.pol-fry.pol.fry | 7.2 | 0.287 |
| Tatoeba-test.pol-gla.pol.gla | 14.6 | 0.301 |
| Tatoeba-test.pol-glg.pol.glg | 18.4 | 0.498 |
| Tatoeba-test.pol-hbs.pol.hbs | 31.8 | 0.546 |
| Tatoeba-test.pol-hin.pol.hin | 3.5 | 0.193 |
| Tatoeba-test.pol-isl.pol.isl | 11.4 | 0.336 |
| Tatoeba-test.pol-ita.pol.ita | 28.5 | 0.522 |
| Tatoeba-test.pol-kur.pol.kur | 2.6 | 0.134 |
| Tatoeba-test.pol-lad.pol.lad | 16.0 | 0.265 |
| Tatoeba-test.pol-lat.pol.lat | 7.2 | 0.311 |
| Tatoeba-test.pol-lav.pol.lav | 22.9 | 0.450 |
| Tatoeba-test.pol-lit.pol.lit | 21.2 | 0.493 |
| Tatoeba-test.pol-mkd.pol.mkd | 38.0 | 0.718 |
| Tatoeba-test.pol-msa.pol.msa | 2.2 | 0.173 |
| Tatoeba-test.pol-nds.pol.nds | 14.4 | 0.370 |
| Tatoeba-test.pol-nld.pol.nld | 30.6 | 0.501 |
| Tatoeba-test.pol-nor.pol.nor | 33.3 | 0.536 |
| Tatoeba-test.pol-oci.pol.oci | 4.0 | 0.282 |
| Tatoeba-test.pol-orv.pol.orv | 0.4 | 0.005 |
| Tatoeba-test.pol-pms.pol.pms | 1.3 | 0.032 |
| Tatoeba-test.pol-por.pol.por | 25.9 | 0.491 |
| Tatoeba-test.pol-prg.pol.prg | 0.0 | 0.083 |
| Tatoeba-test.pol-ron.pol.ron | 26.5 | 0.487 |
| Tatoeba-test.pol-rus.pol.rus | 34.7 | 0.550 |
| Tatoeba-test.pol-slv.pol.slv | 7.4 | 0.256 |
| Tatoeba-test.pol-spa.pol.spa | 30.7 | 0.516 |
| Tatoeba-test.pol-swe.pol.swe | 35.0 | 0.530 |
| Tatoeba-test.pol-ukr.pol.ukr | 32.8 | 0.538 |
| Tatoeba-test.pol-urd.pol.urd | 5.6 | 0.381 |
| Tatoeba-test.pol-yid.pol.yid | 4.8 | 0.146 |
| Tatoeba-test.por-afr.por.afr | 48.1 | 0.653 |
| Tatoeba-test.por-ang.por.ang | 8.4 | 0.213 |
| Tatoeba-test.por-ast.por.ast | 42.7 | 0.835 |
| Tatoeba-test.por-bel.por.bel | 9.7 | 0.539 |
| Tatoeba-test.por-bul.por.bul | 41.5 | 0.569 |
| Tatoeba-test.por-cat.por.cat | 36.9 | 0.612 |
| Tatoeba-test.por-ces.por.ces | 29.0 | 0.526 |
| Tatoeba-test.por-cor.por.cor | 0.8 | 0.049 |
| Tatoeba-test.por-dan.por.dan | 51.4 | 0.668 |
| Tatoeba-test.por-deu.por.deu | 30.8 | 0.532 |
| Tatoeba-test.por-ell.por.ell | 33.8 | 0.556 |
| Tatoeba-test.por-eng.por.eng | 44.5 | 0.622 |
| Tatoeba-test.por-enm.por.enm | 10.7 | 0.190 |
| Tatoeba-test.por-fas.por.fas | 4.5 | 0.273 |
| Tatoeba-test.por-fra.por.fra | 43.0 | 0.625 |
| Tatoeba-test.por-fry.por.fry | 8.9 | 0.365 |
| Tatoeba-test.por-gcf.por.gcf | 16.0 | 0.079 |
| Tatoeba-test.por-gla.por.gla | 12.1 | 0.315 |
| Tatoeba-test.por-glg.por.glg | 49.2 | 0.700 |
| Tatoeba-test.por-grc.por.grc | 0.1 | 0.004 |
| Tatoeba-test.por-hbs.por.hbs | 39.2 | 0.575 |
| Tatoeba-test.por-isl.por.isl | 15.5 | 0.387 |
| Tatoeba-test.por-ita.por.ita | 39.9 | 0.637 |
| Tatoeba-test.por-kur.por.kur | 3.0 | 0.133 |
| Tatoeba-test.por-lad.por.lad | 0.6 | 0.172 |
| Tatoeba-test.por-lat.por.lat | 5.4 | 0.325 |
| Tatoeba-test.por-lit.por.lit | 18.8 | 0.418 |
| Tatoeba-test.por-ltz.por.ltz | 16.8 | 0.569 |
| Tatoeba-test.por-mkd.por.mkd | 27.3 | 0.571 |
| Tatoeba-test.por-msa.por.msa | 7.6 | 0.327 |
| Tatoeba-test.por-mwl.por.mwl | 30.5 | 0.559 |
| Tatoeba-test.por-nds.por.nds | 14.2 | 0.370 |
| Tatoeba-test.por-nld.por.nld | 35.6 | 0.558 |
| Tatoeba-test.por-nor.por.nor | 38.0 | 0.587 |
| Tatoeba-test.por-pol.por.pol | 25.5 | 0.510 |
| Tatoeba-test.por-roh.por.roh | 5.5 | 0.058 |
| Tatoeba-test.por-ron.por.ron | 32.0 | 0.557 |
| Tatoeba-test.por-rus.por.rus | 26.8 | 0.493 |
| Tatoeba-test.por-spa.por.spa | 48.7 | 0.686 |
| Tatoeba-test.por-swe.por.swe | 43.4 | 0.612 |
| Tatoeba-test.por-ukr.por.ukr | 27.5 | 0.500 |
| Tatoeba-test.por-yid.por.yid | 9.3 | 0.293 |
| Tatoeba-test.prg-deu.prg.deu | 2.2 | 0.183 |
| Tatoeba-test.prg-eng.prg.eng | 1.3 | 0.179 |
| Tatoeba-test.prg-fra.prg.fra | 2.3 | 0.183 |
| Tatoeba-test.prg-pol.prg.pol | 0.5 | 0.173 |
| Tatoeba-test.prg-spa.prg.spa | 3.4 | 0.200 |
| Tatoeba-test.pus-eng.pus.eng | 1.6 | 0.166 |
| Tatoeba-test.roh-deu.roh.deu | 8.3 | 0.311 |
| Tatoeba-test.roh-eng.roh.eng | 9.5 | 0.361 |
| Tatoeba-test.roh-fra.roh.fra | 8.8 | 0.415 |
| Tatoeba-test.roh-por.roh.por | 21.4 | 0.347 |
| Tatoeba-test.roh-spa.roh.spa | 13.3 | 0.434 |
| Tatoeba-test.rom-deu.rom.deu | 2.9 | 0.204 |
| Tatoeba-test.rom-eng.rom.eng | 5.3 | 0.243 |
| Tatoeba-test.rom-fra.rom.fra | 6.5 | 0.194 |
| Tatoeba-test.ron-afr.ron.afr | 30.2 | 0.667 |
| Tatoeba-test.ron-bul.ron.bul | 35.4 | 0.493 |
| Tatoeba-test.ron-cat.ron.cat | 23.6 | 0.542 |
| Tatoeba-test.ron-ces.ron.ces | 10.6 | 0.344 |
| Tatoeba-test.ron-dan.ron.dan | 12.7 | 0.652 |
| Tatoeba-test.ron-deu.ron.deu | 32.1 | 0.524 |
| Tatoeba-test.ron-eng.ron.eng | 38.4 | 0.566 |
| Tatoeba-test.ron-enm.ron.enm | 5.3 | 0.351 |
| Tatoeba-test.ron-fas.ron.fas | 7.3 | 0.338 |
| Tatoeba-test.ron-fra.ron.fra | 38.0 | 0.571 |
| Tatoeba-test.ron-gle.ron.gle | 10.7 | 0.116 |
| Tatoeba-test.ron-ita.ron.ita | 36.2 | 0.587 |
| Tatoeba-test.ron-lad.ron.lad | 2.4 | 0.233 |
| Tatoeba-test.ron-lat.ron.lat | 6.5 | 0.368 |
| Tatoeba-test.ron-mkd.ron.mkd | 27.5 | 0.484 |
| Tatoeba-test.ron-msa.ron.msa | 0.8 | 0.082 |
| Tatoeba-test.ron-nds.ron.nds | 9.7 | 0.168 |
| Tatoeba-test.ron-nld.ron.nld | 32.5 | 0.522 |
| Tatoeba-test.ron-nor.ron.nor | 45.2 | 0.656 |
| Tatoeba-test.ron-pol.ron.pol | 32.2 | 0.554 |
| Tatoeba-test.ron-por.ron.por | 33.6 | 0.577 |
| Tatoeba-test.ron-rus.ron.rus | 33.3 | 0.536 |
| Tatoeba-test.ron-slv.ron.slv | 19.0 | 0.113 |
| Tatoeba-test.ron-spa.ron.spa | 40.8 | 0.605 |
| Tatoeba-test.ron-swe.ron.swe | 12.7 | 0.288 |
| Tatoeba-test.ron-yid.ron.yid | 19.7 | 0.285 |
| Tatoeba-test.rue-eng.rue.eng | 18.7 | 0.359 |
| Tatoeba-test.rue-spa.rue.spa | 30.1 | 0.455 |
| Tatoeba-test.rus-afr.rus.afr | 34.7 | 0.540 |
| Tatoeba-test.rus-ang.rus.ang | 0.0 | 0.042 |
| Tatoeba-test.rus-ast.rus.ast | 42.7 | 0.835 |
| Tatoeba-test.rus-bel.rus.bel | 35.0 | 0.587 |
| Tatoeba-test.rus-bul.rus.bul | 30.8 | 0.534 |
| Tatoeba-test.rus-cat.rus.cat | 27.9 | 0.512 |
| Tatoeba-test.rus-ces.rus.ces | 33.8 | 0.537 |
| Tatoeba-test.rus-cor.rus.cor | 0.4 | 0.038 |
| Tatoeba-test.rus-cym.rus.cym | 7.6 | 0.384 |
| Tatoeba-test.rus-dan.rus.dan | 37.9 | 0.559 |
| Tatoeba-test.rus-deu.rus.deu | 31.3 | 0.528 |
| Tatoeba-test.rus-dsb.rus.dsb | 16.0 | 0.060 |
| Tatoeba-test.rus-ell.rus.ell | 29.0 | 0.512 |
| Tatoeba-test.rus-eng.rus.eng | 37.6 | 0.553 |
| Tatoeba-test.rus-enm.rus.enm | 1.6 | 0.138 |
| Tatoeba-test.rus-fas.rus.fas | 4.2 | 0.278 |
| Tatoeba-test.rus-fra.rus.fra | 33.0 | 0.524 |
| Tatoeba-test.rus-fry.rus.fry | 16.3 | 0.308 |
| Tatoeba-test.rus-gcf.rus.gcf | 10.7 | 0.045 |
| Tatoeba-test.rus-gla.rus.gla | 22.3 | 0.427 |
| Tatoeba-test.rus-gle.rus.gle | 5.9 | 0.310 |
| Tatoeba-test.rus-glg.rus.glg | 20.6 | 0.459 |
| Tatoeba-test.rus-gos.rus.gos | 1.5 | 0.152 |
| Tatoeba-test.rus-hbs.rus.hbs | 31.0 | 0.546 |
| Tatoeba-test.rus-hin.rus.hin | 5.5 | 0.326 |
| Tatoeba-test.rus-hye.rus.hye | 12.7 | 0.365 |
| Tatoeba-test.rus-isl.rus.isl | 9.0 | 0.320 |
| Tatoeba-test.rus-ita.rus.ita | 26.6 | 0.495 |
| Tatoeba-test.rus-kur.rus.kur | 5.6 | 0.210 |
| Tatoeba-test.rus-lad.rus.lad | 1.0 | 0.169 |
| Tatoeba-test.rus-lat.rus.lat | 7.9 | 0.328 |
| Tatoeba-test.rus-lav.rus.lav | 31.1 | 0.519 |
| Tatoeba-test.rus-lit.rus.lit | 22.0 | 0.489 |
| Tatoeba-test.rus-ltz.rus.ltz | 19.4 | 0.263 |
| Tatoeba-test.rus-mar.rus.mar | 19.0 | 0.217 |
| Tatoeba-test.rus-mkd.rus.mkd | 38.5 | 0.662 |
| Tatoeba-test.rus-msa.rus.msa | 6.6 | 0.305 |
| Tatoeba-test.rus-nds.rus.nds | 11.5 | 0.350 |
| Tatoeba-test.rus-nld.rus.nld | 31.1 | 0.517 |
| Tatoeba-test.rus-nor.rus.nor | 31.2 | 0.528 |
| Tatoeba-test.rus-oci.rus.oci | 4.9 | 0.261 |
| Tatoeba-test.rus-ori.rus.ori | 7.3 | 0.325 |
| Tatoeba-test.rus-orv.rus.orv | 0.0 | 0.008 |
| Tatoeba-test.rus-oss.rus.oss | 4.8 | 0.198 |
| Tatoeba-test.rus-pol.rus.pol | 31.3 | 0.540 |
| Tatoeba-test.rus-por.rus.por | 24.5 | 0.476 |
| Tatoeba-test.rus-ron.rus.ron | 25.7 | 0.492 |
| Tatoeba-test.rus-slv.rus.slv | 20.7 | 0.400 |
| Tatoeba-test.rus-spa.rus.spa | 30.9 | 0.526 |
| Tatoeba-test.rus-swe.rus.swe | 32.0 | 0.507 |
| Tatoeba-test.rus-ukr.rus.ukr | 41.1 | 0.622 |
| Tatoeba-test.rus-urd.rus.urd | 7.1 | 0.367 |
| Tatoeba-test.rus-yid.rus.yid | 4.7 | 0.253 |
| Tatoeba-test.san-eng.san.eng | 2.5 | 0.167 |
| Tatoeba-test.san-msa.san.msa | 11.7 | 0.217 |
| Tatoeba-test.scn-deu.scn.deu | 3.9 | 0.224 |
| Tatoeba-test.scn-eng.scn.eng | 40.7 | 0.420 |
| Tatoeba-test.scn-fra.scn.fra | 2.1 | 0.134 |
| Tatoeba-test.scn-spa.scn.spa | 3.4 | 0.244 |
| Tatoeba-test.sco-deu.sco.deu | 17.2 | 0.310 |
| Tatoeba-test.sco-eng.sco.eng | 32.8 | 0.524 |
| Tatoeba-test.sco-fra.sco.fra | 5.7 | 0.254 |
| Tatoeba-test.sco-lad.sco.lad | 5.3 | 0.023 |
| Tatoeba-test.sco-lat.sco.lat | 3.5 | 0.237 |
| Tatoeba-test.sco-nld.sco.nld | 11.9 | 0.335 |
| Tatoeba-test.sgs-eng.sgs.eng | 23.7 | 0.300 |
| Tatoeba-test.sgs-spa.sgs.spa | 0.0 | 0.146 |
| Tatoeba-test.sin-eng.sin.eng | 14.1 | 0.313 |
| Tatoeba-test.slv-ces.slv.ces | 33.2 | 0.528 |
| Tatoeba-test.slv-deu.slv.deu | 33.4 | 0.518 |
| Tatoeba-test.slv-eng.slv.eng | 29.9 | 0.489 |
| Tatoeba-test.slv-fra.slv.fra | 19.5 | 0.405 |
| Tatoeba-test.slv-ita.slv.ita | 28.6 | 0.499 |
| Tatoeba-test.slv-lad.slv.lad | 5.5 | 0.296 |
| Tatoeba-test.slv-lav.slv.lav | 18.0 | 0.546 |
| Tatoeba-test.slv-lit.slv.lit | 18.0 | 0.452 |
| Tatoeba-test.slv-nor.slv.nor | 20.3 | 0.406 |
| Tatoeba-test.slv-pol.slv.pol | 33.1 | 0.541 |
| Tatoeba-test.slv-ron.slv.ron | 12.4 | 0.348 |
| Tatoeba-test.slv-rus.slv.rus | 33.4 | 0.519 |
| Tatoeba-test.slv-spa.slv.spa | 32.9 | 0.503 |
| Tatoeba-test.slv-swe.slv.swe | 14.8 | 0.095 |
| Tatoeba-test.slv-ukr.slv.ukr | 30.1 | 0.471 |
| Tatoeba-test.snd-eng.snd.eng | 12.7 | 0.377 |
| Tatoeba-test.spa-afr.spa.afr | 46.9 | 0.624 |
| Tatoeba-test.spa-ang.spa.ang | 1.1 | 0.143 |
| Tatoeba-test.spa-arg.spa.arg | 21.6 | 0.446 |
| Tatoeba-test.spa-ast.spa.ast | 28.1 | 0.526 |
| Tatoeba-test.spa-bel.spa.bel | 22.8 | 0.466 |
| Tatoeba-test.spa-ben.spa.ben | 16.9 | 0.442 |
| Tatoeba-test.spa-bul.spa.bul | 30.8 | 0.510 |
| Tatoeba-test.spa-cat.spa.cat | 49.1 | 0.696 |
| Tatoeba-test.spa-ces.spa.ces | 27.2 | 0.497 |
| Tatoeba-test.spa-cor.spa.cor | 0.5 | 0.049 |
| Tatoeba-test.spa-csb.spa.csb | 5.3 | 0.204 |
| Tatoeba-test.spa-cym.spa.cym | 22.4 | 0.476 |
| Tatoeba-test.spa-dan.spa.dan | 39.3 | 0.581 |
| Tatoeba-test.spa-deu.spa.deu | 30.9 | 0.531 |
| Tatoeba-test.spa-dsb.spa.dsb | 0.7 | 0.109 |
| Tatoeba-test.spa-egl.spa.egl | 0.9 | 0.060 |
| Tatoeba-test.spa-ell.spa.ell | 28.9 | 0.487 |
| Tatoeba-test.spa-eng.spa.eng | 41.0 | 0.595 |
| Tatoeba-test.spa-enm.spa.enm | 13.9 | 0.188 |
| Tatoeba-test.spa-fas.spa.fas | 7.9 | 0.244 |
| Tatoeba-test.spa-fra.spa.fra | 41.4 | 0.610 |
| Tatoeba-test.spa-fry.spa.fry | 15.8 | 0.397 |
| Tatoeba-test.spa-gcf.spa.gcf | 7.0 | 0.060 |
| Tatoeba-test.spa-gla.spa.gla | 7.4 | 0.303 |
| Tatoeba-test.spa-gle.spa.gle | 22.2 | 0.415 |
| Tatoeba-test.spa-glg.spa.glg | 48.8 | 0.683 |
| Tatoeba-test.spa-gos.spa.gos | 1.7 | 0.181 |
| Tatoeba-test.spa-got.spa.got | 0.3 | 0.010 |
| Tatoeba-test.spa-grc.spa.grc | 0.1 | 0.005 |
| Tatoeba-test.spa-gsw.spa.gsw | 5.6 | 0.051 |
| Tatoeba-test.spa-guj.spa.guj | 15.0 | 0.365 |
| Tatoeba-test.spa-hat.spa.hat | 19.9 | 0.409 |
| Tatoeba-test.spa-hbs.spa.hbs | 33.2 | 0.529 |
| Tatoeba-test.spa-hin.spa.hin | 16.1 | 0.331 |
| Tatoeba-test.spa-hsb.spa.hsb | 5.1 | 0.240 |
| Tatoeba-test.spa-hye.spa.hye | 13.5 | 0.357 |
| Tatoeba-test.spa-isl.spa.isl | 18.0 | 0.410 |
| Tatoeba-test.spa-ita.spa.ita | 42.7 | 0.646 |
| Tatoeba-test.spa-ksh.spa.ksh | 0.4 | 0.088 |
| Tatoeba-test.spa-kur.spa.kur | 5.6 | 0.237 |
| Tatoeba-test.spa-lad.spa.lad | 0.9 | 0.157 |
| Tatoeba-test.spa-lat.spa.lat | 9.0 | 0.382 |
| Tatoeba-test.spa-lav.spa.lav | 23.7 | 0.510 |
| Tatoeba-test.spa-lit.spa.lit | 22.4 | 0.477 |
| Tatoeba-test.spa-lld.spa.lld | 0.4 | 0.119 |
| Tatoeba-test.spa-ltz.spa.ltz | 34.1 | 0.531 |
| Tatoeba-test.spa-mai.spa.mai | 29.4 | 0.416 |
| Tatoeba-test.spa-mkd.spa.mkd | 37.1 | 0.568 |
| Tatoeba-test.spa-msa.spa.msa | 14.0 | 0.405 |
| Tatoeba-test.spa-nds.spa.nds | 15.4 | 0.390 |
| Tatoeba-test.spa-nld.spa.nld | 34.0 | 0.550 |
| Tatoeba-test.spa-nor.spa.nor | 41.1 | 0.608 |
| Tatoeba-test.spa-oci.spa.oci | 8.0 | 0.353 |
| Tatoeba-test.spa-orv.spa.orv | 0.4 | 0.010 |
| Tatoeba-test.spa-pcd.spa.pcd | 0.2 | 0.060 |
| Tatoeba-test.spa-pms.spa.pms | 0.6 | 0.122 |
| Tatoeba-test.spa-pol.spa.pol | 26.3 | 0.498 |
| Tatoeba-test.spa-por.spa.por | 41.6 | 0.638 |
| Tatoeba-test.spa-prg.spa.prg | 0.3 | 0.095 |
| Tatoeba-test.spa-roh.spa.roh | 4.0 | 0.219 |
| Tatoeba-test.spa-ron.spa.ron | 31.9 | 0.550 |
| Tatoeba-test.spa-rue.spa.rue | 0.2 | 0.013 |
| Tatoeba-test.spa-rus.spa.rus | 29.4 | 0.510 |
| Tatoeba-test.spa-scn.spa.scn | 1.6 | 0.086 |
| Tatoeba-test.spa-sgs.spa.sgs | 16.0 | 0.111 |
| Tatoeba-test.spa-slv.spa.slv | 9.2 | 0.269 |
| Tatoeba-test.spa-stq.spa.stq | 8.4 | 0.375 |
| Tatoeba-test.spa-swe.spa.swe | 39.5 | 0.572 |
| Tatoeba-test.spa-ukr.spa.ukr | 27.8 | 0.495 |
| Tatoeba-test.spa-wln.spa.wln | 2.9 | 0.220 |
| Tatoeba-test.spa-yid.spa.yid | 10.0 | 0.296 |
| Tatoeba-test.sqi-eng.sqi.eng | 30.9 | 0.499 |
| Tatoeba-test.sqi-fra.sqi.fra | 29.9 | 0.545 |
| Tatoeba-test.sqi-ita.sqi.ita | 24.5 | 0.484 |
| Tatoeba-test.srd-fra.srd.fra | 5.8 | 0.347 |
| Tatoeba-test.stq-deu.stq.deu | 16.7 | 0.426 |
| Tatoeba-test.stq-eng.stq.eng | 8.4 | 0.370 |
| Tatoeba-test.stq-frr.stq.frr | 0.6 | 0.032 |
| Tatoeba-test.stq-fry.stq.fry | 9.3 | 0.283 |
| Tatoeba-test.stq-gos.stq.gos | 0.3 | 0.126 |
| Tatoeba-test.stq-isl.stq.isl | 0.0 | 0.102 |
| Tatoeba-test.stq-ltz.stq.ltz | 4.0 | 0.175 |
| Tatoeba-test.stq-nld.stq.nld | 13.2 | 0.398 |
| Tatoeba-test.stq-spa.stq.spa | 7.0 | 0.345 |
| Tatoeba-test.stq-yid.stq.yid | 5.0 | 0.110 |
| Tatoeba-test.swe-afr.swe.afr | 63.1 | 0.831 |
| Tatoeba-test.swe-bul.swe.bul | 35.4 | 0.529 |
| Tatoeba-test.swe-cat.swe.cat | 38.5 | 0.528 |
| Tatoeba-test.swe-ces.swe.ces | 32.8 | 0.380 |
| Tatoeba-test.swe-dan.swe.dan | 54.5 | 0.702 |
| Tatoeba-test.swe-deu.swe.deu | 36.7 | 0.570 |
| Tatoeba-test.swe-ell.swe.ell | 32.9 | 0.541 |
| Tatoeba-test.swe-eng.swe.eng | 44.9 | 0.606 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.877 |
| Tatoeba-test.swe-fra.swe.fra | 43.2 | 0.605 |
| Tatoeba-test.swe-fry.swe.fry | 42.7 | 0.402 |
| Tatoeba-test.swe-gos.swe.gos | 4.8 | 0.253 |
| Tatoeba-test.swe-hbs.swe.hbs | 39.3 | 0.591 |
| Tatoeba-test.swe-hin.swe.hin | 31.6 | 0.617 |
| Tatoeba-test.swe-isl.swe.isl | 21.2 | 0.559 |
| Tatoeba-test.swe-ita.swe.ita | 33.1 | 0.548 |
| Tatoeba-test.swe-kur.swe.kur | 1.4 | 0.144 |
| Tatoeba-test.swe-lad.swe.lad | 6.6 | 0.373 |
| Tatoeba-test.swe-lat.swe.lat | 4.5 | 0.453 |
| Tatoeba-test.swe-lav.swe.lav | 73.4 | 0.828 |
| Tatoeba-test.swe-ltz.swe.ltz | 25.5 | 0.440 |
| Tatoeba-test.swe-mkd.swe.mkd | 0.0 | 0.124 |
| Tatoeba-test.swe-nld.swe.nld | 71.9 | 0.742 |
| Tatoeba-test.swe-nor.swe.nor | 59.5 | 0.742 |
| Tatoeba-test.swe-pol.swe.pol | 25.9 | 0.497 |
| Tatoeba-test.swe-por.swe.por | 31.3 | 0.546 |
| Tatoeba-test.swe-ron.swe.ron | 100.0 | 1.000 |
| Tatoeba-test.swe-rus.swe.rus | 28.6 | 0.495 |
| Tatoeba-test.swe-slv.swe.slv | 19.0 | 0.116 |
| Tatoeba-test.swe-spa.swe.spa | 37.1 | 0.569 |
| Tatoeba-test.swe-yid.swe.yid | 13.9 | 0.336 |
| Tatoeba-test.swg-ces.swg.ces | 16.5 | 0.438 |
| Tatoeba-test.swg-dan.swg.dan | 20.1 | 0.468 |
| Tatoeba-test.swg-deu.swg.deu | 8.0 | 0.316 |
| Tatoeba-test.swg-eng.swg.eng | 13.0 | 0.300 |
| Tatoeba-test.swg-fra.swg.fra | 15.3 | 0.296 |
| Tatoeba-test.swg-nds.swg.nds | 0.9 | 0.199 |
| Tatoeba-test.swg-nld.swg.nld | 4.9 | 0.287 |
| Tatoeba-test.swg-yid.swg.yid | 1.9 | 0.194 |
| Tatoeba-test.tgk-deu.tgk.deu | 45.2 | 0.574 |
| Tatoeba-test.tgk-eng.tgk.eng | 7.8 | 0.271 |
| Tatoeba-test.tgk-fra.tgk.fra | 9.6 | 0.273 |
| Tatoeba-test.tly-eng.tly.eng | 0.9 | 0.102 |
| Tatoeba-test.tly-fra.tly.fra | 4.4 | 0.054 |
| Tatoeba-test.ukr-afr.ukr.afr | 48.3 | 0.646 |
| Tatoeba-test.ukr-ang.ukr.ang | 1.4 | 0.034 |
| Tatoeba-test.ukr-bel.ukr.bel | 36.7 | 0.601 |
| Tatoeba-test.ukr-bul.ukr.bul | 40.4 | 0.601 |
| Tatoeba-test.ukr-cat.ukr.cat | 33.9 | 0.538 |
| Tatoeba-test.ukr-ces.ukr.ces | 33.1 | 0.524 |
| Tatoeba-test.ukr-dan.ukr.dan | 25.8 | 0.469 |
| Tatoeba-test.ukr-deu.ukr.deu | 34.0 | 0.543 |
| Tatoeba-test.ukr-ell.ukr.ell | 23.0 | 0.493 |
| Tatoeba-test.ukr-eng.ukr.eng | 36.1 | 0.538 |
| Tatoeba-test.ukr-enm.ukr.enm | 3.6 | 0.400 |
| Tatoeba-test.ukr-fas.ukr.fas | 5.3 | 0.240 |
| Tatoeba-test.ukr-fra.ukr.fra | 32.0 | 0.519 |
| Tatoeba-test.ukr-fry.ukr.fry | 13.6 | 0.318 |
| Tatoeba-test.ukr-gos.ukr.gos | 3.8 | 0.199 |
| Tatoeba-test.ukr-hbs.ukr.hbs | 33.4 | 0.547 |
| Tatoeba-test.ukr-ita.ukr.ita | 32.6 | 0.546 |
| Tatoeba-test.ukr-lad.ukr.lad | 1.4 | 0.166 |
| Tatoeba-test.ukr-lat.ukr.lat | 8.0 | 0.314 |
| Tatoeba-test.ukr-lav.ukr.lav | 10.7 | 0.520 |
| Tatoeba-test.ukr-lit.ukr.lit | 59.9 | 0.631 |
| Tatoeba-test.ukr-mkd.ukr.mkd | 38.0 | 0.718 |
| Tatoeba-test.ukr-msa.ukr.msa | 2.5 | 0.213 |
| Tatoeba-test.ukr-nds.ukr.nds | 11.0 | 0.368 |
| Tatoeba-test.ukr-nld.ukr.nld | 33.0 | 0.524 |
| Tatoeba-test.ukr-nor.ukr.nor | 40.4 | 0.574 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.1 | 0.008 |
| Tatoeba-test.ukr-pol.ukr.pol | 32.7 | 0.553 |
| Tatoeba-test.ukr-por.ukr.por | 26.8 | 0.496 |
| Tatoeba-test.ukr-rus.ukr.rus | 45.7 | 0.651 |
| Tatoeba-test.ukr-slv.ukr.slv | 11.8 | 0.263 |
| Tatoeba-test.ukr-spa.ukr.spa | 31.7 | 0.528 |
| Tatoeba-test.ukr-yid.ukr.yid | 3.6 | 0.196 |
| Tatoeba-test.urd-dan.urd.dan | 36.7 | 0.586 |
| Tatoeba-test.urd-deu.urd.deu | 17.1 | 0.451 |
| Tatoeba-test.urd-eng.urd.eng | 17.1 | 0.375 |
| Tatoeba-test.urd-fra.urd.fra | 38.1 | 0.565 |
| Tatoeba-test.urd-hbs.urd.hbs | 0.0 | 1.000 |
| Tatoeba-test.urd-hin.urd.hin | 14.0 | 0.404 |
| Tatoeba-test.urd-msa.urd.msa | 1.5 | 0.014 |
| Tatoeba-test.urd-pol.urd.pol | 68.7 | 0.695 |
| Tatoeba-test.urd-rus.urd.rus | 25.8 | 0.314 |
| Tatoeba-test.vec-eng.vec.eng | 13.6 | 0.319 |
| Tatoeba-test.vec-fra.vec.fra | 48.3 | 0.680 |
| Tatoeba-test.vec-ita.vec.ita | 28.3 | 0.454 |
| Tatoeba-test.wln-eng.wln.eng | 4.4 | 0.206 |
| Tatoeba-test.wln-fra.wln.fra | 8.0 | 0.282 |
| Tatoeba-test.wln-nld.wln.nld | 5.2 | 0.237 |
| Tatoeba-test.wln-spa.wln.spa | 9.9 | 0.395 |
| Tatoeba-test.yid-afr.yid.afr | 35.4 | 0.868 |
| Tatoeba-test.yid-ang.yid.ang | 0.8 | 0.077 |
| Tatoeba-test.yid-bel.yid.bel | 4.9 | 0.240 |
| Tatoeba-test.yid-bul.yid.bul | 11.3 | 0.054 |
| Tatoeba-test.yid-cat.yid.cat | 19.0 | 0.583 |
| Tatoeba-test.yid-ces.yid.ces | 5.4 | 0.320 |
| Tatoeba-test.yid-cym.yid.cym | 6.3 | 0.239 |
| Tatoeba-test.yid-dan.yid.dan | 12.8 | 0.341 |
| Tatoeba-test.yid-deu.yid.deu | 17.5 | 0.382 |
| Tatoeba-test.yid-ell.yid.ell | 42.7 | 0.797 |
| Tatoeba-test.yid-eng.yid.eng | 15.5 | 0.338 |
| Tatoeba-test.yid-enm.yid.enm | 2.3 | 0.176 |
| Tatoeba-test.yid-fas.yid.fas | 4.5 | 0.207 |
| Tatoeba-test.yid-fra.yid.fra | 18.9 | 0.367 |
| Tatoeba-test.yid-fry.yid.fry | 6.0 | 0.156 |
| Tatoeba-test.yid-gle.yid.gle | 32.2 | 0.448 |
| Tatoeba-test.yid-gos.yid.gos | 1.3 | 0.142 |
| Tatoeba-test.yid-ita.yid.ita | 15.3 | 0.363 |
| Tatoeba-test.yid-kur.yid.kur | 3.2 | 0.166 |
| Tatoeba-test.yid-lad.yid.lad | 0.1 | 0.090 |
| Tatoeba-test.yid-lat.yid.lat | 1.8 | 0.206 |
| Tatoeba-test.yid-lit.yid.lit | 27.8 | 0.560 |
| Tatoeba-test.yid-ltz.yid.ltz | 4.2 | 0.316 |
| Tatoeba-test.yid-nds.yid.nds | 24.6 | 0.466 |
| Tatoeba-test.yid-nld.yid.nld | 24.5 | 0.431 |
| Tatoeba-test.yid-nor.yid.nor | 5.0 | 0.318 |
| Tatoeba-test.yid-oci.yid.oci | 19.0 | 0.390 |
| Tatoeba-test.yid-pol.yid.pol | 15.0 | 0.258 |
| Tatoeba-test.yid-por.yid.por | 7.4 | 0.326 |
| Tatoeba-test.yid-ron.yid.ron | 12.3 | 0.325 |
| Tatoeba-test.yid-rus.yid.rus | 14.2 | 0.324 |
| Tatoeba-test.yid-spa.yid.spa | 16.1 | 0.369 |
| Tatoeba-test.yid-stq.yid.stq | 3.2 | 0.125 |
| Tatoeba-test.yid-swe.yid.swe | 55.9 | 0.672 |
| Tatoeba-test.yid-swg.yid.swg | 0.3 | 0.083 |
| Tatoeba-test.yid-ukr.yid.ukr | 7.2 | 0.383 |
| Tatoeba-test.zza-asm.zza.asm | 0.0 | 0.102 |
| Tatoeba-test.zza-eng.zza.eng | 1.9 | 0.135 |
### System Info:
- hf_name: ine-ine
- source_languages: ine
- target_languages: ine
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-ine/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
- src_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- tgt_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.test.txt
- src_alpha3: ine
- tgt_alpha3: ine
- short_pair: ine-ine
- chrF2_score: 0.509
- bleu: 30.8
- brevity_penalty: 0.9890000000000001
- ref_len: 69953.0
- src_name: Indo-European languages
- tgt_name: Indo-European languages
- train_date: 2020-07-27
- src_alpha2: ine
- tgt_alpha2: ine
- prefer_old: False
- long_pair: ine-ine
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-mkh-en | 325cb2363f097b102d7599b518ba64d8bf98de3a | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"vi",
"km",
"mkh",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mkh-en | 26 | null | transformers | 7,514 | ---
language:
- vi
- km
- mkh
- en
tags:
- translation
license: apache-2.0
---
### mkh-eng
* source group: Mon-Khmer languages
* target group: English
* OPUS readme: [mkh-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkh-eng/README.md)
* model: transformer
* source language(s): kha khm khm_Latn mnw vie vie_Hani
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.eval.txt)
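## How to use

A minimal usage sketch (not part of the original card) with the Transformers translation pipeline; the Vietnamese example sentence is only illustrative, and any of the supported source languages can be used:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-mkh-en")
# "Tôi là sinh viên." ("I am a student.") — illustrative Vietnamese input
print(translator("Tôi là sinh viên.", max_length=128))
```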
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kha-eng.kha.eng | 0.5 | 0.108 |
| Tatoeba-test.khm-eng.khm.eng | 8.5 | 0.206 |
| Tatoeba-test.mnw-eng.mnw.eng | 0.7 | 0.110 |
| Tatoeba-test.multi.eng | 24.5 | 0.407 |
| Tatoeba-test.vie-eng.vie.eng | 34.4 | 0.529 |
### System Info:
- hf_name: mkh-eng
- source_languages: mkh
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkh-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'km', 'mkh', 'en']
- src_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.test.txt
- src_alpha3: mkh
- tgt_alpha3: eng
- short_pair: mkh-en
- chrF2_score: 0.40700000000000003
- bleu: 24.5
- brevity_penalty: 1.0
- ref_len: 33985.0
- src_name: Mon-Khmer languages
- tgt_name: English
- train_date: 2020-07-27
- src_alpha2: mkh
- tgt_alpha2: en
- prefer_old: False
- long_pair: mkh-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-nl-fi | 9c2749217bb778e6d77a7bffba719d98a27c7f10 | 2021-09-10T13:59:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nl",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nl-fi | 26 | null | transformers | 7,515 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-fi
* source languages: nl
* target languages: fi
* OPUS readme: [nl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nl.fi | 28.6 | 0.569 |
|
Helsinki-NLP/opus-mt-sn-en | 122bc773e49e14db353cea778090a95ce2e20f6c | 2021-09-10T14:04:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sn",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sn-en | 26 | null | transformers | 7,516 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sn-en
* source languages: sn
* target languages: en
* OPUS readme: [sn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.en | 51.8 | 0.648 |
|
Helsinki-NLP/opus-mt-xh-en | 6a5f51b69435fc8f618c0b9c1711d6dd322c5661 | 2021-09-11T10:52:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"xh",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-xh-en | 26 | null | transformers | 7,517 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-xh-en
* source languages: xh
* target languages: en
* OPUS readme: [xh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.eval.txt)
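## How to use

A minimal usage sketch (not part of the original card) using the MarianMT classes directly; the Xhosa example sentence is only illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-xh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Ndiyathanda ukufunda iincwadi."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```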
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.en | 45.8 | 0.610 |
|
Helsinki-NLP/opus-tatoeba-fr-it | ece0ee5246a0e21bba190007872250a79cc262bd | 2021-11-11T17:41:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-fr-it | 26 | null | transformers | 7,518 | ---
language:
- fr
- it
tags:
- translation
license: apache-2.0
---
### fr-it
* source group: French
* target group: Italian
* OPUS readme: [fra-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ita
* raw source language(s): fra
* raw target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807-2021-11-11.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip)
* test set translations: [opusTCv20210807-2021-11-11.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt)
* test set scores: [opusTCv20210807-2021-11-11.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.eval.txt)
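## How to use

A minimal usage sketch (not part of the original card) with the Transformers translation pipeline; the French example sentence is only illustrative:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-tatoeba-fr-it")
print(translator("La vie est belle.", max_length=128))
```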
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test-v2021-08-07.fra-ita | 54.8 | 0.737 | 10000 | 61517 | 0.953 |
### System Info:
- hf_name: fr-it
- source_languages: fra
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'it']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt
- src_alpha3: fra
- tgt_alpha3: ita
- chrF2_score: 0.737
- bleu: 54.8
- src_name: French
- tgt_name: Italian
- train_date: 2021-11-11 00:00:00
- src_alpha2: fr
- tgt_alpha2: it
- prefer_old: False
- short_pair: fr-it
- helsinki_git_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18
- transformers_git_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
- port_machine: LM0-400-22516.local
- port_time: 2021-11-11-19:40 |
KoichiYasuoka/bert-large-japanese-upos | 45d90ca233f9496c9aacfaa6407e510fb7901122 | 2022-05-23T21:51:21.000Z | [
"pytorch",
"bert",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-large-japanese-upos | 26 | 1 | transformers | 7,519 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# bert-large-japanese-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
Kowsher/bert-base-bangla-ner | 185e29349c9687fa704c12f8d9a5dd494a422b08 | 2021-08-08T10:35:26.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Kowsher | null | Kowsher/bert-base-bangla-ner | 26 | null | transformers | 7,520 | Entry not found |
NLPC-UOM/SinBERT-small | f0eaaed69eaba28a4f98eaa31b92713c5c01e1db | 2022-04-29T05:04:13.000Z | [
"pytorch",
"roberta",
"fill-mask",
"si",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | NLPC-UOM | null | NLPC-UOM/SinBERT-small | 26 | 1 | transformers | 7,521 | ---
license: mit
language:
- si
---
This is SinBERT-small model. SinBERT models are pretrained on a large Sinhala monolingual corpus (sin-cc-15M) using RoBERTa. If you use this model, please cite *BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, LREC 2022*
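A minimal fill-mask sketch (not part of the original description; the Sinhala example sentence is only illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="NLPC-UOM/SinBERT-small")
# Insert the model's mask token where a word should be predicted
sentence = f"මම පොත {fill_mask.tokenizer.mask_token}."
print(fill_mask(sentence))
```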
|
NYTK/text-generation-poem-petofi-gpt2-small-hungarian | d338bd8974e927955d676045a95980cde2d21d66 | 2022-02-14T13:34:21.000Z | [
"pytorch",
"gpt2",
"text-generation",
"hu",
"transformers",
"license:gpl"
] | text-generation | false | NYTK | null | NYTK/text-generation-poem-petofi-gpt2-small-hungarian | 26 | 1 | transformers | 7,522 | ---
language:
- hu
tags:
- text-generation
license: gpl
widget:
- text: "Szegeden, január végén,"
---
# Hungarian GPT-2 poem generator in Petőfi style
For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained on Hungarian Wikipedia
- Finetuned on the complete poems of Sándor Petőfi (Petőfi Sándor összes költeményei)
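## Usage

A minimal generation sketch (not part of the original card); the prompt is the widget example above:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NYTK/text-generation-poem-petofi-gpt2-small-hungarian")
print(generator("Szegeden, január végén,", max_length=64, num_return_sequences=1)[0]["generated_text"])
```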
## Results
| Model | Perplexity |
| ------------- | ------------- |
| **GPT-2 poem** | **47.46** |
| GPT-2 news | 22.06 |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-gpt2,
title = {{"Az invazív medvék nem tolerálják a suzukis agressziót" - Magyar GPT-2 kísérleti modell}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {{Yang Zijian Győző}},
pages = {463--476}
}
```
|
SEBIS/code_trans_t5_small_commit_generation_multitask_finetune | ccb87cb56e45b0e940a1753c219bc82d1e3dd320 | 2021-06-23T10:15:17.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_commit_generation_multitask_finetune | 26 | null | transformers | 7,523 | ---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---
# CodeTrans model for git commit message generation
Pretrained model on git commits using the t5-small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the git commit message generation task for the java commit changes.
## Intended uses & limitations
The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate git commit message using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/commit%20generation/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using sequence length 512 (batch size 256) and only the dataset containing commit changes.
## Evaluation results
For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 39.61 |
| CodeTrans-ST-Base | 38.67 |
| CodeTrans-TF-Small | 44.22 |
| CodeTrans-TF-Base | 44.17 |
| CodeTrans-TF-Large | **44.41** |
| CodeTrans-MT-Small | 36.17 |
| CodeTrans-MT-Base | 39.25 |
| CodeTrans-MT-Large | 41.18 |
| CodeTrans-MT-TF-Small | 43.96 |
| CodeTrans-MT-TF-Base | 44.19 |
| CodeTrans-MT-TF-Large | 44.34 |
| State of the art | 32.81 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_trans_en_sv_small_finetuned | 59e08a5da640a659ab998f79b390e2289602c01b | 2021-06-23T09:40:43.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_en_sv_small_finetuned | 26 | null | transformers | 7,524 |
---
language: English Swedish
tags:
- translation English Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "any operations cofinanced in the framework of"
---
# legal_t5_small_trans_en_sv_small_finetuned model
Model for translating legal text from English to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is first pretrained on all of the translation data with an unsupervised task, and is then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_en_sv_small_finetuned is initially pretrained on an unsupervised task with all of the data of the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_en_sv_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Swedish.
### How to use
Here is how to use this model to translate legal text from English to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_en_sv_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_en_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "any operations cofinanced in the framework of"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_trans_en_sv_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_en_sv_small_finetuned | 48.126|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
ShengdingHu/sst2 | 894b64b74eab3740d4e91840a826f939c2e6baf7 | 2022-04-26T11:16:23.000Z | [
"pytorch",
"tensorboard",
"big_bird",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ShengdingHu | null | ShengdingHu/sst2 | 26 | null | transformers | 7,525 | Entry not found |
addy88/programming-lang-identifier | 8b13668b138d4dbd1cce7d5febc4261bcdd7cf24 | 2022-01-04T04:22:07.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | addy88 | null | addy88/programming-lang-identifier | 26 | null | transformers | 7,526 | This model is a fine-tuned version of CodeBERT (RoBERTa architecture) trained on CodeSearchNet for programming-language identification.
### Quick start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("addy88/programming-lang-identifier")
model = AutoModelForSequenceClassification.from_pretrained("addy88/programming-lang-identifier")

# CODE_TO_IDENTIFY is a placeholder for the source-code string you want to classify
inputs = tokenizer(CODE_TO_IDENTIFY, return_tensors="pt", truncation=True)
logits = model(**inputs).logits
language_idx = logits.argmax(-1).item()  # index of the predicted language label
print(model.config.id2label[language_idx])  # maps the index to a label name from the model config
```
 |
akdeniz27/mDeBERTa-v3-base-turkish-ner | 0548ce8e7f7ddcc165e12cd9cfcac01a6490fbbf | 2021-11-25T20:32:19.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"tr",
"transformers",
"autotrain_compatible"
] | token-classification | false | akdeniz27 | null | akdeniz27/mDeBERTa-v3-base-turkish-ner | 26 | null | transformers | 7,527 | ---
language: tr
widget:
- text: "Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı."
---
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned version of "microsoft/mDeBERTa-v3-base"
(a multilingual version of DeBERTa V3)
using a reviewed version of a well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "microsoft/mdeberta-v3-base"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 2
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/mDeBERTa-v3-base-turkish-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/mDeBERTa-v3-base-turkish-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("<your text here>")
```
Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for details on entity grouping with the aggregation_strategy parameter.
# Reference test results:
* f1: 0.95
* precision: 0.94
* recall: 0.96 |
allenai/hvila-block-layoutlm-finetuned-grotoap2 | c2c2e944ea28883b5a8184d76354e53c6064b83d | 2021-09-27T22:59:48.000Z | [
"pytorch",
"hierarchical_model",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | allenai | null | allenai/hvila-block-layoutlm-finetuned-grotoap2 | 26 | null | transformers | 7,528 | Entry not found |
anuragshas/wav2vec2-large-xlsr-as | d69474818224e8ecf85d09954eb0079467587ad0 | 2022-01-14T16:41:25.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"as",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-large-xlsr-as | 26 | null | transformers | 7,529 | ---
language: as
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Anurag Singh XLSR Wav2Vec2 Large 53 Assamese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice as
type: common_voice
args: as
metrics:
- name: Test WER
type: wer
value: 69.63
---
# Wav2Vec2-Large-XLSR-53-Assamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "as", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Assamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "as", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\”\\়\\।]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 69.63 %
## Training
The Common Voice `train` and `validation` datasets were used for training. |
arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2 | fc07afdd922e42e34c67464e895d5a0e4f2565e8 | 2022-02-02T15:16:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | arjuntheprogrammer | null | arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2 | 26 | null | transformers | 7,530 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-sentiment-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.7614
- name: F1
type: f1
value: 0.7614
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5882
- Accuracy: 0.7614
- F1: 0.7614
## Model description
More information needed
## Intended uses & limitations
More information needed
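A minimal usage sketch (not part of the auto-generated card; the example review text is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2",
)
print(classifier("The product arrived on time and works perfectly."))
```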
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ayameRushia/roberta-base-indonesian-sentiment-analysis-smsa | ff8dd3f1de9be2cd3cf57783ae27f9972a55ede8 | 2021-12-22T10:33:50.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:indonlu",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | ayameRushia | null | ayameRushia/roberta-base-indonesian-sentiment-analysis-smsa | 26 | null | transformers | 7,531 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: roberta-base-indonesian-sentiment-analysis-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9349206349206349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-indonesian-sentiment-analysis-smsa
This model is a fine-tuned version of [flax-community/indonesian-roberta-base](https://huggingface.co/flax-community/indonesian-roberta-base) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4252
- Accuracy: 0.9349
## Model description
More information needed
## Intended uses & limitations
More information needed
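A minimal usage sketch (not part of the auto-generated card; the Indonesian example sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ayameRushia/roberta-base-indonesian-sentiment-analysis-smsa",
)
print(classifier("Pelayanan restoran ini sangat memuaskan."))
```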
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7582 | 1.0 | 688 | 0.3280 | 0.8786 |
| 0.3225 | 2.0 | 1376 | 0.2398 | 0.9206 |
| 0.2057 | 3.0 | 2064 | 0.2574 | 0.9230 |
| 0.1642 | 4.0 | 2752 | 0.2820 | 0.9302 |
| 0.1266 | 5.0 | 3440 | 0.3344 | 0.9317 |
| 0.0608 | 6.0 | 4128 | 0.3543 | 0.9341 |
| 0.058 | 7.0 | 4816 | 0.4252 | 0.9349 |
| 0.0315 | 8.0 | 5504 | 0.4736 | 0.9310 |
| 0.0166 | 9.0 | 6192 | 0.4649 | 0.9349 |
| 0.0143 | 10.0 | 6880 | 0.4648 | 0.9341 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
bakrianoo/t5-arabic-large | f60d15333498962977d518ec27331d35bc17fdbf | 2021-06-26T17:09:24.000Z | [
"pytorch",
"t5",
"text2text-generation",
"Arabic",
"dataset:mc4",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | bakrianoo | null | bakrianoo/t5-arabic-large | 26 | null | transformers | 7,532 | ---
language: Arabic
datasets:
- mc4
license: apache-2.0
---
## Arabic T5 Large Model
A customized T5 model for Arabic and English tasks. It can be used as an alternative to the `google/mt5-large` model, as it is much smaller and targets only Arabic- and English-based tasks.
### About T5
```
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
```
[Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
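### Usage

A minimal loading sketch (not part of the original card). The checkpoint is a pretrained base model intended to be fine-tuned on downstream Arabic/English text-to-text tasks:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "bakrianoo/t5-arabic-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Fine-tune with your task-specific prefix and data before using the model for generation
print(model.num_parameters())
```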
|
bipin/malayalam-news-classifier | b14c5e159c1811bcaec8bd213142493252cc4f94 | 2021-07-21T13:40:25.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"malayalam",
"license:mit"
] | text-classification | false | bipin | null | bipin/malayalam-news-classifier | 26 | 2 | transformers | 7,533 | ---
license: mit
tags:
- text-classification
- roberta
- malayalam
- pytorch
widget:
- text: "2032 ഒളിമ്പിക്സിന് ബ്രിസ്ബെയ്ന് വേദിയാകും; ഗെയിംസിന് വേദിയാകുന്ന മൂന്നാമത്തെ ഓസ്ട്രേലിയന് നഗരം"
---
## Malayalam news classifier
### Overview
This model is trained on top of [MalayalamBert](https://huggingface.co/eliasedwin7/MalayalamBERT) for the task of classifying Malayalam news headlines. Presently, the following news categories are supported:
* Business
* Sports
* Entertainment
### Dataset
The dataset used for training this model can be found [here](https://www.kaggle.com/disisbig/malyalam-news-dataset).
### Using the model with HF pipeline
```python
from transformers import pipeline
news_headline = "ക്രിപ്റ്റോ ഇടപാടുകളുടെ വിവരങ്ങൾ ആവശ്യപ്പെട്ട് ആദായനികുതി വകുപ്പ് നോട്ടീസയച്ചു"
model = pipeline(task="text-classification", model="bipin/malayalam-news-classifier")
model(news_headline)
# Output
# [{'label': 'business', 'score': 0.9979357123374939}]
```
### Contact
For feedback and questions, feel free to contact via twitter [@bkrish_](https://twitter.com/bkrish_) |
cl-tohoku/roberta-base-japanese | 626ec58f01e6aa050dde737d1e5f41654c89e489 | 2021-09-21T09:31:46.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cl-tohoku | null | cl-tohoku/roberta-base-japanese | 26 | null | transformers | 7,534 | Entry not found |
codesj/empathic-concern | be3878da3f8bf9d739dc51d19e54cb360a8116d6 | 2021-11-15T15:10:47.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | codesj | null | codesj/empathic-concern | 26 | null | transformers | 7,535 | Entry not found |
daekeun-ml/koelectra-small-v3-nsmc | 7d03233da5e3fefe54ed4eb20d9d94d45d180fe1 | 2022-02-13T06:22:54.000Z | [
"pytorch",
"electra",
"text-classification",
"ko",
"dataset:nsmc",
"transformers",
"classification",
"license:mit"
] | text-classification | false | daekeun-ml | null | daekeun-ml/koelectra-small-v3-nsmc | 26 | null | transformers | 7,536 | ---
language:
- ko
tags:
- classification
license: mit
datasets:
- nsmc
metrics:
- accuracy
- f1
- precision
- recall
---
# Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset)
## Usage (Amazon SageMaker inference applicable)
It uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to SageMaker Endpoint.
### inference_nsmc.py
```python
import json
import sys
import logging
import torch
from torch import nn
from transformers import ElectraConfig
from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification
logging.basicConfig(
level=logging.INFO,
format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s',
handlers=[
logging.FileHandler(filename='tmp.log'),
logging.StreamHandler(sys.stdout)
]
)
logger = logging.getLogger(__name__)
max_seq_length = 128
classes = ['Neg', 'Pos']
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-nsmc")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def model_fn(model_path=None):
####
# If you have your own trained model
# Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator'
####
#config = ElectraConfig.from_json_file(f'{model_path}/config.json')
#model = ElectraForSequenceClassification.from_pretrained(f'{model_path}/model.pth', config=config)
# Download model from the Huggingface hub
model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-nsmc')
model.to(device)
return model
def input_fn(input_data, content_type="application/jsonlines"):
data_str = input_data.decode("utf-8")
jsonlines = data_str.split("\n")
transformed_inputs = []
for jsonline in jsonlines:
text = json.loads(jsonline)["text"][0]
logger.info("input text: {}".format(text))
encode_plus_token = tokenizer.encode_plus(
text,
max_length=max_seq_length,
add_special_tokens=True,
return_token_type_ids=False,
padding="max_length",
return_attention_mask=True,
return_tensors="pt",
truncation=True,
)
transformed_inputs.append(encode_plus_token)
return transformed_inputs
def predict_fn(transformed_inputs, model):
predicted_classes = []
for data in transformed_inputs:
data = data.to(device)
output = model(**data)
softmax_fn = nn.Softmax(dim=1)
softmax_output = softmax_fn(output[0])
_, prediction = torch.max(softmax_output, dim=1)
predicted_class_idx = prediction.item()
predicted_class = classes[predicted_class_idx]
score = softmax_output[0][predicted_class_idx]
logger.info("predicted_class: {}".format(predicted_class))
prediction_dict = {}
prediction_dict["predicted_label"] = predicted_class
prediction_dict['score'] = score.cpu().detach().numpy().tolist()
jsonline = json.dumps(prediction_dict)
logger.info("jsonline: {}".format(jsonline))
predicted_classes.append(jsonline)
predicted_classes_jsonlines = "\n".join(predicted_classes)
return predicted_classes_jsonlines
def output_fn(outputs, accept="application/jsonlines"):
return outputs, accept
```
### test.py
```python
>>> from inference_nsmc import model_fn, input_fn, predict_fn, output_fn
>>> with open('samples/nsmc.txt', mode='rb') as file:
>>> model_input_data = file.read()
>>> model = model_fn()
>>> transformed_inputs = input_fn(model_input_data)
>>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model)
>>> model_outputs = output_fn(predicted_classes_jsonlines)
>>> print(model_outputs[0])
[{inference_nsmc.py:47} INFO - input text: 이 영화는 최고의 영화입니다
[{inference_nsmc.py:47} INFO - input text: 최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다
[{inference_nsmc.py:77} INFO - predicted_class: Pos
[{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Pos", "score": 0.9619030952453613}
[{inference_nsmc.py:77} INFO - predicted_class: Neg
[{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Neg", "score": 0.9994170665740967}
{"predicted_label": "Pos", "score": 0.9619030952453613}
{"predicted_label": "Neg", "score": 0.9994170665740967}
```
### Sample data (samples/nsmc.txt)
```
{"text": ["이 영화는 최고의 영화입니다"]}
{"text": ["최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다"]}
```
## References
- KoELECTRA: https://github.com/monologg/KoELECTRA
- Naver Sentiment Movie Corpus Dataset: https://github.com/e9t/nsmc |
dbmdz/electra-base-turkish-cased-generator | d743f2f14112ced2d7ecd9cd3a6eb623b67be35c | 2020-05-12T11:54:58.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/electra-base-turkish-cased-generator | 26 | null | transformers | 7,537 | Entry not found |
dropout05/t5-tiny | a078917e9be9c9c653ddc8397b5a61c1cc0a1012 | 2022-02-02T19:11:43.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | dropout05 | null | dropout05/t5-tiny | 26 | null | transformers | 7,538 | ---
license: apache-2.0
---
|
eunjin/kogpt2-finetuned-wellness | d8f79be7e2971828a2a269453927649c8ce0d6dd | 2021-06-10T12:32:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | eunjin | null | eunjin/kogpt2-finetuned-wellness | 26 | null | transformers | 7,539 | * This model fine-tunes skt/kogpt2-base-v2 on Korean wellness and everyday chatbot data.
* It can be used directly in similar mental-health counseling domains.
* Please see the GitHub repository: https://github.com/eunjiinkim/WellnessChatbot |
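A minimal generation sketch (not part of the original description). The Korean prompt ("요즘 너무 불안해요", roughly "I feel very anxious these days") is illustrative, and the prompt format may need to match the fine-tuning format described in the GitHub repository:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# If the fine-tuned repository does not ship a tokenizer, load it from "skt/kogpt2-base-v2" instead
tokenizer = AutoTokenizer.from_pretrained("eunjin/kogpt2-finetuned-wellness")
model = AutoModelForCausalLM.from_pretrained("eunjin/kogpt2-finetuned-wellness")

input_ids = tokenizer.encode("요즘 너무 불안해요", return_tensors="pt")
output = model.generate(input_ids, max_length=64, do_sample=True, top_k=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```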
ffsouza/tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro | 050c773cc8312ff52f0780ed148623cc63d00c79 | 2021-11-30T16:02:14.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"dataset:wmt16_en_ro_pre_processed",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ffsouza | null | ffsouza/tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro | 26 | null | transformers | 7,540 | ---
tags:
- generated_from_trainer
datasets:
- wmt16_en_ro_pre_processed
metrics:
- bleu
model-index:
- name: tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16_en_ro_pre_processed
type: wmt16_en_ro_pre_processed
args: enro
metrics:
- name: Bleu
type: bleu
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4656
- Bleu: 0.0
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 8.2268 | 1.0 | 76290 | 8.4656 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
gfdgdfgdg/arap_qa_bert_large_v2 | ecda3a63abebf8bf9df8b8369037996bf910f8c9 | 2021-08-09T12:52:24.000Z | [
"pytorch",
"bert",
"question-answering",
"ar",
"transformers",
"autotrain_compatible"
] | question-answering | false | gfdgdfgdg | null | gfdgdfgdg/arap_qa_bert_large_v2 | 26 | null | transformers | 7,541 | ---
language:
- ar
widget:
- text: "أين يعيش محمد ؟"
context: "اسمي محمد وأنا أعيش في سوريا"
- text: "ما العدد الذري للهيدروجين ؟"
context: "الهيدروجين هو عنصر كيميائي عدده الذري 1 ، وهو غاز عديم الرائحة واللون وهو سريع الاشتعال"
- text: "ما خواص الهيدروجين ؟"
context: "الهيدروجين هو عنصر كيميائي عدده الذري 1 ، وهو غاز عديم الرائحة واللون وهو سريع الاشتعال"
---
|
google/t5-11b-ssm-wq | 91862905ed9515c5e86f1d5dfcc2c529212ecdb5 | 2020-12-07T08:46:12.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:web_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-11b-ssm-wq | 26 | 1 | transformers | 7,542 | ---
language: en
datasets:
- c4
- wikipedia
- web_questions
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Web Questions (WQ)](https://huggingface.co/datasets/web_questions).
**Note**: The model was fine-tuned on 100% of the train splits of [Web Questions (WQ)](https://huggingface.co/datasets/web_questions) for 10k steps.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Web Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-wq**|**44.7**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-wq|43.5|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-wq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-wq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
huggingtweets/14werewolfvevo | e109cae8b231744821655e7a2ea9adc36c2cdb52 | 2021-05-21T16:28:48.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/14werewolfvevo | 26 | null | transformers | 7,543 | ---
language: en
thumbnail: https://www.huggingtweets.com/14werewolfvevo/1617769919321/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1343113335882063873/mITxI5OI_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">SIKA MODE | BLM 🤖 AI Bot </div>
<div style="font-size: 15px">@14werewolfvevo bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@14werewolfvevo's tweets](https://twitter.com/14werewolfvevo).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 170 |
| Short tweets | 798 |
| Tweets kept | 2261 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ymsdw3a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @14werewolfvevo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1iypm80s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1iypm80s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/14werewolfvevo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/davidgoggins | 122f2d567287b77bfa57a6e29239934505404315 | 2021-05-22T00:53:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/davidgoggins | 26 | null | transformers | 7,544 | ---
language: en
thumbnail: https://www.huggingtweets.com/davidgoggins/1603830361250/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/792165528752140288/liCCmoI2_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">David Goggins 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@davidgoggins bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@davidgoggins's tweets](https://twitter.com/davidgoggins).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>557</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>10</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>75</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>472</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3bgqr5vh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @davidgoggins's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/13i4mcyp) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/13i4mcyp/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/davidgoggins'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
huggingtweets/jschlatt | 878fb0fe0d8e787668214110f723d0c186fad9c3 | 2021-09-23T19:13:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/jschlatt | 26 | null | transformers | 7,545 | ---
language: en
thumbnail: https://www.huggingtweets.com/jschlatt/1632424426297/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1104281298967904257/KuDWZQfF_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Schlatt</div>
<div style="text-align: center; font-size: 14px;">@jschlatt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Schlatt.
| Data | Schlatt |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 3 |
| Short tweets | 1207 |
| Tweets kept | 2040 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ad6fl7e4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jschlatt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24kxtuwd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24kxtuwd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jschlatt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/marsajal | 81cf502acb44411e859f7e6ef7da1775e4fc19df | 2022-07-07T09:42:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/marsajal | 26 | null | transformers | 7,546 | ---
language: en
thumbnail: http://www.huggingtweets.com/marsajal/1657186931820/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1463196823728771079/wZc0m7cd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ajeng🦦</div>
<div style="text-align: center; font-size: 14px;">@marsajal</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ajeng🦦.
| Data | ajeng🦦 |
| --- | --- |
| Tweets downloaded | 214 |
| Retweets | 37 |
| Short tweets | 41 |
| Tweets kept | 136 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kdiymty/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marsajal's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lfk0v9ey) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lfk0v9ey/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/marsajal')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sexycuckolding | d31fe2d73e60b70cf63dd5326c88631aba96a6f4 | 2021-08-14T12:11:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/sexycuckolding | 26 | null | transformers | 7,547 | ---
language: en
thumbnail: https://www.huggingtweets.com/sexycuckolding/1628943086648/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1392455809330819072/POjhVAU1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cuckolding (female perspective)</div>
<div style="text-align: center; font-size: 14px;">@sexycuckolding</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cuckolding (female perspective).
| Data | Cuckolding (female perspective) |
| --- | --- |
| Tweets downloaded | 2651 |
| Retweets | 364 |
| Short tweets | 311 |
| Tweets kept | 1976 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/120lf3ey/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sexycuckolding's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gmuegp8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gmuegp8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sexycuckolding')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/starbannergames | 9252d512ad61e38345affc7583503e0eb8fb6b4f | 2021-05-22T23:54:53.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/starbannergames | 26 | null | transformers | 7,548 | ---
language: en
thumbnail: https://www.huggingtweets.com/starbannergames/1616902434636/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364669962351243273/0wP1cOJ4_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Nanda //// Star Banner Games 🤖 AI Bot </div>
<div style="font-size: 15px">@starbannergames bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@starbannergames's tweets](https://twitter.com/starbannergames).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 990 |
| Retweets | 134 |
| Short tweets | 97 |
| Tweets kept | 759 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39zshs8e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @starbannergames's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/292aokzw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/292aokzw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/starbannergames')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ylecun | 3575ada0ee67c8e05347c9f043fe2fa99722d57b | 2021-05-23T05:03:08.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/ylecun | 26 | null | transformers | 7,549 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2387565623/7gew8nz1z7ik1ch148so_400x400.jpeg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Yann LeCun 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@ylecun bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@ylecun's tweets](https://twitter.com/ylecun).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3230</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>968</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>245</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2017</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/3a9fwpf1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ylecun's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/avykhi3y) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/avykhi3y/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/ylecun'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
<!--- random size file --> |
ishan/distilbert-base-uncased-mnli | 5b5436f6f59086b00ac829afecc16d1bd926cbfb | 2020-08-21T10:23:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:MNLI",
"arxiv:1810.04805",
"transformers"
] | text-classification | false | ishan | null | ishan/distilbert-base-uncased-mnli | 26 | null | transformers | 7,550 | ---
language: en
thumbnail:
tags:
- pytorch
- text-classification
datasets:
- MNLI
---
# distilbert-base-uncased finetuned on MNLI
## Model Details and Training Data
We used the pretrained [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model and fine-tuned it on the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset.
The training parameters were kept the same as in [Devlin et al., 2019](https://arxiv.org/abs/1810.04805) (learning rate = 2e-5, training epochs = 3, max_sequence_len = 128 and batch_size = 32).
## Evaluation Results
The evaluation results are mentioned in the table below.
| Test Corpus | Accuracy |
|:---:|:---------:|
| Matched | 0.8223 |
| Mismatched | 0.8216 |
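## How to use
A minimal inference sketch is shown below; the example premise/hypothesis pair is arbitrary, and the label names should be checked against `model.config.id2label`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ishan/distilbert-base-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# MNLI models score a (premise, hypothesis) sentence pair
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```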
|
jaketae/hifigan-lj-v1 | 85caaf4ed15cfb83ba79a994a2266aa892645495 | 2022-02-23T23:22:01.000Z | [
"pytorch",
"hifigan",
"en",
"dataset:ljspeech",
"arxiv:2010.05646",
"transformers",
"audio",
"text-to-speech"
] | text-to-speech | false | jaketae | null | jaketae/hifigan-lj-v1 | 26 | null | transformers | 7,551 | ---
language: en
datasets:
- ljspeech
tags:
- audio
- text-to-speech
---
# HiFi-GAN
[HiFi-GAN](https://arxiv.org/abs/2010.05646) vocoder trained on the [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/). The modeling code is based on the [official implementation](https://github.com/jik876/hifi-gan) and the [fairseq adaptation](https://github.com/pytorch/fairseq).
## Usage
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("jaketae/hifigan-lj-v1", trust_remote_code=True)
```
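The forward signature of this remote-code vocoder is not documented here, so the following inference sketch is an assumption: it presumes the generator accepts a mel-spectrogram tensor of shape `(batch, n_mels, frames)` with 80 mel bins (as in the official HiFi-GAN LJ Speech config) and returns a waveform tensor. Check the modeling code in the repository before relying on it.
```python
# Hypothetical inference sketch -- the input shape and output format are assumptions.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("jaketae/hifigan-lj-v1", trust_remote_code=True)
model.eval()

mel = torch.randn(1, 80, 200)  # dummy 80-bin mel spectrogram with 200 frames
with torch.no_grad():
    waveform = model(mel)
print(waveform.shape)
```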
|
justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets | 5d61a1b0771e7816cb449f526c93f554ba632926 | 2021-12-12T20:00:43.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | justinqbui | null | justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets | 26 | null | transformers | 7,552 | ---
tags:
- generated_from_trainer
model-index:
- name: bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets
This model is a further pre-trained version of [vinai/bertweet-covid19-base-uncased](https://huggingface.co/vinai/bertweet-covid19-base-uncased) on masked language modeling using [a kaggle dataset](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets) with tweets up until early December.
It achieves the following results on the evaluation set (15% from the dataset randomly selected to serve as a test set):
- Loss: 1.5089
- Perplexity: 4.64
To try the model, use the hosted inference API.
Alternatively, to run it locally:
```
from transformers import pipeline
model = "justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets"
pipe = pipeline("fill-mask", model = model)
seq = "covid vaccines are <mask> and effective"
pipe(seq)
```
## Model description
This model is a further pre-trained version of BERTweet; both follow the pre-training objectives described in the [RoBERTa paper](https://arxiv.org/pdf/1907.11692.pdf). While BERTweet was trained only on 23M tweets collected up to September 2020, this model was further pre-trained on 300k tweets containing #CovidVaccine.
The tokenizer requires the emoji library to be installed.
```
!pip install nltk emoji
```
## Intended uses & limitations
The intended use of this model is for fine-tuning on a downstream task on tasks that are closely related to covid and covid vaccines. This model has many potential biases and limitations, since the model is trained on public tweets, it is bound to recreate biases that people tweet.
In order to load the model and tokenizer, run
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets")
model = AutoModelForMaskedLM.from_pretrained("justinqbui/bertweet-covid19-base-uncased-pretraining-covid-vaccine-tweets")
```
## Training and evaluation data
This model was further pre-trained on 300k tweets containing #covidvaccines from this [kaggle dataset](https://www.kaggle.com/kaushiksuresh147/covidvaccine-tweets). The evaluation set was 15% of the tweets that were held out from the training data.
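As a rough illustration, a 15% hold-out split like the one described above can be produced with the `datasets` library; the file name and loading method below are assumptions, not the exact preprocessing that was used.
```python
from datasets import load_dataset

# Hypothetical example: load the downloaded Kaggle CSV and hold out 15% for evaluation
dataset = load_dataset("csv", data_files="covidvaccine_tweets.csv")["train"]
split = dataset.train_test_split(test_size=0.15, seed=42)
train_ds, eval_ds = split["train"], split["test"]
print(len(train_ds), len(eval_ds))
```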
## Training procedure
See the training notebook found [here]().
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5775 | 1.0 | 8931 | 1.5852 |
| 1.5715 | 2.0 | 17862 | 1.5701 |
| 1.5394 | 3.0 | 26793 | 1.5089 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kmfoda/wav2vec2-large-xlsr-arabic | cd5511440ff945978f812dc85c8c410e9ca12cdb | 2021-07-06T09:45:10.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | kmfoda | null | kmfoda/wav2vec2-large-xlsr-arabic | 26 | null | transformers | 7,553 | ---
language: ar
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Arabic by Othmane Rifki
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 46.77
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
test_dataset = test_dataset.map(prepare_example)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\؟\_\؛\ـ\—]'
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
test_dataset = test_dataset.map(prepare_example)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.53
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://huggingface.co/kmfoda/wav2vec2-large-xlsr-arabic/tree/main) |
kuppuluri/telugu_bertu_tydiqa | b67e93cd5ae0fb5165ca2ed88023cf66d898963f | 2021-12-02T18:15:25.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | kuppuluri | null | kuppuluri/telugu_bertu_tydiqa | 26 | null | transformers | 7,554 | # Telugu Question-Answering model trained on Tydiqa dataset from Google
#### How to use
Use the script below from a Python terminal, as the web inference interface has a few encoding issues for Telugu.
```python
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer

model_name = "kuppuluri/telugu_bertu_tydiqa"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name,
                                          clean_text=False,
                                          handle_chinese_chars=False,
                                          strip_accents=False,
                                          wordpieces_prefix='##')
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)

# question and context should be Telugu strings supplied by the caller
result = nlp({'question': question, 'context': context})
```
## Training data
I used Tydiqa Telugu data from Google https://github.com/google-research-datasets/tydiqa
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
|
maroo93/squad2.0 | 2f0cb49fb8a12dfa44fd52589874eaacb8a45dfd | 2021-05-19T23:09:45.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | maroo93 | null | maroo93/squad2.0 | 26 | null | transformers | 7,555 | Entry not found |
mlcorelib/debertav2-base-uncased | 55519d4c151b1a15fd62273a084a7313a251e27e | 2021-05-01T12:53:51.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | mlcorelib | null | mlcorelib/debertav2-base-uncased | 26 | null | transformers | 7,556 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a minimal code sketch of the scheme appears after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
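For illustration, here is a minimal sketch of that 80/10/10 scheme applied to a batch of token ids (simplified: it ignores special tokens and operates on single tokens rather than whole words):
```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Select 15% of tokens; of those, 80% become [MASK], 10% become a random
    token and 10% are left unchanged. Returns masked inputs and MLM labels."""
    labels = input_ids.clone()
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100  # loss is only computed on the selected positions

    # 80% of the selected tokens -> [MASK]
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = mask_token_id

    # half of the remaining 20% (i.e. 10% overall) -> random token
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    random_tokens = torch.randint(vocab_size, labels.shape, dtype=torch.long)
    input_ids[randomized] = random_tokens[randomized]

    # the final 10% are kept as-is
    return input_ids, labels
```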
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
monologg/koelectra-v3-klue-sts | cf2810bfb9a91714e9c3b20dfa171ef9adf77770 | 2022-01-25T09:13:15.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | monologg | null | monologg/koelectra-v3-klue-sts | 26 | null | transformers | 7,557 | Entry not found |
mustapha/distilgpt2-finetuned-wikitext2 | 76f63a314a775840ffaaa10ee03e0e615a386388 | 2021-11-30T09:52:12.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | mustapha | null | mustapha/distilgpt2-finetuned-wikitext2 | 26 | 1 | transformers | 7,558 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
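The reported evaluation loss of 3.6424 corresponds to a perplexity of roughly exp(3.6424) ≈ 38. A minimal generation sketch (the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mustapha/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_length=50, num_return_sequences=1))
```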
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
patrickvonplaten/sew-d-small-100k-timit | 3ba41fac89042fbac19b762eb3cbc42db3703e16 | 2021-10-27T17:15:26.000Z | [
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/sew-d-small-100k-timit | 26 | null | transformers | 7,559 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-timit
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7541
- Wer: 0.8061
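A minimal transcription sketch with the Auto classes is shown below; it assumes the checkpoint ships a CTC processor and uses a small public test clip as the example input.
```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForCTC

model_id = "patrickvonplaten/sew-d-small-100k-timit"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# any 16 kHz speech array works; here we grab one sample from a dummy test set
sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]
inputs = processor(sample["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```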
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2068 | 0.69 | 100 | 4.0802 | 1.0 |
| 2.9805 | 1.38 | 200 | 2.9792 | 1.0 |
| 2.9781 | 2.07 | 300 | 2.9408 | 1.0 |
| 2.9655 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8953 | 3.45 | 500 | 2.8775 | 1.0 |
| 2.7718 | 4.14 | 600 | 2.7787 | 1.0 |
| 2.6711 | 4.83 | 700 | 2.6401 | 0.9786 |
| 2.6403 | 5.52 | 800 | 2.5435 | 1.0392 |
| 2.4052 | 6.21 | 900 | 2.4580 | 1.0706 |
| 2.1708 | 6.9 | 1000 | 2.2800 | 1.0090 |
| 2.2555 | 7.59 | 1100 | 2.1493 | 0.9579 |
| 2.3673 | 8.28 | 1200 | 2.0709 | 0.9051 |
| 2.091 | 8.97 | 1300 | 2.0258 | 0.8926 |
| 1.8433 | 9.66 | 1400 | 1.9645 | 0.8243 |
| 1.6824 | 10.34 | 1500 | 1.9211 | 0.8707 |
| 2.2282 | 11.03 | 1600 | 1.8914 | 0.8695 |
| 1.9027 | 11.72 | 1700 | 1.8718 | 0.8343 |
| 1.6303 | 12.41 | 1800 | 1.8646 | 0.8232 |
| 1.648 | 13.1 | 1900 | 1.8297 | 0.8177 |
| 2.0429 | 13.79 | 2000 | 1.8127 | 0.8642 |
| 1.8833 | 14.48 | 2100 | 1.8005 | 0.8307 |
| 1.5996 | 15.17 | 2200 | 1.7926 | 0.8467 |
| 1.4876 | 15.86 | 2300 | 1.7795 | 0.8341 |
| 1.8925 | 16.55 | 2400 | 1.7716 | 0.8199 |
| 1.814 | 17.24 | 2500 | 1.7846 | 0.8086 |
| 1.536 | 17.93 | 2600 | 1.7655 | 0.8019 |
| 1.4476 | 18.62 | 2700 | 1.7599 | 0.8070 |
| 1.7629 | 19.31 | 2800 | 1.7589 | 0.8119 |
| 1.7646 | 20.0 | 2900 | 1.7541 | 0.8061 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
rajratnpranesh/DCS_sanskrit_bert | 69b6d784189fdd3176e2087303afaee66e828eda | 2021-05-20T03:52:51.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | rajratnpranesh | null | rajratnpranesh/DCS_sanskrit_bert | 26 | null | transformers | 7,560 | Entry not found |
shahrukhx01/roberta-base-squad2-boolq-baseline | ad3bde67e7d2489e15d519fadbeeae733ee91659 | 2021-09-28T18:18:26.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | shahrukhx01 | null | shahrukhx01/roberta-base-squad2-boolq-baseline | 26 | null | transformers | 7,561 | ## Multiple Prediction Heads
* ExtractiveQA Head
* Three Class Classification Head, classes => (yes, no, extra_qa) to answer binary questions or direct to ExtractiveQA Head
## BoolQ Validation dataset Evaluation: <br/>
support => 3270 <br/>
accuracy => 0.73 <br/>
macro f1 => 0.71
## SQuAD Validation dataset Evaluation: <br/>
eval_HasAns_exact = 78.0196 <br/>
eval_HasAns_f1 = 84.0327 <br/>
eval_HasAns_total = 5928 <br/>
eval_NoAns_exact = 81.8167 <br/>
eval_NoAns_f1 = 81.8167 <br/>
eval_NoAns_total = 5945 <br/>
eval_best_exact = 79.9208 <br/>
eval_best_f1 = 82.9231 <br/>
eval_exact = 79.9208 <br/>
eval_f1 = 82.9231 <br/>
eval_samples = 12165 <br/>
eval_total = 11873
## Usage in transformers
Import the script from [here](https://huggingface.co/shahrukhx01/roberta-base-squad2-boolq-baseline/blob/main/multitask_model.py)
```python
from multitask_model import RobertaForMultitaskQA
from transformers import RobertaTokenizerFast
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = RobertaForMultitaskQA.from_pretrained(
"shahrukhx01/roberta-base-squad2-boolq-baseline",
task_labels_map={"squad_v2": 2, "boolq": 3},
).to(device)
tokenizer = RobertaTokenizerFast.from_pretrained("shahrukhx01/roberta-base-squad2-boolq-baseline")
``` |
sultan/BioM-ELECTRA-Large-Generator | 4be3b63c6e32aaafeed9e1877a8f8b683a3a56d0 | 2021-05-24T21:07:58.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | sultan | null | sultan/BioM-ELECTRA-Large-Generator | 26 | null | transformers | 7,562 | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.
# Model Description
This model was pre-trained only on PubMed abstracts, with a biomedical-domain vocabulary, for 434K steps with a batch size of 4096 on a TPUv3-512 unit.
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints.
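# Usage Example
Since this checkpoint is the ELECTRA generator (a masked language model), it can be queried with the fill-mask pipeline; the example sentence below is only illustrative.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="sultan/BioM-ELECTRA-Large-Generator")
print(unmasker("The patient was treated with [MASK] for a bacterial infection."))
```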
# Acknowledgment
We would like to acknowledge the support of the TensorFlow Research Cloud (TFRC) team for granting us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
tbrasil/classificador_de_atendimento_3_classes_v1.1 | c7825e99fb9b49e3f7b6ef33f020f799ac24568d | 2021-07-26T17:26:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tbrasil | null | tbrasil/classificador_de_atendimento_3_classes_v1.1 | 26 | null | transformers | 7,563 | Entry not found |
yoshitomo-matsubara/bert-large-uncased-qnli | 6bf3fa14095da060773362e19be89bd7db46b4ca | 2021-05-29T21:33:19.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qnli",
"transformers",
"qnli",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-large-uncased-qnli | 26 | null | transformers | 7,564 | ---
language: en
tags:
- bert
- qnli
- glue
- torchdistill
license: apache-2.0
datasets:
- qnli
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on QNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qnli/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
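A minimal inference sketch (the question/sentence pair is arbitrary, and the label names depend on the model config; check `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "yoshitomo-matsubara/bert-large-uncased-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "What causes rain?"
sentence = "Rain is produced when cloud droplets become heavy enough to fall."
inputs = tokenizer(question, sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred_id, pred_id))
```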
|
gustavecortal/gpt-j-fr-covid-news | 397ecac9686c226e98af93c9b986e75be3510905 | 2022-03-10T10:05:27.000Z | [
"pytorch",
"gptj",
"text-generation",
"fr",
"dataset:gustavecortal/fr_covid_news",
"transformers",
"causal-lm",
"license:mit"
] | text-generation | false | gustavecortal | null | gustavecortal/gpt-j-fr-covid-news | 26 | 1 | transformers | 7,565 | ---
language: fr
license: mit
tags:
- causal-lm
- fr
datasets:
- gustavecortal/fr_covid_news
---
### GPT-J COVID-19 French News with 8-bit weights
This is a version of Cedille's GPT-J ([fr-boris](https://huggingface.co/gustavecortal/fr-boris-8bit)) with 6 billion parameters, fine-tuned on the [COVID-19 French News dataset](https://huggingface.co/datasets/gustavecortal/fr_covid_news) to generate French headlines related to COVID-19.
You can run the model in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti), since the model has 8-bit weights. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
Here's how to run it: [](https://colab.research.google.com/drive/1lMja-CPc0vm5_-gXNXAWU-9c0nom7vZ9)
This model can be easily loaded using the `GPTJForCausalLM` functionality:
```python
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained("gustavecortal/gpt-j-fr-covid-news")
```
Remember, you have to Monkey-Patch the model before loading it (see Colab above).
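Once the patch is applied, generation follows the usual causal-LM pattern. The sketch below assumes the repository also ships its tokenizer files and uses an arbitrary prompt:
```python
# Sketch only: assumes the 8-bit monkey-patch from the Colab has already been applied.
from transformers import AutoTokenizer, GPTJForCausalLM

model_id = "gustavecortal/gpt-j-fr-covid-news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTJForCausalLM.from_pretrained(model_id)

prompt = "Covid-19 :"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```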
## One thousand AI-generated French headlines related to COVID-19
How not to be disoriented in a pandemic era when faced with an immense flow of information? [This page](https://gustavecortal.com/project/covid) features one thousand AI-generated French headlines related to COVID-19.
## fr-boris
Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase.
Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset.
## Links
* [Gustave Cortal](https://twitter.com/gustavecortal) |
aicryptogroup/distill-xlm-mrc | 41ba30c18793cb527db62b68b47c9e0881e25a4a | 2022-04-26T02:40:42.000Z | [
"pytorch",
"roberta",
"question-answering",
"vi",
"vn",
"en",
"dataset:squad",
"transformers",
"autotrain_compatible"
] | question-answering | false | aicryptogroup | null | aicryptogroup/distill-xlm-mrc | 26 | null | transformers | 7,566 | ---
language:
- vi
- vn
- en
tags:
- question-answering
- pytorch
datasets:
- squad
pipeline_tag: question-answering
metrics:
- squad
widget:
- text: "what is the capital of Vietnam ?"
context: "Keeping an ageless charm through centuries, Hanoi - the capital of Vietnam is famous not only for the Old Quarter with narrow and crowded streets but also for the nostalgic feeling that it brings. While Saigon is a young and modern city, the ancient Hanoi is still a true beholder of history."
---
```python
from transformers import pipeline
model_checkpoint = "aicryptogroup/distill-xlm-mrc"
nlp = pipeline('question-answering', model=model_checkpoint,
tokenizer=model_checkpoint)
QA_input = {
'question': "what is the capital of Vietnam",
'context': "Keeping an ageless charm through centuries, Hanoi - the capital of Vietnam is famous not only for the Old Quarter with narrow and crowded streets but also for the nostalgic feeling that it brings. While Saigon is a young and modern city, the ancient Hanoi is still a true beholder of history."
}
res = nlp(QA_input)
print('pipeline: {}'.format(res))
```
 |
BigSalmon/MASKGPT2 | 273e2105628f5a4ef264b86ee582bbd088c705a6 | 2022-03-23T19:26:53.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/MASKGPT2 | 26 | null | transformers | 7,567 | ```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
``` |
gastronomia-para-to2/gastronomia_para_to2 | 71f14c3b2d495242c2d94d31a2714b6589b7c1c0 | 2022-06-23T14:55:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"es",
"transformers",
"generated_from_trainer",
"recipe-generation"
] | text-generation | false | gastronomia-para-to2 | null | gastronomia-para-to2/gastronomia_para_to2 | 26 | 1 | transformers | 7,568 | ---
language:
- es
tags:
- generated_from_trainer
- recipe-generation
widget:
- text: "<RECIPE_START> <INPUT_START> salmón <NEXT_INPUT> zumo de naranja <NEXT_INPUT> aceite de oliva <NEXT_INPUT> sal <NEXT_INPUT> pimienta <INPUT_END> <INGR_START>"
- text: "<RECIPE_START> <INPUT_START> harina <NEXT_INPUT> azúcar <NEXT_INPUT> huevos <NEXT_INPUT> chocolate <NEXT_INPUT> levadura Royal <INPUT_END> <INGR_START>"
inference:
parameters:
top_k: 50
top_p: 0.92
do_sample: True
num_return_sequences: 3
max_new_tokens: 100
---
# Model description
This model is a fine-tuned version of [flax-community/gpt-2-spanish](https://huggingface.co/flax-community/gpt-2-spanish) on a custom dataset (not publicly available). The dataset consists of data crawled from 3 Spanish cooking websites and contains approximately 50,000 recipes.
It achieves the following results on the evaluation set:
- Loss: 0.5796
## Contributors
- Julián Cendrero ([jucendrero](https://huggingface.co/jucendrero))
- Silvia Duque ([silBERTa](https://huggingface.co/silBERTa))
## How to use it
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_checkpoint = 'gastronomia-para-to2/gastronomia_para_to2'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForCausalLM.from_pretrained(model_checkpoint)
```
The tokenizer makes use of the following special tokens to indicate the structure of the recipe:
```python
special_tokens = [
'<INPUT_START>',
'<NEXT_INPUT>',
'<INPUT_END>',
'<TITLE_START>',
'<TITLE_END>',
'<INGR_START>',
'<NEXT_INGR>',
'<INGR_END>',
'<INSTR_START>',
'<NEXT_INSTR>',
'<INSTR_END>',
'<RECIPE_START>',
'<RECIPE_END>']
```
The input should be of the form:
```python
<RECIPE_START> <INPUT_START> ingredient_1 <NEXT_INPUT> ingredient_2 <NEXT_INPUT> ... <NEXT_INPUT> ingredient_n <INPUT_END> <INGR_START>
```
We are using the following configuration to generate recipes, but feel free to change parameters as needed:
```python
input = "<RECIPE_START> <INPUT_START> salmón <NEXT_INPUT> zumo de naranja <NEXT_INPUT> aceite de oliva <NEXT_INPUT> sal <NEXT_INPUT> pimienta <INPUT_END> <INGR_START>"  # a prompt following the format shown above
tokenized_input = tokenizer(input, return_tensors='pt')
output = model.generate(**tokenized_input,
max_length=600,
do_sample=True,
top_p=0.92,
top_k=50,
num_return_sequences=3)
pre_output = tokenizer.decode(output[0], skip_special_tokens=False)
```
The recipe ends where the \<RECIPE_END\> special token appears for the first time.
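A minimal post-processing sketch (it assumes the `pre_output` string produced by the snippet above):
```python
# Keep only the text up to the first <RECIPE_END> token and drop anything generated after it.
end_token = '<RECIPE_END>'
recipe = pre_output.split(end_token)[0] + end_token
print(recipe)
```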
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6213 | 1.0 | 5897 | 0.6214 |
| 0.5905 | 2.0 | 11794 | 0.5995 |
| 0.5777 | 3.0 | 17691 | 0.5893 |
| 0.574 | 4.0 | 23588 | 0.5837 |
| 0.5553 | 5.0 | 29485 | 0.5807 |
| 0.5647 | 6.0 | 35382 | 0.5796 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
## References
The list of special tokens used for generation recipe structure has been taken from:
[RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation](https://www.aclweb.org/anthology/2020.inlg-1.4.pdf).
|
azwierzc/visualbert-vqa-pl-v2 | 7b22112d72aa33d7a8c6040a4a8405f3df987163 | 2022-04-08T17:27:22.000Z | [
"pytorch",
"visual_bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | azwierzc | null | azwierzc/visualbert-vqa-pl-v2 | 26 | null | transformers | 7,569 | Entry not found |
agdsga/nezha-chinese-base-finetuned-product | df319d6a680c2dc1f9f83c81eeed8b471fee13fa | 2022-04-08T06:12:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | agdsga | null | agdsga/nezha-chinese-base-finetuned-product | 26 | null | transformers | 7,570 | ---
tags:
- generated_from_trainer
model-index:
- name: nezha-chinese-base-finetuned-product
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nezha-chinese-base-finetuned-product
This model is a fine-tuned version of [peterchou/nezha-chinese-base](https://huggingface.co/peterchou/nezha-chinese-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0309 | 1.0 | 6473 | 0.0037 |
| 0.0033 | 2.0 | 12946 | 0.0006 |
| 0.0017 | 3.0 | 19419 | 0.0004 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
vocab-transformers/distilbert-tokenizer_256k-MLM_best | bfef0b2f4f40bd88744cf1360ef42a5599c9c215 | 2022-04-11T11:16:06.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | vocab-transformers | null | vocab-transformers/distilbert-tokenizer_256k-MLM_best | 26 | null | transformers | 7,571 | # DistilBERT with 256k token embeddings
This model was initialized with a word2vec token embedding matrix with 256k entries, but these token embeddings were updated during MLM. The word2vec was trained on 100GB data from C4, MSMARCO, News, Wikipedia, S2ORC, for 3 epochs.
Then the model was trained on this dataset with MLM for 1.55M steps (batch size 64). The token embeddings were updated during MLM.
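A minimal fill-mask sketch (the example sentence is an arbitrary illustration; the mask token is read from the tokenizer rather than hard-coded):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vocab-transformers/distilbert-tokenizer_256k-MLM_best")
masked_sentence = f"Paris is the capital of {fill_mask.tokenizer.mask_token} ."
print(fill_mask(masked_sentence))
```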
|
nikhedward/bart-large-cnn-finetuned-multi-news | 6c150c04431d453a41e2492a3a425cee806cf9db | 2022-04-29T15:22:47.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:multi_news",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nikhedward | null | nikhedward/bart-large-cnn-finetuned-multi-news | 26 | null | transformers | 7,572 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-multi-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 42.0423
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-multi-news
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0950
- Rouge1: 42.0423
- Rouge2: 14.8812
- Rougel: 23.3412
- Rougelsum: 36.2613
## Model description
bart-large-cnn fine-tuned on a sample of the multi-news dataset.
## Intended uses & limitations
The intended use of the model is downstream summarization tasks, but input is limited to 1,024 tokens; any longer text is truncated.
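A minimal usage sketch (the placeholder article and the use of default generation settings are assumptions):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="nikhedward/bart-large-cnn-finetuned-multi-news")

long_article = "..."  # concatenated news articles; anything beyond 1,024 tokens is truncated
print(summarizer(long_article, truncation=True)[0]["summary_text"])
```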
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.2037 | 1.0 | 750 | 2.0950 | 42.0423 | 14.8812 | 23.3412 | 36.2613 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Helsinki-NLP/opus-mt-tc-big-sh-en | 052ec88282054d9eddfa0da15222852477182abe | 2022-06-01T13:01:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bs_Latn",
"en",
"hr",
"sh",
"sr_Cyrl",
"sr_Latn",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-sh-en | 26 | null | transformers | 7,573 | ---
language:
- bs_Latn
- en
- hr
- sh
- sr_Cyrl
- sr_Latn
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-sh-en
results:
- task:
name: Translation hrv-eng
type: translation
args: hrv-eng
dataset:
name: flores101-devtest
type: flores_101
args: hrv eng devtest
metrics:
- name: BLEU
type: bleu
value: 37.1
- task:
name: Translation bos_Latn-eng
type: translation
args: bos_Latn-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bos_Latn-eng
metrics:
- name: BLEU
type: bleu
value: 66.5
- task:
name: Translation hbs-eng
type: translation
args: hbs-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hbs-eng
metrics:
- name: BLEU
type: bleu
value: 56.4
- task:
name: Translation hrv-eng
type: translation
args: hrv-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hrv-eng
metrics:
- name: BLEU
type: bleu
value: 58.8
- task:
name: Translation srp_Cyrl-eng
type: translation
args: srp_Cyrl-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Cyrl-eng
metrics:
- name: BLEU
type: bleu
value: 44.7
- task:
name: Translation srp_Latn-eng
type: translation
args: srp_Latn-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Latn-eng
metrics:
- name: BLEU
type: bleu
value: 58.4
---
# opus-mt-tc-big-sh-en
Neural machine translation model for translating from Serbo-Croatian (sh) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): bos_Latn hrv srp_Cyrl srp_Latn
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information released models: [OPUS-MT hbs-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Ispostavilo se da je istina.",
"Ovaj vikend imamo besplatne pozive."
]
model_name = "pytorch-models/opus-mt-tc-big-sh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Turns out it's true.
# We got free calls this weekend.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-sh-en")
print(pipe("Ispostavilo se da je istina."))
# expected output: Turns out it's true.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bos_Latn-eng | tatoeba-test-v2021-08-07 | 0.80010 | 66.5 | 301 | 1826 |
| hbs-eng | tatoeba-test-v2021-08-07 | 0.71744 | 56.4 | 10017 | 68934 |
| hrv-eng | tatoeba-test-v2021-08-07 | 0.73563 | 58.8 | 1480 | 10620 |
| srp_Cyrl-eng | tatoeba-test-v2021-08-07 | 0.68248 | 44.7 | 1580 | 10181 |
| srp_Latn-eng | tatoeba-test-v2021-08-07 | 0.71781 | 58.4 | 6656 | 46307 |
| hrv-eng | flores101-devtest | 0.63948 | 37.1 | 1012 | 24721 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:21:10 EEST 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-hu-en | 11be72afb7594e726e543badd3bd658922afe715 | 2022-06-01T13:01:06.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"hu",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-hu-en | 26 | null | transformers | 7,574 | ---
language:
- en
- hu
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-hu-en
results:
- task:
name: Translation hun-eng
type: translation
args: hun-eng
dataset:
name: flores101-devtest
type: flores_101
args: hun eng devtest
metrics:
- name: BLEU
type: bleu
value: 34.6
- task:
name: Translation hun-eng
type: translation
args: hun-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hun-eng
metrics:
- name: BLEU
type: bleu
value: 50.4
- task:
name: Translation hun-eng
type: translation
args: hun-eng
dataset:
name: newstest2009
type: wmt-2009-news
args: hun-eng
metrics:
- name: BLEU
type: bleu
value: 23.4
---
# opus-mt-tc-big-hu-en
Neural machine translation model for translating from Hungarian (hu) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): hun
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT hun-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Bárcsak ne láttam volna ilyen borzalmas filmet!",
"Iskolában van."
]
model_name = "pytorch-models/opus-mt-tc-big-hu-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# I wish I hadn't seen such a terrible movie.
# She's at school.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-hu-en")
print(pipe("Bárcsak ne láttam volna ilyen borzalmas filmet!"))
# expected output: I wish I hadn't seen such a terrible movie.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| hun-eng | tatoeba-test-v2021-08-07 | 0.66644 | 50.4 | 13037 | 94699 |
| hun-eng | flores101-devtest | 0.61974 | 34.6 | 1012 | 24721 |
| hun-eng | newssyscomb2009 | 0.52563 | 24.7 | 502 | 11818 |
| hun-eng | newstest2009 | 0.51698 | 23.4 | 2525 | 65399 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:33:38 EEST 2022
* port machine: LM0-400-22516.local
|
chenshuangcufe/Bert-job | 8d613690c9d25ac4ab473f598ba6683ac24c3ec2 | 2022-04-21T08:10:11.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | chenshuangcufe | null | chenshuangcufe/Bert-job | 26 | null | transformers | 7,575 | Entry not found |
doc2query/msmarco-vietnamese-mt5-base-v1 | 1ac7b8c530c4dcbce052a0f1b7c2beca48ad21f5 | 2022-04-29T22:06:03.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"vi",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-vietnamese-mt5-base-v1 | 26 | 1 | transformers | 7,576 | ---
language: vi
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (phát âm tiếng Anh: /ˈpaɪθɑːn/) là một ngôn ngữ lập trình bậc cao cho các mục đích lập trình đa năng, do Guido van Rossum tạo ra và lần đầu ra mắt vào năm 1991. Python được thiết kế với ưu điểm mạnh là dễ đọc, dễ học và dễ nhớ. Python là ngôn ngữ có hình thức rất sáng sủa, cấu trúc rõ ràng, thuận tiện cho người mới học lập trình và là ngôn ngữ lập trình dễ học; được dùng rộng rãi trong phát triển trí tuệ nhân tạo. Cấu trúc của Python còn cho phép người sử dụng viết mã lệnh với số lần gõ phím tối thiểu. Vào tháng 7 năm 2018, van Rossum đã từ chức lãnh đạo trong cộng đồng ngôn ngữ Python sau 30 năm làm việc."
license: apache-2.0
---
# doc2query/msmarco-vietnamese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-vietnamese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (phát âm tiếng Anh: /ˈpaɪθɑːn/) là một ngôn ngữ lập trình bậc cao cho các mục đích lập trình đa năng, do Guido van Rossum tạo ra và lần đầu ra mắt vào năm 1991. Python được thiết kế với ưu điểm mạnh là dễ đọc, dễ học và dễ nhớ. Python là ngôn ngữ có hình thức rất sáng sủa, cấu trúc rõ ràng, thuận tiện cho người mới học lập trình và là ngôn ngữ lập trình dễ học; được dùng rộng rãi trong phát triển trí tuệ nhân tạo. Cấu trúc của Python còn cho phép người sử dụng viết mã lệnh với số lần gõ phím tối thiểu. Vào tháng 7 năm 2018, van Rossum đã từ chức lãnh đạo trong cộng đồng ngôn ngữ Python sau 30 năm làm việc."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
# Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model fine-tuned [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a (query, passage) from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
TehranNLP-org/electra-base-sst2 | 53949b15b42995131562678a342f48d2280f3dcd | 2022-05-03T17:00:04.000Z | [
"pytorch",
"electra",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | TehranNLP-org | null | TehranNLP-org/electra-base-sst2 | 26 | null | transformers | 7,577 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SEED0042
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: SST2
type: ''
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9506880733944955
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEED0042
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1754
- Accuracy: 0.9507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: not_parallel
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2105 | 0.2056 | 0.9358 |
| 0.2549 | 2.0 | 4210 | 0.1850 | 0.9438 |
| 0.1162 | 3.0 | 6315 | 0.1754 | 0.9507 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
nbasatish/financial-pegasus | 1bd55e4aea4a0d7b76581c3f3a5ed738d968c909 | 2022-05-01T22:36:58.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | nbasatish | null | nbasatish/financial-pegasus | 26 | null | transformers | 7,578 | ---
license: apache-2.0
---
|
nikitast/multilang-classifier-roberta | 475e27c7ccf628507ea7218b86901ad1ecad7a46 | 2022-07-18T11:34:28.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ru",
"uk",
"be",
"kk",
"az",
"hy",
"ka",
"he",
"en",
"de",
"dataset:open_subtitles",
"dataset:tatoeba",
"dataset:oscar",
"transformers",
"language classification"
] | text-classification | false | nikitast | null | nikitast/multilang-classifier-roberta | 26 | null | transformers | 7,579 | ---
language:
- ru
- uk
- be
- kk
- az
- hy
- ka
- he
- en
- de
tags:
- language classification
datasets:
- open_subtitles
- tatoeba
- oscar
---
# RoBERTa for Multilabel Language Classification
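## Usage
A minimal usage sketch (the language label names come from the model's `id2label` config and are not listed here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="nikitast/multilang-classifier-roberta")
print(classifier(["Привет, как дела?", "Hello, how are you?"]))
```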
## Training
RoBERTa fine-tuned on small parts of Open Subtitles, Oscar and Tatoeba datasets (~9k samples per language).
Implemented heuristic algorithm for multilingual training data creation - https://github.com/n1kstep/lang-classifier
| data source | language |
|-----------------|----------------|
| open_subtitles | ka, he, en, de |
| oscar | be, kk, az, hu |
| tatoeba | ru, uk |
## Validation
The metrics obtained from validation on the another part of dataset (~1k samples per language).
| Training Loss | Validation Loss | F1-Score | Roc Auc | Accuracy | Support |
|---------------|-----------------|----------|----------|----------|---------|
| 0.161500 | 0.110949 | 0.947844 | 0.953939 | 0.762063 | 26858 | |
Mathilda/T5-paraphrasing | 1ffe344dc45b713d3b212ac67ceba7739a1c215f | 2022-05-16T15:40:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | text2text-generation | false | Mathilda | null | Mathilda/T5-paraphrasing | 26 | null | transformers | 7,580 | ---
license: afl-3.0
---
|
FrGes/xlm-roberta-large-finetuned-EUJAV-datasetA | b2d12c1d77878c92576fd890ec40851946258dba | 2022-05-18T11:29:30.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | FrGes | null | FrGes/xlm-roberta-large-finetuned-EUJAV-datasetA | 26 | null | transformers | 7,581 | Fine-tuned model based on
# XLM-RoBERTa (large-sized model)
Data for finetuning:
Italian vaccine stance data: 781 training tweets and 281 evaluation tweets
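A minimal usage sketch (the stance label names are defined in the model's `id2label` config and are not documented here; the example tweet is invented):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="FrGes/xlm-roberta-large-finetuned-EUJAV-datasetA",
)
print(classifier("I vaccini sono sicuri ed efficaci."))
```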
# BibTeX entry and citation info
to be added |
microsoft/cvt-w24-384-22k | a9aa85d4952c0bf1531fdc878b8c04c8cbbb2ec8 | 2022-05-18T17:18:47.000Z | [
"pytorch",
"cvt",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2103.15808",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/cvt-w24-384-22k | 26 | null | transformers | 7,582 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Convolutional Vision Transformer (CvT)
CvT-w24 model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Wu et al. and first released in [this repository](https://github.com/microsoft/CvT).
Disclaimer: The team releasing CvT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, CvtForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-w24-384-22k')
model = CvtForImageClassification.from_pretrained('microsoft/cvt-w24-384-22k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
``` |
fujiki/t5-large-en2ja | 8a3d74abae8d6ff3c4d99e757d1e4da17f419fa3 | 2022-05-21T14:30:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:cc-by-sa-3.0",
"autotrain_compatible"
] | text2text-generation | false | fujiki | null | fujiki/t5-large-en2ja | 26 | null | transformers | 7,583 | ---
license: cc-by-sa-3.0
---
|
ccdv/lsg-bart-base-4096-wcep | 203cea7c0c5a1cff5a721d625ddfa62661e563d1 | 2022-07-25T05:30:19.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:ccdv/WCEP-10",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ccdv | null | ccdv/lsg-bart-base-4096-wcep | 26 | null | transformers | 7,584 | ---
language:
- en
tags:
- summarization
datasets:
- ccdv/WCEP-10
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-wcep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-wcep", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-wcep", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(text, truncation=True, max_length=64, no_repeat_ngram_size=7)
```
# ccdv/lsg-bart-base-4096-wcep
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the [ccdv/WCEP-10 roberta](https://huggingface.co/datasets/ccdv/WCEP-10) dataset. \
It achieves the following results on the test set:
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 256 | 0 | 768 | 46.02 | 24.23 | 37.38 | 38.72 |
| 4096 | Local | 128 | 0 | 384 | 45.43 | 23.86 | 36.94 | 38.30 |
| 4096 | Pooling | 128 | 4 | 644 | 45.36 | 23.61 | 36.75 | 38.06 |
| 4096 | Stride | 128 | 4 | 644 | 45.87 | 24.31 | 37.41 | 38.70 |
| 4096 | Block Stride | 128 | 4 | 644 | 45.78 | 24.16 | 37.20 | 38.48 |
| 4096 | Norm | 128 | 4 | 644 | 45.34 | 23.39 | 36.47 | 37.78 |
| 4096 | LSH | 128 | 4 | 644 | 45.15 | 23.53 | 36.74 | 38.02 |
With smaller block size (lower ressources):
| Length | Sparse Type | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------ |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | Local | 64 | 0 | 192 | 44.48 | 22.98 | 36.20 | 37.52 |
| 4096 | Local | 32 | 0 | 96 | 43.60 | 22.17 | 35.61 | 36.66 |
| 4096 | Pooling | 32 | 4 | 160 | 43.91 | 22.41 | 35.80 | 36.92 |
| 4096 | Stride | 32 | 4 | 160 | 44.62 | 23.11 | 36.32 | 37.53 |
| 4096 | Block Stride | 32 | 4 | 160 | 44.47 | 23.02 | 36.28 | 37.46 |
| 4096 | Norm | 32 | 4 | 160 | 44.45 | 23.03 | 36.10 | 37.33 |
| 4096 | LSH | 32 | 4 | 160 | 43.87 | 22.50 | 35.75 | 36.93 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about ~145 millions parameters (6 encoder layers - 6 decoder layers). \
The model is warm started from BART-base, converted to handle long sequences (encoder only) and fine tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: ccdv/WCEP-10
- dataset_config_name: roberta
- eval_batch_size: 8
- eval_samples: 1022
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 64
- min_length: 0
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
ericntay/distilbert-base-uncased-finetuned-emotion | 6fcf9ab0769e52370ca903119ec5d3e925472d8c | 2022-05-26T16:51:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ericntay | null | ericntay/distilbert-base-uncased-finetuned-emotion | 26 | null | transformers | 7,585 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240722191505606
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2055
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7795 | 1.0 | 250 | 0.2920 | 0.911 | 0.9079 |
| 0.2373 | 2.0 | 500 | 0.2055 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ddobokki/electra-small-sts-cross-encoder | ec59c0d95dbc48687edb445f9e86b2e4c4c39052 | 2022-05-31T07:52:44.000Z | [
"pytorch",
"electra",
"text-classification",
"ko",
"transformers",
"sentence_transformers",
"cross_encoder"
] | text-classification | false | ddobokki | null | ddobokki/electra-small-sts-cross-encoder | 26 | null | transformers | 7,586 | ---
language:
- ko
tags:
- sentence_transformers
- cross_encoder
---
# Example
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('ddobokki/electra-small-sts-cross-encoder')
model.predict(["그녀는 행복해서 웃었다.", "그녀는 웃겨서 눈물이 났다."])
# -> 0.8206561
```
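Scoring several sentence pairs in one call works the same way (the second pair is an invented example):
```python
pairs = [
    ["그녀는 행복해서 웃었다.", "그녀는 웃겨서 눈물이 났다."],
    ["오늘은 날씨가 맑다.", "그는 어제 책을 읽었다."],
]
scores = model.predict(pairs)  # one similarity score per pair
print(scores)
```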
# Dataset
- KorSTS
  - Train
  - Test
- KLUE STS
  - Train
  - Test
# Performance
| Dataset | Pearson corr.|Spearman corr.|
|--|--|--|
| KorSTS(test) + KLUE STS(test) | 0.8528 | 0.8504 |
# TODO
Using KLUE 1.1 train, dev data
|
rifkat/GPTuz | 2a7e6c05772bc155145b37cf904cc88fde2218de | 2022-06-09T09:13:55.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"uz",
"transformers",
"Text Generation",
"PyTorch",
"TensorFlow",
"Transformers",
"mit",
"license:apache-2.0"
] | text-generation | false | rifkat | null | rifkat/GPTuz | 26 | null | transformers | 7,587 | ---
language:
- uz
tags:
- Text Generation
- PyTorch
- TensorFlow
- Transformers
- mit
- uz
- gpt2
license: apache-2.0
widget:
- text: "Covid-19 га қарши эмлаш бошланди,"
example_title: "Namuna 1"
- text: "Суъний интеллект энг ривожланган"
example_title: "Namuna 2"
---
<p><b>GPTuz model.</b>
GPTuz is a state-of-the-art language model for Uzbek, based on the GPT-2 small model.
The model was trained for more than a day on an NVIDIA V100 32GB GPU with 0.53 GB of data from kun.uz, using transfer learning and fine-tuning.
<p><b>How to use</b>
<pre><code class="language-python">
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("rifkat/GPTuz")
model = AutoModelWithLMHead.from_pretrained("rifkat/GPTuz")
tokenizer.model_max_length=1024
</code></pre>
<p><b>Generate a single word</b>
<pre><code class="language-python">
text = "Covid-19 га қарши эмлаш бошланди,"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss, logits = outputs[:2]
predicted_index = torch.argmax(logits[0, -1, :]).item()
predicted_text = tokenizer.decode([predicted_index])
print('input text:', text)
print('predicted text:', predicted_text)
</code></pre>
<p><b>Generate a full sequence</b>
<pre><code class="language-python">
text = "Covid-19 га қарши эмлаш бошланди, "
inputs = tokenizer(text, return_tensors="pt")
sample_outputs = model.generate(inputs.input_ids,
pad_token_id=50256,
do_sample=True,
max_length=50, # set the desired number of tokens
top_k=40,
num_return_sequences=1)
for i, sample_output in enumerate(sample_outputs):
print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist())))
</code></pre>
|
ghadeermobasher/Orignial-BlueBERT-NCBI | 76c755792659af70fe151823537327468666b713 | 2022-06-09T15:18:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Orignial-BlueBERT-NCBI | 26 | null | transformers | 7,588 | Entry not found |
ChainYo/segformer-b1-sidewalk | 571324ba0f685ab8304f1f3a89aec92724711021 | 2022-06-14T16:33:48.000Z | [
"pytorch",
"segformer",
"dataset:segments/sidewalk-semantic",
"arxiv:2105.15203",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
] | image-segmentation | false | ChainYo | null | ChainYo/segformer-b1-sidewalk | 26 | null | transformers | 7,589 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
---
# SegFormer (b1-sized) model fine-tuned on sidewalk-semantic dataset
SegFormer model fine-tuned on segments/sidewalk-semantic at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to run semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor(reduce_labels=True)
model = SegformerForSemanticSegmentation.from_pretrained("ChainYo/segformer-b1-sidewalk")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
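To turn the logits into a per-pixel class map, a minimal post-processing sketch (the interpolation settings here are illustrative assumptions):
```python
import torch

# Upsample the logits to the input image size and take the per-pixel argmax.
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation_map = upsampled_logits.argmax(dim=1)[0]  # (height, width) tensor of class ids
```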
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#). |
Shaier/distilbert-base-uncased-continued_training-medqa | 10889756a9158acb28372809c661a4b98b5e80c4 | 2022-06-28T19:04:13.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Shaier | null | Shaier/distilbert-base-uncased-continued_training-medqa | 26 | null | transformers | 7,590 | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-continued_training-medqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-continued_training-medqa
This model is a fine-tuned version of [Shaier/distilbert-base-uncased-continued_training-medqa](https://huggingface.co/Shaier/distilbert-base-uncased-continued_training-medqa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 220
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 333 | 0.4516 |
| No log | 2.0 | 666 | 0.4277 |
| No log | 3.0 | 999 | 0.3734 |
| No log | 4.0 | 1332 | 0.4083 |
| No log | 5.0 | 1665 | 0.4134 |
| No log | 6.0 | 1998 | 0.5093 |
| No log | 7.0 | 2331 | 0.4639 |
| 0.4564 | 8.0 | 2664 | 0.5132 |
| 0.4564 | 9.0 | 2997 | 0.3483 |
| 0.4564 | 10.0 | 3330 | 0.4174 |
| 0.4564 | 11.0 | 3663 | 0.4975 |
| 0.4564 | 12.0 | 3996 | 0.4030 |
| 0.4564 | 13.0 | 4329 | 0.4476 |
| 0.4564 | 14.0 | 4662 | 0.3692 |
| 0.4564 | 15.0 | 4995 | 0.4474 |
| 0.4533 | 16.0 | 5328 | 0.3289 |
| 0.4533 | 17.0 | 5661 | 0.4647 |
| 0.4533 | 18.0 | 5994 | 0.4873 |
| 0.4533 | 19.0 | 6327 | 0.5323 |
| 0.4533 | 20.0 | 6660 | 0.4273 |
| 0.4533 | 21.0 | 6993 | 0.3426 |
| 0.4533 | 22.0 | 7326 | 0.3892 |
| 0.4533 | 23.0 | 7659 | 0.4297 |
| 0.4493 | 24.0 | 7992 | 0.4162 |
| 0.4493 | 25.0 | 8325 | 0.4424 |
| 0.4493 | 26.0 | 8658 | 0.4575 |
| 0.4493 | 27.0 | 8991 | 0.4192 |
| 0.4493 | 28.0 | 9324 | 0.4151 |
| 0.4493 | 29.0 | 9657 | 0.4321 |
| 0.4493 | 30.0 | 9990 | 0.4129 |
| 0.4493 | 31.0 | 10323 | 0.4869 |
| 0.4456 | 32.0 | 10656 | 0.4510 |
| 0.4456 | 33.0 | 10989 | 0.5263 |
| 0.4456 | 34.0 | 11322 | 0.3908 |
| 0.4456 | 35.0 | 11655 | 0.5016 |
| 0.4456 | 36.0 | 11988 | 0.4454 |
| 0.4456 | 37.0 | 12321 | 0.4011 |
| 0.4456 | 38.0 | 12654 | 0.4714 |
| 0.4456 | 39.0 | 12987 | 0.4972 |
| 0.443 | 40.0 | 13320 | 0.4200 |
| 0.443 | 41.0 | 13653 | 0.4659 |
| 0.443 | 42.0 | 13986 | 0.4758 |
| 0.443 | 43.0 | 14319 | 0.4509 |
| 0.443 | 44.0 | 14652 | 0.4211 |
| 0.443 | 45.0 | 14985 | 0.4007 |
| 0.443 | 46.0 | 15318 | 0.3205 |
| 0.443 | 47.0 | 15651 | 0.4479 |
| 0.4402 | 48.0 | 15984 | 0.4723 |
| 0.4402 | 49.0 | 16317 | 0.4956 |
| 0.4402 | 50.0 | 16650 | 0.4103 |
| 0.4402 | 51.0 | 16983 | 0.4234 |
| 0.4402 | 52.0 | 17316 | 0.4052 |
| 0.4402 | 53.0 | 17649 | 0.4033 |
| 0.4402 | 54.0 | 17982 | 0.4139 |
| 0.4402 | 55.0 | 18315 | 0.3618 |
| 0.4372 | 56.0 | 18648 | 0.5102 |
| 0.4372 | 57.0 | 18981 | 0.4166 |
| 0.4372 | 58.0 | 19314 | 0.4475 |
| 0.4372 | 59.0 | 19647 | 0.4259 |
| 0.4372 | 60.0 | 19980 | 0.4018 |
| 0.4372 | 61.0 | 20313 | 0.5005 |
| 0.4372 | 62.0 | 20646 | 0.4445 |
| 0.4372 | 63.0 | 20979 | 0.4280 |
| 0.434 | 64.0 | 21312 | 0.4533 |
| 0.434 | 65.0 | 21645 | 0.3672 |
| 0.434 | 66.0 | 21978 | 0.4726 |
| 0.434 | 67.0 | 22311 | 0.4084 |
| 0.434 | 68.0 | 22644 | 0.4508 |
| 0.434 | 69.0 | 22977 | 0.3746 |
| 0.434 | 70.0 | 23310 | 0.4703 |
| 0.434 | 71.0 | 23643 | 0.4789 |
| 0.4314 | 72.0 | 23976 | 0.3963 |
| 0.4314 | 73.0 | 24309 | 0.3800 |
| 0.4314 | 74.0 | 24642 | 0.5051 |
| 0.4314 | 75.0 | 24975 | 0.4245 |
| 0.4314 | 76.0 | 25308 | 0.4745 |
| 0.4314 | 77.0 | 25641 | 0.4351 |
| 0.4314 | 78.0 | 25974 | 0.4367 |
| 0.4314 | 79.0 | 26307 | 0.4200 |
| 0.4291 | 80.0 | 26640 | 0.4985 |
| 0.4291 | 81.0 | 26973 | 0.5058 |
| 0.4291 | 82.0 | 27306 | 0.4154 |
| 0.4291 | 83.0 | 27639 | 0.4837 |
| 0.4291 | 84.0 | 27972 | 0.3865 |
| 0.4291 | 85.0 | 28305 | 0.4357 |
| 0.4291 | 86.0 | 28638 | 0.3978 |
| 0.4291 | 87.0 | 28971 | 0.4413 |
| 0.4263 | 88.0 | 29304 | 0.4223 |
| 0.4263 | 89.0 | 29637 | 0.4241 |
| 0.4263 | 90.0 | 29970 | 0.4525 |
| 0.4263 | 91.0 | 30303 | 0.3895 |
| 0.4263 | 92.0 | 30636 | 0.4207 |
| 0.4263 | 93.0 | 30969 | 0.3217 |
| 0.4263 | 94.0 | 31302 | 0.3725 |
| 0.4263 | 95.0 | 31635 | 0.4354 |
| 0.4239 | 96.0 | 31968 | 0.4169 |
| 0.4239 | 97.0 | 32301 | 0.4873 |
| 0.4239 | 98.0 | 32634 | 0.4219 |
| 0.4239 | 99.0 | 32967 | 0.4984 |
| 0.4239 | 100.0 | 33300 | 0.4078 |
| 0.4239 | 101.0 | 33633 | 0.4463 |
| 0.4239 | 102.0 | 33966 | 0.3371 |
| 0.4239 | 103.0 | 34299 | 0.3896 |
| 0.422 | 104.0 | 34632 | 0.4743 |
| 0.422 | 105.0 | 34965 | 0.4931 |
| 0.422 | 106.0 | 35298 | 0.3574 |
| 0.422 | 107.0 | 35631 | 0.4127 |
| 0.422 | 108.0 | 35964 | 0.3892 |
| 0.422 | 109.0 | 36297 | 0.3881 |
| 0.422 | 110.0 | 36630 | 0.4221 |
| 0.422 | 111.0 | 36963 | 0.3924 |
| 0.4204 | 112.0 | 37296 | 0.4067 |
| 0.4204 | 113.0 | 37629 | 0.4357 |
| 0.4204 | 114.0 | 37962 | 0.4175 |
| 0.4204 | 115.0 | 38295 | 0.4424 |
| 0.4204 | 116.0 | 38628 | 0.3925 |
| 0.4204 | 117.0 | 38961 | 0.4693 |
| 0.4204 | 118.0 | 39294 | 0.3503 |
| 0.4204 | 119.0 | 39627 | 0.4761 |
| 0.4183 | 120.0 | 39960 | 0.3816 |
| 0.4183 | 121.0 | 40293 | 0.3903 |
| 0.4183 | 122.0 | 40626 | 0.3535 |
| 0.4183 | 123.0 | 40959 | 0.4388 |
| 0.4183 | 124.0 | 41292 | 0.4519 |
| 0.4183 | 125.0 | 41625 | 0.4241 |
| 0.4183 | 126.0 | 41958 | 0.4085 |
| 0.4183 | 127.0 | 42291 | 0.4836 |
| 0.4168 | 128.0 | 42624 | 0.4101 |
| 0.4168 | 129.0 | 42957 | 0.4749 |
| 0.4168 | 130.0 | 43290 | 0.4022 |
| 0.4168 | 131.0 | 43623 | 0.4861 |
| 0.4168 | 132.0 | 43956 | 0.4376 |
| 0.4168 | 133.0 | 44289 | 0.4597 |
| 0.4168 | 134.0 | 44622 | 0.4154 |
| 0.4168 | 135.0 | 44955 | 0.4431 |
| 0.415 | 136.0 | 45288 | 0.4887 |
| 0.415 | 137.0 | 45621 | 0.4229 |
| 0.415 | 138.0 | 45954 | 0.3997 |
| 0.415 | 139.0 | 46287 | 0.4185 |
| 0.415 | 140.0 | 46620 | 0.4633 |
| 0.415 | 141.0 | 46953 | 0.4061 |
| 0.415 | 142.0 | 47286 | 0.4604 |
| 0.415 | 143.0 | 47619 | 0.4047 |
| 0.4139 | 144.0 | 47952 | 0.4272 |
| 0.4139 | 145.0 | 48285 | 0.4783 |
| 0.4139 | 146.0 | 48618 | 0.3954 |
| 0.4139 | 147.0 | 48951 | 0.4501 |
| 0.4139 | 148.0 | 49284 | 0.4941 |
| 0.4139 | 149.0 | 49617 | 0.4112 |
| 0.4139 | 150.0 | 49950 | 0.4582 |
| 0.4139 | 151.0 | 50283 | 0.4361 |
| 0.4126 | 152.0 | 50616 | 0.3535 |
| 0.4126 | 153.0 | 50949 | 0.3797 |
| 0.4126 | 154.0 | 51282 | 0.4080 |
| 0.4126 | 155.0 | 51615 | 0.4049 |
| 0.4126 | 156.0 | 51948 | 0.4255 |
| 0.4126 | 157.0 | 52281 | 0.4303 |
| 0.4126 | 158.0 | 52614 | 0.4950 |
| 0.4126 | 159.0 | 52947 | 0.3721 |
| 0.4114 | 160.0 | 53280 | 0.2861 |
| 0.4114 | 161.0 | 53613 | 0.3775 |
| 0.4114 | 162.0 | 53946 | 0.4274 |
| 0.4114 | 163.0 | 54279 | 0.3904 |
| 0.4114 | 164.0 | 54612 | 0.4687 |
| 0.4114 | 165.0 | 54945 | 0.4013 |
| 0.4114 | 166.0 | 55278 | 0.4760 |
| 0.4114 | 167.0 | 55611 | 0.3554 |
| 0.4104 | 168.0 | 55944 | 0.5193 |
| 0.4104 | 169.0 | 56277 | 0.4476 |
| 0.4104 | 170.0 | 56610 | 0.5011 |
| 0.4104 | 171.0 | 56943 | 0.4441 |
| 0.4104 | 172.0 | 57276 | 0.4457 |
| 0.4104 | 173.0 | 57609 | 0.3792 |
| 0.4104 | 174.0 | 57942 | 0.5116 |
| 0.4104 | 175.0 | 58275 | 0.4249 |
| 0.4097 | 176.0 | 58608 | 0.3804 |
| 0.4097 | 177.0 | 58941 | 0.3886 |
| 0.4097 | 178.0 | 59274 | 0.4420 |
| 0.4097 | 179.0 | 59607 | 0.3573 |
| 0.4097 | 180.0 | 59940 | 0.3635 |
| 0.4097 | 181.0 | 60273 | 0.4596 |
| 0.4097 | 182.0 | 60606 | 0.3674 |
| 0.4097 | 183.0 | 60939 | 0.3869 |
| 0.409 | 184.0 | 61272 | 0.3909 |
| 0.409 | 185.0 | 61605 | 0.4339 |
| 0.409 | 186.0 | 61938 | 0.4475 |
| 0.409 | 187.0 | 62271 | 0.3218 |
| 0.409 | 188.0 | 62604 | 0.3771 |
| 0.409 | 189.0 | 62937 | 0.4007 |
| 0.409 | 190.0 | 63270 | 0.4520 |
| 0.409 | 191.0 | 63603 | 0.3980 |
| 0.4077 | 192.0 | 63936 | 0.4572 |
| 0.4077 | 193.0 | 64269 | 0.3952 |
| 0.4077 | 194.0 | 64602 | 0.4384 |
| 0.4077 | 195.0 | 64935 | 0.4795 |
| 0.4077 | 196.0 | 65268 | 0.3743 |
| 0.4077 | 197.0 | 65601 | 0.4445 |
| 0.4077 | 198.0 | 65934 | 0.3925 |
| 0.4077 | 199.0 | 66267 | 0.4564 |
| 0.4075 | 200.0 | 66600 | 0.4580 |
| 0.4075 | 201.0 | 66933 | 0.4446 |
| 0.4075 | 202.0 | 67266 | 0.4289 |
| 0.4075 | 203.0 | 67599 | 0.3722 |
| 0.4075 | 204.0 | 67932 | 0.4810 |
| 0.4075 | 205.0 | 68265 | 0.4004 |
| 0.4075 | 206.0 | 68598 | 0.4219 |
| 0.4075 | 207.0 | 68931 | 0.3926 |
| 0.407 | 208.0 | 69264 | 0.6043 |
| 0.407 | 209.0 | 69597 | 0.3835 |
| 0.407 | 210.0 | 69930 | 0.3791 |
| 0.407 | 211.0 | 70263 | 0.4152 |
| 0.407 | 212.0 | 70596 | 0.3654 |
| 0.407 | 213.0 | 70929 | 0.4434 |
| 0.407 | 214.0 | 71262 | 0.3613 |
| 0.407 | 215.0 | 71595 | 0.5103 |
| 0.4069 | 216.0 | 71928 | 0.3733 |
| 0.4069 | 217.0 | 72261 | 0.4881 |
| 0.4069 | 218.0 | 72594 | 0.3375 |
| 0.4069 | 219.0 | 72927 | 0.4766 |
| 0.4069 | 220.0 | 73260 | 0.4604 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
PrimeQA/mt5-base-tydi-question-generator | ac1f94b893b071bff2eee5a26f5ef0a75513f846 | 2022-07-13T10:38:38.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | PrimeQA | null | PrimeQA/mt5-base-tydi-question-generator | 26 | null | transformers | 7,591 | ---
license: apache-2.0
---
# Model description
This is an [mt5-base](https://huggingface.co/google/mt5-base) model, finetuned to generate questions using [TyDi QA](https://huggingface.co/datasets/tydiqa) dataset. It was trained to take the context and answer as input to generate questions.
# Overview
*Language model*: mT5-base \
*Language*: Arabic, Bengali, English, Finnish, Indonesian, Korean, Russian, Swahili, Telugu \
*Task*: Question Generation \
*Data*: TyDi QA
# Intended use and limitations
One can use this model to generate questions. Biases associated with the pre-training of mT5 and the TyDi QA dataset may be present.
## Usage
One can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework as in this example [notebook](https://github.com/primeqa/primeqa/blob/tableqg/notebooks/qg/tableqg_inference.ipynb).
Or
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("PrimeQA/mt5-base-tydi-question-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("PrimeQA/mt5-base-tydi-question-generator")
def get_question(answer, context, max_length=64):
input_text = answer +" <<sep>> " + context
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=max_length)
return tokenizer.decode(output[0])
context = "শচীন টেন্ডুলকারকে ক্রিকেট ইতিহাসের অন্যতম সেরা ব্যাটসম্যান হিসেবে গণ্য করা হয়।"
answer = "শচীন টেন্ডুলকার"
get_question(answer, context)
# output: ক্রিকেট ইতিহাসের অন্যতম সেরা ব্যাটসম্যান কে?
```
## Citation
```bibtex
@inproceedings{xue2021mt5,
title={mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer},
author={Xue, Linting and Constant, Noah and Roberts, Adam and
Kale, Mihir and Al-Rfou, Rami and Siddhant, Aditya and
Barua, Aditya and Raffel, Colin},
booktitle={Proceedings of the 2021 Conference of the North American
Chapter of the Association for Computational Linguistics:
Human Language Technologies},
pages={483--498},
year={2021}
}
```
|
djagatiya/ner-roberta-base-ontonotesv5-englishv4 | 28af6bc088b67380ed35c7eb3ef3a0320149acc1 | 2022-07-03T11:27:14.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:djagatiya/ner-ontonotes-v5-eng-v4",
"transformers",
"autotrain_compatible"
] | token-classification | false | djagatiya | null | djagatiya/ner-roberta-base-ontonotesv5-englishv4 | 26 | null | transformers | 7,592 | ---
tags:
- token-classification
datasets:
- djagatiya/ner-ontonotes-v5-eng-v4
widget:
- text: "On September 1st George won 1 dollar while watching Game of Thrones."
---
# (NER) roberta-base : conll2012_ontonotesv5-english-v4
This `roberta-base` NER model was fine-tuned on the `english-v4` version of the `conll2012_ontonotesv5` dataset. <br>
Check out [NER-System Repository](https://github.com/djagatiya/NER-System) for more information.
## Dataset
- conll2012_ontonotesv5
- Language : English
- Version : v4
| Dataset | Examples |
| --- | --- |
| Training | 75187 |
| Testing | 9479 |
## Evaluation
- Precision: 88.88
- Recall: 90.69
- F1-Score: 89.78
> Check out the [eval.log](eval.log) file for evaluation metrics and the classification report.
```
precision recall f1-score support
CARDINAL 0.84 0.85 0.85 935
DATE 0.85 0.90 0.87 1602
EVENT 0.67 0.76 0.71 63
FAC 0.74 0.72 0.73 135
GPE 0.97 0.96 0.96 2240
LANGUAGE 0.83 0.68 0.75 22
LAW 0.66 0.62 0.64 40
LOC 0.74 0.80 0.77 179
MONEY 0.85 0.89 0.87 314
NORP 0.93 0.96 0.95 841
ORDINAL 0.81 0.89 0.85 195
ORG 0.90 0.91 0.91 1795
PERCENT 0.90 0.92 0.91 349
PERSON 0.95 0.95 0.95 1988
PRODUCT 0.74 0.83 0.78 76
QUANTITY 0.76 0.80 0.78 105
TIME 0.62 0.67 0.65 212
WORK_OF_ART 0.58 0.69 0.63 166
micro avg 0.89 0.91 0.90 11257
macro avg 0.80 0.82 0.81 11257
weighted avg 0.89 0.91 0.90 11257
```
## Usage
```python
from transformers import pipeline
ner_pipeline = pipeline(
'token-classification',
model=r'djagatiya/ner-roberta-base-ontonotesv5-englishv4',
aggregation_strategy='simple'
)
```
TEST 1
```python
ner_pipeline("India is a beautiful country")
```
```
# Output
[{'entity_group': 'GPE',
'score': 0.99186057,
'word': ' India',
'start': 0,
'end': 5}]
```
TEST 2
```python
ner_pipeline("On September 1st George won 1 dollar while watching Game of Thrones.")
```
```
# Output
[{'entity_group': 'DATE',
'score': 0.99720246,
'word': ' September 1st',
'start': 3,
'end': 16},
{'entity_group': 'PERSON',
'score': 0.99071586,
'word': ' George',
'start': 17,
'end': 23},
{'entity_group': 'MONEY',
'score': 0.9872978,
'word': ' 1 dollar',
'start': 28,
'end': 36},
{'entity_group': 'WORK_OF_ART',
'score': 0.9946732,
'word': ' Game of Thrones',
'start': 52,
'end': 67}]
``` |
huggingtweets/dinidu | 670f083816d5fc3562d7ff6618d4a61989866fa2 | 2022-07-07T13:00:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dinidu | 26 | null | transformers | 7,593 | ---
language: en
thumbnail: http://www.huggingtweets.com/dinidu/1657198765981/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1539625904313360390/RV2fIY5V_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dinidu de Alwis</div>
<div style="text-align: center; font-size: 14px;">@dinidu</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dinidu de Alwis.
| Data | Dinidu de Alwis |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 764 |
| Short tweets | 433 |
| Tweets kept | 2032 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20j5ss79/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dinidu's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21s242x3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21s242x3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dinidu')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
malteos/gpt2-xl-german-covid-19 | 1298d0cdc9d3af23ed07ef3125349e2de3e5edfc | 2022-07-08T13:48:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"de",
"transformers",
"license:mit"
] | text-generation | false | malteos | null | malteos/gpt2-xl-german-covid-19 | 26 | null | transformers | 7,594 | ---
license: mit
language: de
widget:
- text: "Noch Wochen nach einer Erkrankung an COVID-19 können "
---
# German Covid-19 GPT2-XL (1.5B)
- Covid-19 specific version of [`malteos/gpt2-xl-wechsel-german`](https://huggingface.co/malteos/gpt2-xl-wechsel-german)
- Fine-tuned on 2 GB text from OSCAR filtered for covid related terms.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='malteos/gpt2-xl-german-covid-19')
>>> set_seed(42)
>>> generator("Noch Wochen nach einer Erkrankung an COVID-19 können", max_length=30, num_return_sequences=5)
```
## License
MIT |
AndyChiang/bert-test | 2d299bfe9d01b27ccbadd6ae0f643643604c35a8 | 2022-07-11T05:50:10.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
] | fill-mask | false | AndyChiang | null | AndyChiang/bert-test | 26 | null | transformers | 7,595 | ---
tags:
- generated_from_keras_callback
model-index:
- name: bert-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-test
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
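Since this section is still a placeholder, a minimal usage sketch is given below, assuming the checkpoint behaves like a standard fill-mask BERT model; because the training data is unknown, the predictions themselves may not be meaningful.
```python
from transformers import pipeline

# Minimal sketch: load the checkpoint as a standard fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="AndyChiang/bert-test")

# Use the tokenizer's own mask token rather than assuming the literal "[MASK]" string.
masked = f"The capital of France is {fill_mask.tokenizer.mask_token}."
print(fill_mask(masked))
```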
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
p-christ/testrepo | a205a3f1ee51f9b1784d92a71b31caab4d7f1d7e | 2022-07-11T15:55:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"generic"
] | text2text-generation | false | p-christ | null | p-christ/testrepo | 26 | null | generic | 7,596 | ---
tags:
- text2text-generation
library_name: generic
---
random test repo |
abecode/t5-small-finetuned-emo20q | 9606e2a03500b44c351be05ae9c6abe1a71e389e | 2022-07-11T17:56:42.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | abecode | null | abecode/t5-small-finetuned-emo20q | 26 | 1 | transformers | 7,597 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-emo20q
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-emo20q
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
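A minimal usage sketch, assuming the checkpoint can be driven through the standard `text2text-generation` pipeline; the example prompt is illustrative only, since the exact input format expected for the EMO20Q task is not documented here.
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint as a text2text-generation pipeline.
generator = pipeline("text2text-generation", model="abecode/t5-small-finetuned-emo20q")

# Illustrative prompt; the real EMO20Q input format may differ.
print(generator("I just found out I passed the exam and I can't stop smiling.", max_length=64))
```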
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 280 | 2.4896 | 52.8448 | 0.0 | 52.8423 | 52.8708 | 2.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Vikasbhandari/wav2vec2-train | 1e017454b6d3fdcae5104b5a1ac5a3411caa8091 | 2022-07-12T11:51:48.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.11430",
"arxiv:2006.11477",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Vikasbhandari | null | Vikasbhandari/wav2vec2-train | 26 | null | transformers | 7,598 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-large-960h-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.9
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.9
---
# Wav2Vec2-Large-960h-Lv60 + Self-Training
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model was pretrained and fine-tuned on 960 hours of Libri-Light and LibriSpeech 16kHz sampled speech audio. The model was trained with the [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# preprocess the raw audio into model input values
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.9 | 3.9 | |
thusken/nb-bert-base-user-needs | 688bd9f5003ea4d16f66b01eed6a8ae4c2581715 | 2022-07-15T10:15:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index"
] | text-classification | false | thusken | null | thusken/nb-bert-base-user-needs | 26 | null | transformers | 7,599 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: nb-bert-base-user-needs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nb-bert-base-user-needs
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0600
- Accuracy: 0.8479
- F1: 0.8319
- Precision: 0.8315
- Recall: 0.8479
## Model description
More information needed
## Intended uses & limitations
More information needed
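A minimal usage sketch, assuming the checkpoint works with the standard `text-classification` pipeline; the Norwegian example sentence is illustrative, and since the label set is not documented the pipeline may return generic `LABEL_n` identifiers rather than named user-need categories.
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint as a text-classification pipeline.
classifier = pipeline("text-classification", model="thusken/nb-bert-base-user-needs")

# Illustrative Norwegian input; the label names are not documented in this card.
print(classifier("Dette er en norsk nyhetsartikkel om trafikken i Oslo."))
```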
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
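As a rough reference, these settings map onto `transformers.TrainingArguments` roughly as in the sketch below; the output directory and anything not listed above (for example logging or saving options) are assumptions rather than values from the actual run.
```python
from transformers import TrainingArguments

# Hedged sketch of the configuration listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="nb-bert-base-user-needs",  # assumption, not documented
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=25,
    fp16=True,  # "Native AMP" mixed precision
)
```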
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 98 | 1.1222 | 0.6263 | 0.5185 | 0.5076 | 0.6263 |
| No log | 2.0 | 196 | 1.0066 | 0.7216 | 0.6436 | 0.5899 | 0.7216 |
| No log | 3.0 | 294 | 0.8540 | 0.7577 | 0.7037 | 0.6760 | 0.7577 |
| No log | 4.0 | 392 | 0.8621 | 0.7603 | 0.6998 | 0.6568 | 0.7603 |
| No log | 5.0 | 490 | 0.8062 | 0.7887 | 0.7500 | 0.7449 | 0.7887 |
| 0.91 | 6.0 | 588 | 0.7465 | 0.8041 | 0.7660 | 0.7636 | 0.8041 |
| 0.91 | 7.0 | 686 | 0.6324 | 0.8247 | 0.8163 | 0.8187 | 0.8247 |
| 0.91 | 8.0 | 784 | 0.7333 | 0.7964 | 0.7703 | 0.7740 | 0.7964 |
| 0.91 | 9.0 | 882 | 0.6590 | 0.8325 | 0.8208 | 0.8106 | 0.8325 |
| 0.91 | 10.0 | 980 | 0.9854 | 0.8196 | 0.7890 | 0.7920 | 0.8196 |
| 0.4246 | 11.0 | 1078 | 0.7023 | 0.8247 | 0.8054 | 0.8138 | 0.8247 |
| 0.4246 | 12.0 | 1176 | 0.8995 | 0.8325 | 0.8120 | 0.8068 | 0.8325 |
| 0.4246 | 13.0 | 1274 | 0.8589 | 0.8299 | 0.8145 | 0.8058 | 0.8299 |
| 0.4246 | 14.0 | 1372 | 0.9859 | 0.8376 | 0.8151 | 0.8123 | 0.8376 |
| 0.4246 | 15.0 | 1470 | 0.8452 | 0.8402 | 0.8318 | 0.8341 | 0.8402 |
| 0.1637 | 16.0 | 1568 | 1.1156 | 0.8351 | 0.8157 | 0.8196 | 0.8351 |
| 0.1637 | 17.0 | 1666 | 1.1514 | 0.8325 | 0.8122 | 0.8218 | 0.8325 |
| 0.1637 | 18.0 | 1764 | 1.0092 | 0.8428 | 0.8266 | 0.8320 | 0.8428 |
| 0.1637 | 19.0 | 1862 | 1.0368 | 0.8351 | 0.8229 | 0.8287 | 0.8351 |
| 0.1637 | 20.0 | 1960 | 1.0600 | 0.8479 | 0.8319 | 0.8315 | 0.8479 |
| 0.0391 | 21.0 | 2058 | 1.1046 | 0.8428 | 0.8293 | 0.8269 | 0.8428 |
| 0.0391 | 22.0 | 2156 | 1.1178 | 0.8454 | 0.8262 | 0.8280 | 0.8454 |
| 0.0391 | 23.0 | 2254 | 1.1103 | 0.8428 | 0.8268 | 0.8295 | 0.8428 |
| 0.0391 | 24.0 | 2352 | 1.1179 | 0.8428 | 0.8274 | 0.8313 | 0.8428 |
| 0.0391 | 25.0 | 2450 | 1.1134 | 0.8402 | 0.8233 | 0.8254 | 0.8402 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|