modelId (string, 4-112) | sha (string, 40) | lastModified (string, 24) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38, nullable) | config (null) | id (string, 4-112) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
rajistics/distilbert-imdb-mlflow | 7d3e3d7107f8b175c90e34a1f74ba46fdc09da36 | 2022-07-20T21:06:00.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | rajistics | null | rajistics/distilbert-imdb-mlflow | 13 | null | transformers | 10,400 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-imdb-mlflow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb-mlflow
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the imdb dataset.
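As a quick usage sketch (not part of the original card; it assumes the checkpoint ships its tokenizer, and the example review is arbitrary):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the standard text-classification pipeline.
classifier = pipeline("text-classification", model="rajistics/distilbert-imdb-mlflow")
print(classifier("A surprisingly moving film with terrific performances."))
```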
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
okho0653/Bio_ClinicalBERT-zero-shot-finetuned-50cad-50noncad | 5eb1d9075118a203522116d6fe5f23966fcc02d0 | 2022-07-22T03:50:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | okho0653 | null | okho0653/Bio_ClinicalBERT-zero-shot-finetuned-50cad-50noncad | 13 | null | transformers | 10,401 | Entry not found |
doya/klue-sentiment-aihub | cb55e0a40fd90bd499829633c2d75102aebd0f5e | 2022-07-22T06:53:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | doya | null | doya/klue-sentiment-aihub | 13 | null | transformers | 10,402 | Entry not found |
Siyong/MC_RN_LM | 833b466e58d517d2ff7247ced195127f8bf75927 | 2022-07-23T17:16:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Siyong | null | Siyong/MC_RN_LM | 13 | null | transformers | 10,403 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Millad_Customer_RN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Millad_Customer_RN
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5635
- Wer: 0.8113
- Cer: 0.4817
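A minimal transcription sketch, not taken from the original card: it assumes the repository ships a plain `Wav2Vec2Processor` (rather than only an LM-boosted decoder, despite the `_LM` suffix), uses a placeholder audio path, and resamples to 16 kHz.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("Siyong/MC_RN_LM")  # assumption: plain processor available
model = Wav2Vec2ForCTC.from_pretrained("Siyong/MC_RN_LM")

speech, sample_rate = torchaudio.load("example.wav")  # placeholder path
speech = torchaudio.functional.resample(speech.squeeze(0), sample_rate, 16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```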
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 1.9257 | 13.33 | 2000 | 2.0606 | 0.9767 | 0.5500 |
| 1.4828 | 26.67 | 4000 | 2.1161 | 0.9019 | 0.4932 |
| 1.2582 | 40.0 | 6000 | 2.0589 | 0.8504 | 0.4942 |
| 0.9804 | 53.33 | 8000 | 2.4633 | 0.8745 | 0.4763 |
| 0.7862 | 66.67 | 10000 | 2.4794 | 0.8861 | 0.4944 |
| 0.6492 | 80.0 | 12000 | 2.8693 | 0.8554 | 0.4928 |
| 0.5375 | 93.33 | 14000 | 2.6125 | 0.8296 | 0.4802 |
| 0.4462 | 106.67 | 16000 | 2.7591 | 0.8770 | 0.4974 |
| 0.3873 | 120.0 | 18000 | 3.0325 | 0.8379 | 0.4800 |
| 0.3445 | 133.33 | 20000 | 2.9965 | 0.8761 | 0.4986 |
| 0.3087 | 146.67 | 22000 | 3.3437 | 0.8221 | 0.4923 |
| 0.2755 | 160.0 | 24000 | 3.3022 | 0.8803 | 0.5211 |
| 0.2467 | 173.33 | 26000 | 3.2348 | 0.8479 | 0.4933 |
| 0.2281 | 186.67 | 28000 | 3.8010 | 0.8695 | 0.5081 |
| 0.2119 | 200.0 | 30000 | 3.0446 | 0.8545 | 0.4902 |
| 0.194 | 213.33 | 32000 | 3.0873 | 0.8454 | 0.4840 |
| 0.1677 | 226.67 | 34000 | 3.6184 | 0.8645 | 0.5019 |
| 0.1642 | 240.0 | 36000 | 3.2480 | 0.8412 | 0.4903 |
| 0.1656 | 253.33 | 38000 | 3.4379 | 0.8362 | 0.4816 |
| 0.1371 | 266.67 | 40000 | 3.5117 | 0.8479 | 0.5040 |
| 0.1301 | 280.0 | 42000 | 3.4360 | 0.8404 | 0.4870 |
| 0.128 | 293.33 | 44000 | 3.6589 | 0.8537 | 0.4977 |
| 0.1152 | 306.67 | 46000 | 4.2359 | 0.8545 | 0.5051 |
| 0.1119 | 320.0 | 48000 | 3.5818 | 0.7980 | 0.4882 |
| 0.1026 | 333.33 | 50000 | 3.7618 | 0.8013 | 0.4865 |
| 0.0945 | 346.67 | 52000 | 4.2197 | 0.8404 | 0.5028 |
| 0.0962 | 360.0 | 54000 | 3.9231 | 0.8653 | 0.5030 |
| 0.088 | 373.33 | 56000 | 3.8400 | 0.8354 | 0.4914 |
| 0.0743 | 386.67 | 58000 | 3.4924 | 0.8088 | 0.4824 |
| 0.0811 | 400.0 | 60000 | 3.8370 | 0.8396 | 0.4861 |
| 0.0696 | 413.33 | 62000 | 4.2808 | 0.8412 | 0.5065 |
| 0.0692 | 426.67 | 64000 | 4.0161 | 0.8088 | 0.4744 |
| 0.0622 | 440.0 | 66000 | 3.9080 | 0.8163 | 0.4910 |
| 0.0591 | 453.33 | 68000 | 3.9838 | 0.8113 | 0.4823 |
| 0.0527 | 466.67 | 70000 | 3.8067 | 0.8329 | 0.4914 |
| 0.056 | 480.0 | 72000 | 4.1415 | 0.8096 | 0.4782 |
| 0.0535 | 493.33 | 74000 | 4.3350 | 0.8229 | 0.4828 |
| 0.0531 | 506.67 | 76000 | 3.9808 | 0.8071 | 0.4807 |
| 0.0451 | 520.0 | 78000 | 4.0301 | 0.7988 | 0.4816 |
| 0.044 | 533.33 | 80000 | 4.4680 | 0.8371 | 0.4921 |
| 0.0389 | 546.67 | 82000 | 4.1380 | 0.8121 | 0.4819 |
| 0.0392 | 560.0 | 84000 | 4.3910 | 0.7930 | 0.4763 |
| 0.0389 | 573.33 | 86000 | 4.5086 | 0.8055 | 0.4802 |
| 0.0355 | 586.67 | 88000 | 4.6259 | 0.8113 | 0.4821 |
| 0.0307 | 600.0 | 90000 | 4.5635 | 0.8113 | 0.4817 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
grantpitt/autotagger | 7e560f0a65b5542cfa70c044a20955134cbac441 | 2022-07-24T18:03:29.000Z | [
"pytorch",
"vision-text-dual-encoder",
"feature-extraction",
"transformers"
]
| feature-extraction | false | grantpitt | null | grantpitt/autotagger | 13 | null | transformers | 10,404 | Entry not found |
tnavin/distilbert-base-uncased-finetuned-ner | a0465749e562171d3daa4a5dced8cf0a4c104be0 | 2022-07-24T08:58:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:wnut_17",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tnavin | null | tnavin/distilbert-base-uncased-finetuned-ner | 13 | null | transformers | 10,405 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5899772209567198
- name: Recall
type: recall
value: 0.4117647058823529
- name: F1
type: f1
value: 0.4850187265917604
- name: Accuracy
type: accuracy
value: 0.9304392705585502
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3202
- Precision: 0.5900
- Recall: 0.4118
- F1: 0.4850
- Accuracy: 0.9304
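A short usage sketch (not part of the original card; the example sentence is arbitrary):
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned checkpoint; "simple" aggregation merges word pieces.
ner = pipeline(
    "token-classification",
    model="tnavin/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("The Empire State Building is in New York City."))
```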
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.3469 | 0.5480 | 0.2814 | 0.3718 | 0.9193 |
| No log | 2.0 | 426 | 0.3135 | 0.5909 | 0.3903 | 0.4701 | 0.9281 |
| 0.1903 | 3.0 | 639 | 0.3202 | 0.5900 | 0.4118 | 0.4850 | 0.9304 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
thu-coai/EVA2.0-large | be4c935951812c16a467ea8a75f5a45591970c49 | 2022-07-25T03:40:50.000Z | [
"pytorch",
"zh",
"arxiv:2108.01547",
"arxiv:2203.09313",
"transformers",
"license:mit"
]
| null | false | thu-coai | null | thu-coai/EVA2.0-large | 13 | 1 | transformers | 10,406 | ---
language: zh
tags:
- pytorch
license: mit
---
# EVA
## Model Description
EVA is the largest open-source Chinese dialogue model with up to 2.8B parameters. The 1.0 version model is pre-trained on [WudaoCorpus-Dialog](https://resource.wudaoai.cn/home), and the 2.0 version is pre-trained on a carefully cleaned version of WudaoCorpus-Dialog which yields better performance than the 1.0 version. [Paper link](https://arxiv.org/abs/2108.01547) of EVA1.0. [Paper link](https://arxiv.org/abs/2203.09313) of EVA2.0.
## Model Configuration
| Model | n_params | n_enc-layers | n_dec-layers | d_model | d_ff | n_heads | d_head | attn-scale |
| ------------- | -------- | ------------ | ------------ | ------- | ----- | ------- | ------ | ---------- |
| EVA1.0 | 2.8B | 24 | 24 | 2,048 | 5,120 | 32 | 64 | No |
| EVA2.0_Base | 300M | 12 | 12 | 768 | 3,072 | 12 | 64 | Yes |
| EVA2.0_Large | 970M | 24 | 24 | 1,024 | 4,096 | 16 | 64 | Yes |
| EVA2.0_xLarge | 2.8B | 24 | 24 | 2,048 | 5,120 | 32 | 64 | Yes |
## How to use
Please refer to our GitHub [repository](https://github.com/thu-coai/EVA/tree/huggingface).
## Performance
We use the KdConv dataset to finetune and evaluate the model. Under the default hyperparameters in the scripts, we get the following results on the test set:
| | loss | BLEU-4 | Dist-4 | f1 |
| ---- | ---- | ---- | ---- | ---- |
| eva1.0 Before finetune | 3.49 | 0.64 | 85.94 | 13.24 |
| eva1.0 After finetune | 2.37 | 3.94 | 72.47 | 19.80 |
| eva2.0 Before finetune | 3.14 | 1.02 | 82.67 | 14.36 |
| eva2.0 After finetune | 2.09 | 4.69 | 73.03 | 22.91 |
We also use beam search to run the interactive inference of EVA2.0:
```
Usr >>> 你好
Sys >>> 你好,请问你是?
Usr >>> 我是小明
Sys >>> 好的,小明,我是新来的,请多关照
Usr >>> 你了解人工智能吗?
Sys >>> 不了解,我只知道我是机器人
Usr >>> 那你很聪明
Sys >>> 谢谢夸奖,我会继续努力的
Usr >>> 你是学什么专业的呀
Sys >>> 计算机科学与技术,你呢?
Usr >>> 我也是学计算机的
Sys >>> 哦,那你有什么想知道的,可以问我
Usr >>> 编程语言你学过哪些?
Sys >>> java,c,c+都学过
Usr >>> 你学的真多
Sys >>> 呵呵,还好吧,我比较喜欢写代码
Usr >>> 哈哈,我也喜欢写代码,以后可以多多交流
Sys >>> 嗯,好的
Usr >>> 拜拜喽
Sys >>> 拜~
```
**NOTE**: Since different hardware may have different implementations of random functions, even if you use the same random seed as ours, you may not be able to reproduce this case. But the overall performance will not differ much.
## Disclaimer
The pre-trained models aim to facilitate the research for conversation generation. The model provided in this repository is trained on a large dataset collected from various sources. Although a rigorous cleaning and filtering process has been carried out to the data and the model output, there is no guarantee that all the inappropriate contents have been completely banned. All the contents generated by the model do not represent the authors' opinions. The decoding script provided in this repository is only for research purposes. We are not responsible for any content generated using our model.
## Citation
```
@article{coai2021eva,
title={EVA: An Open-Domain Chinese Dialogue System with Large-Scale Generative Pre-Training},
author={Zhou, Hao and Ke, Pei and Zhang, Zheng and Gu, Yuxian and Zheng, Yinhe and Zheng, Chujie and Wang, Yida and Wu, Chen Henry and Sun, Hao and Yang, Xiaocong and Wen, Bosi and Zhu, Xiaoyan and Huang, Minlie and Tang, Jie},
journal={arXiv preprint arXiv:2108.01547},
year={2021}
}
@article{coai2022eva2,
title={{EVA2.0}: Investigating Open-Domain Chinese Dialogue Systems with Large-Scale Pre-Training},
author={Gu, Yuxian and Wen, Jiaxin and Sun, Hao and Song, Yi and Ke, Pei and Zheng, Chujie and Zhang, Zheng and Yao, Jianzhu and Zhu, Xiaoyan and Tang, Jie and Huang, Minlie},
journal={arXiv preprint arXiv:2203.09313},
year={2022}
}
``` |
rufimelo/Legal-SBERTimbau-nli-large | 51ad3ba29400f8347e9eed885ae1314ee91b6a44 | 2022-07-25T15:48:08.000Z | [
"pytorch",
"bert",
"feature-extraction",
"pt",
"dataset:assin",
"dataset:assin2",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | rufimelo | null | rufimelo/Legal-SBERTimbau-nli-large | 13 | 1 | sentence-transformers | 10,407 | ---
language:
- pt
thumbnail: "Portugues SBERT for the Legal Domain"
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
datasets:
- assin
- assin2
widget:
- source_sentence: "O advogado apresentou as provas ao juíz."
sentences:
- "O juíz leu as provas."
- "O juíz leu o recurso."
- "O juíz atirou uma pedra."
example_title: "Example 1"
metrics:
- bleu
---
# rufimelo/Legal-SBERTimbau-nli-large
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Legal-SBERTimbau-large is based on Legal-BERTimbau-large, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Large.
It is adapted to the Portuguese legal domain.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('rufimelo/Legal-SBERTimbau-nli-large')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-SBERTimbau-nli-large')
model = AutoModel.from_pretrained('rufimelo/Legal-SBERTimbau-nli-large')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results STS
| Model| Dataset | PearsonCorrelation |
| ---------------------------------------- | ---------- | ---------- |
| Legal-SBERTimbau-large| Assin | 0.766293861 |
| Legal-SBERTimbau-large| Assin2| 0.823565322 |
| ---------------------------------------- | ---------- |---------- |
| paraphrase-multilingual-mpnet-base-v2| Assin | 0.743740222 |
| paraphrase-multilingual-mpnet-base-v2| Assin2| 0.79831 |
| paraphrase-multilingual-mpnet-base-v2| stsb_multi_mt pt| 0.83999 |
| paraphrase-multilingual-mpnet-base-v2 Fine tuned with assin(s)| Assin | 0.77641 |
| paraphrase-multilingual-mpnet-base-v2 Fine tuned with assin(s)| Assin2| 0.79831 |
| paraphrase-multilingual-mpnet-base-v2 Fine tuned with assin(s)| stsb_multi_mt pt| 0.84575 |
## Training
Legal-SBERTimbau-large is based on Legal-BERTimbau-large, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Large.
It was trained for Natural Language Inference (NLI). This objective was chosen due to the lack of available Portuguese data.
In addition, it went through a fine-tuning stage with the [assin](https://huggingface.co/datasets/assin) and [assin2](https://huggingface.co/datasets/assin2) datasets.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
If you use this work, please cite BERTimbau's work:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
``` |
nielsr/donut-base-finetuned-cord-v2 | a5930f78491cdbcaf655b51facd3b8a1b305baae | 2022-07-26T09:46:25.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
]
| null | false | nielsr | null | nielsr/donut-base-finetuned-cord-v2 | 13 | null | transformers | 10,408 | Entry not found |
korca/roberta-large-lkm | 069aaaf41dae7e17adde25b3652b535c76bcd16e | 2022-07-25T16:03:38.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | korca | null | korca/roberta-large-lkm | 13 | null | transformers | 10,409 | Entry not found |
BramVanroy/xlm-roberta-base-hebban-reviews | a5e638594a47dedcab7ea92ff0b4e5be2e83c09b | 2022-07-29T09:43:04.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"nl",
"dataset:BramVanroy/hebban-reviews",
"transformers",
"sentiment-analysis",
"dutch",
"text",
"license:mit",
"model-index"
]
| text-classification | false | BramVanroy | null | BramVanroy/xlm-roberta-base-hebban-reviews | 13 | null | transformers | 10,410 | ---
datasets:
- BramVanroy/hebban-reviews
language:
- nl
license: mit
metrics:
- accuracy
- f1
- precision
- qwk
- recall
model-index:
- name: xlm-roberta-base-hebban-reviews
results:
- dataset:
config: filtered_sentiment
name: BramVanroy/hebban-reviews - filtered_sentiment - 2.0.0
revision: 2.0.0
split: test
type: BramVanroy/hebban-reviews
metrics:
- name: Test accuracy
type: accuracy
value: 0.8094674556213017
- name: Test f1
type: f1
value: 0.812677483587223
- name: Test precision
type: precision
value: 0.8173602585519025
- name: Test qwk
type: qwk
value: 0.7369243423166991
- name: Test recall
type: recall
value: 0.8094674556213017
task:
name: sentiment analysis
type: text-classification
tags:
- sentiment-analysis
- dutch
- text
widget:
- text: Wauw, wat een leuk boek! Ik heb me er er goed mee vermaakt.
- text: Nee, deze vond ik niet goed. De auteur doet zijn best om je als lezer mee
te trekken in het verhaal maar mij overtuigt het alleszins niet.
- text: Ik vind het niet slecht maar de schrijfstijl trekt me ook niet echt aan. Het
wordt een beetje saai vanaf het vijfde hoofdstuk
---
# xlm-roberta-base-hebban-reviews
# Dataset
- dataset_name: BramVanroy/hebban-reviews
- dataset_config: filtered_sentiment
- dataset_revision: 2.0.0
- labelcolumn: review_sentiment
- textcolumn: review_text_without_quotes
# Training
- optim: adamw_hf
- learning_rate: 5e-05
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- gradient_accumulation_steps: 1
- max_steps: 5001
- save_steps: 500
- metric_for_best_model: qwk
# Best checkpoint based on validation
- best_metric: 0.741533273748008
- best_model_checkpoint: trained/hebban-reviews/xlm-roberta-base/checkpoint-2000
# Test results of best checkpoint
- accuracy: 0.8094674556213017
- f1: 0.812677483587223
- precision: 0.8173602585519025
- qwk: 0.7369243423166991
- recall: 0.8094674556213017
## Confusion matrix

## Normalized confusion matrix

# Environment
- cuda_capabilities: 8.0; 8.0
- cuda_device_count: 2
- cuda_devices: NVIDIA A100-SXM4-80GB; NVIDIA A100-SXM4-80GB
- finetuner_commit: 66294c815326c93682003119534cb72009f558c2
- platform: Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28
- python_version: 3.9.5
- toch_version: 1.10.0
- transformers_version: 4.21.0
|
mmmmmmd/HSD | 054396bfe07336b5386237e323e95046cc9af18f | 2022-07-27T14:42:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | mmmmmmd | null | mmmmmmd/HSD | 13 | null | transformers | 10,411 | Entry not found |
prubach/KnotProtSequencesModel | 0bfdecdf05c1eb7e65834e0ad059158348eec786 | 2022-07-28T08:26:15.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | prubach | null | prubach/KnotProtSequencesModel | 13 | null | transformers | 10,412 | Entry not found |
HCKLab/BiBert-Classification | d58bd98b518abf4563c507937056f03d3863fb43 | 2022-07-28T17:39:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | HCKLab | null | HCKLab/BiBert-Classification | 13 | null | transformers | 10,413 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BiBert-Classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiBert-Classification
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8646
- Accuracy: 0.3505
- Mae: 0.9906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Mae |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|
| 1.1951 | 1.0 | 625 | 1.1807 | 0.484 | 0.608 |
| 1.0818 | 2.0 | 1250 | 1.2202 | 0.468 | 0.676 |
| 0.9926 | 3.0 | 1875 | 1.3529 | 0.475 | 0.663 |
| 0.7569 | 4.0 | 2500 | 1.4457 | 0.491 | 0.627 |
| 0.6374 | 5.0 | 3125 | 1.6097 | 0.483 | 0.655 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
johanna-k/bart-pw-seg | a31af3eaa03409510322fac9a844618a18373612 | 2022-07-28T18:08:20.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | johanna-k | null | johanna-k/bart-pw-seg | 13 | null | transformers | 10,414 | Entry not found |
tosin/dialogpt_afriwoz_pidgin | 28ed5a4ed224c629ac705f0cbb2d81f2dae81208 | 2022-07-29T08:41:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:AfriWOZ",
"arxiv:2204.08083",
"transformers",
"conversational",
"license:cc-by-4.0"
]
| conversational | false | tosin | null | tosin/dialogpt_afriwoz_pidgin | 13 | null | transformers | 10,415 | ---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
language:
- en
license: cc-by-4.0
tags:
- conversational
- transformers
datasets:
- AfriWOZ
metrics:
- perplexity
widget:
- text: "How I fit chop for here?"
---
## DialoGPT_AfriWOZ (Pidgin)
This is a fine-tuned model of DialoGPT (small) on the AfriWOZ dataset. It is intended to be used as a conversational system in Nigerian Pidgin English.
The dataset it is trained on is limited in scope, as it covers only certain domains such as restaurants, hotels, taxis, and booking.
The perplexity achieved on the validation set is 38.52.
* Generation example from an interactive environment:
|Role | Response |
|------|------------|
|User | I hear say restaurant dey here. |
|Bot | I dey find restaurant. |
|User | Abeg you fit tell me which kind chop dey? |
|Bot | I no dey sure right now. |
|User | You do well. Thank you. |
|Bot | I dey find restauarant. |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
The paper for this work can be found on arXiv: [https://arxiv.org/pdf/2204.08083.pdf](https://arxiv.org/pdf/2204.08083.pdf)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_afriwoz_pidgin")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_afriwoz_pidgin")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last output tokens from bot
print("DialoGPT_pidgin_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
ARTeLab/it5-summarization-fanpage | fe05e8892623219cb8b2cf1ccc925902cf562e9e | 2022-05-03T06:06:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/fanpage",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
]
| summarization | false | ARTeLab | null | ARTeLab/it5-summarization-fanpage | 12 | 2 | transformers | 10,416 | ---
tags:
- summarization
language:
- it
metrics:
- rouge
model-index:
- name: summarization_fanpage128
results: []
datasets:
- ARTeLab/fanpage
---
# summarization_fanpage128
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on the Fanpage dataset for abstractive summarization.
It achieves the following results:
- Loss: 1.5348
- Rouge1: 34.1882
- Rouge2: 15.7866
- Rougel: 25.141
- Rougelsum: 28.4882
- Gen Len: 69.3041
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-fanpage-128")
model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-fanpage-128")
```
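Building on the snippet above, a hedged generation sketch; the placeholder input text, beam size, and length limits are illustrative choices, not the authors' settings:
```python
text = "Qui va inserito il testo dell'articolo da riassumere."  # placeholder Italian article
inputs = tokenizer(text, return_tensors="pt", max_length=512, truncation=True)
summary_ids = model.generate(inputs.input_ids, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```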
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
# Citation
More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228)
```
@Article{info13050228,
AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
JOURNAL = {Information},
VOLUME = {13},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {228},
URL = {https://www.mdpi.com/2078-2489/13/5/228},
ISSN = {2078-2489},
ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
DOI = {10.3390/info13050228}
}
``` |
Adi2K/Priv-Consent | f14517d90a670e6dfb3614a489d7ea688f93ffe0 | 2021-09-24T12:53:04.000Z | [
"pytorch",
"bert",
"text-classification",
"eng",
"dataset:Adi2K/autonlp-data-Priv-Consent",
"transformers"
]
| text-classification | false | Adi2K | null | Adi2K/Priv-Consent | 12 | null | transformers | 10,417 | ---
language: eng
widget:
- text: "You can control cookies and tracking tools. To learn how to manage how we - and our vendors - use cookies and other tracking tools, please click here."
datasets:
- Adi2K/autonlp-data-Priv-Consent
---
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Adi2K/autonlp-Priv-Consent-12592372
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Adi2K/autonlp-Priv-Consent-12592372", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
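# Illustrative post-processing (not from the original card): turn logits into label probabilities
probs = outputs.logits.softmax(dim=-1)
print(probs)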
``` |
CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification | a528e34d5ca9e69f4b6b146f4514292a97cfaef4 | 2022-03-02T19:02:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CNT-UPenn | null | CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification | 12 | null | transformers | 10,418 | emilyalsentzer/Bio_ClinicalBERT with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing."
Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022;, ocac018, https://doi.org/10.1093/jamia/ocac018
Bio_ClinicalBERT_for_seizureFreedom_classification classifies patients as having seizures or being seizure free using the HPI and/or Interval History paragraphs from a medical note. |
CenIA/distillbert-base-spanish-uncased-finetuned-xnli | 8f647d2548b13362749fca727f54ee0cd14ca41b | 2021-12-08T22:24:16.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/distillbert-base-spanish-uncased-finetuned-xnli | 12 | null | transformers | 10,419 | Entry not found |
Chun/DialoGPT-large-dailydialog | f2a03f2d8fd148f22bc1b11807be55634393851f | 2021-08-08T22:31:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Chun | null | Chun/DialoGPT-large-dailydialog | 12 | null | transformers | 10,420 | Entry not found |
Crives/distilbert-base-uncased-finetuned-emotion | 337d701f659964a94f3bd1a0c598b5f3f4be1394 | 2022-02-09T22:08:11.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Crives | null | Crives/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,421 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215538311282218
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.9215
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7814 | 1.0 | 250 | 0.3105 | 0.907 | 0.9046 |
| 0.2401 | 2.0 | 500 | 0.2175 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DSI/human-directed-sentiment | d32186640284ff82253ec1fdaee75cd9ba1e75fb | 2022-01-17T14:20:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DSI | null | DSI/human-directed-sentiment | 12 | null | transformers | 10,422 | ** Human-Directed Sentiment Analysis in Arabic
A supervised training procedure to classify human-directed-sentiment in a text. We define the human-directed-sentiment as the polarity of one user towards a second person who is involved with him in a discussion. |
Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese | dcc63c8ca28b9414b871fd2c256ebd000b36df80 | 2022-07-17T17:43:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:Common Voice",
"arxiv:2204.00618",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Edresson | null | Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese | 12 | 1 | transformers | 10,423 | ---
language: pt
datasets:
- Common Voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and TTS-Portuguese Corpus in Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test Common Voice 7.0 WER
type: wer
value: 20.39
---
# Wav2vec2 Large 100k Voxpopuli fine-tuned with Common Voice and TTS-Portuguese Corpus in Portuguese
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using the Common Voice 7.0 and TTS-Portuguese Corpus.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
import re
import torchaudio
from datasets import load_dataset

# Assumption: the exact character-filtering regex is not given in this card; this is a typical choice.
chars_to_ignore_regex = '[,?.!;:"]'

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```
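The snippet above leaves `map_to_pred` and the `wer` metric used below undefined. The following is a minimal sketch of both, following the standard Wav2Vec2 evaluation pattern; loading the processor from this same repository and the greedy argmax decoding are assumptions, not necessarily the authors' exact evaluation setup.
```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Assumption: the feature extractor/tokenizer are available under the same model id.
model_id = "Edresson/wav2vec2-large-100k-voxpopuli-ft-Common-Voice_plus_TTS-Dataset-portuguese"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

wer = load_metric("wer")

def map_to_pred(batch):
    # batch_size=1 in the map() call below, so each field holds a single-element list
    inputs = processor(batch["speech"][0], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(predicted_ids)
    batch["target"] = batch["sentence"]
    return batch
```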
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
EleutherAI/enformer-corr_coef_obj | 4aad70eea20892cb7e9c2d6f692a8277660bc0e8 | 2022-02-23T12:18:12.000Z | [
"pytorch",
"enformer",
"transformers",
"license:apache-2.0"
]
| null | false | EleutherAI | null | EleutherAI/enformer-corr_coef_obj | 12 | null | transformers | 10,424 | ---
license: apache-2.0
inference: false
---
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 131,072 basepairs, target length 896, on v3-64 TPUs for 3 days with sequence augmentations and a Pearson correlation objective.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
``` |
EnsarEmirali/distilbert-base-uncased-finetuned-emotion | 7d13520ba2e005fbc05dd35a810e032dc9c5473a | 2022-02-21T05:53:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | EnsarEmirali | null | EnsarEmirali/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,425 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9268984054036417
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2131
- Accuracy: 0.9265
- F1: 0.9269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8031 | 1.0 | 250 | 0.2973 | 0.9125 | 0.9110 |
| 0.2418 | 2.0 | 500 | 0.2131 | 0.9265 | 0.9269 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Fengkai/distilbert-base-uncased-finetuned-emotion | e6243bb50bd2abc315d72be76fc526d7092f80d0 | 2022-01-25T02:11:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Fengkai | null | Fengkai/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,426 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9385
- name: F1
type: f1
value: 0.9383492808338979
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1495
- Accuracy: 0.9385
- F1: 0.9383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1739 | 1.0 | 250 | 0.1827 | 0.931 | 0.9302 |
| 0.1176 | 2.0 | 500 | 0.1567 | 0.9325 | 0.9326 |
| 0.0994 | 3.0 | 750 | 0.1555 | 0.9385 | 0.9389 |
| 0.08 | 4.0 | 1000 | 0.1496 | 0.9445 | 0.9443 |
| 0.0654 | 5.0 | 1250 | 0.1495 | 0.9385 | 0.9383 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
GKLMIP/bert-tagalog-base-uncased | 67bb407fe500434512132f0835973d32b858478f | 2021-07-31T02:14:37.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | GKLMIP | null | GKLMIP/bert-tagalog-base-uncased | 12 | null | transformers | 10,427 | https://github.com/GKLMIP/Pretrained-Models-For-Tagalog
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Fu, Yingwen
and Lin, Xiaotian
and Lin, Nankai",
title="Pre-trained Language models for Tagalog with Multi-source data",
booktitle="Natural Language Processing and Chinese Computing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` |
GKLMIP/electra-myanmar-base-uncased | ae53e2d14d6ce28cf9beda9f822ca4360a58493c | 2021-10-11T04:58:43.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | GKLMIP | null | GKLMIP/electra-myanmar-base-uncased | 12 | null | transformers | 10,428 | The Usage of tokenizer for Myanmar is same as Laos in https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` |
Gregor-Davies/DialoGPT-small-rick | bad252226ba9192e914f8812c5c811346642d31f | 2022-01-23T13:15:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"PyTorch",
"Transformers",
"lm-head",
"causal-lm"
]
| conversational | false | Gregor-Davies | null | Gregor-Davies/DialoGPT-small-rick | 12 | null | transformers | 10,429 | ---
tags:
- conversational
- PyTorch
- Transformers
- gpt2
- lm-head
- causal-lm
- text-generation
---
# rick and morty |
GroNLP/bert-base-dutch-cased-upos-alpino-frisian | 472aab6e53f0883a2efe2c7cb608043594e48f23 | 2021-05-18T20:22:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fy",
"arxiv:2105.02855",
"transformers",
"BERTje",
"pos",
"autotrain_compatible"
]
| token-classification | false | GroNLP | null | GroNLP/bert-base-dutch-cased-upos-alpino-frisian | 12 | null | transformers | 10,430 | ---
language: fy
tags:
- BERTje
- pos
---
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling
# Adapting Monolingual Models: Data can be Scarce when Language Similarity is High
This model is part of this paper + code:
- 📝 [Paper](https://arxiv.org/abs/2105.02855)
- 💻 [Code](https://github.com/wietsedv/low-resource-adapt)
## Models
The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub:
### Lexical layers
These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`).
- 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language)
- 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian)
### POS tagging
These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above.
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings)
- 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
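As a small illustration (not part of the original card), the West Frisian tagger above can be loaded with the standard token-classification pipeline; the example sentence is an arbitrary placeholder.
```python
from transformers import pipeline

# POS tagging with the fine-tuned West Frisian model; "simple" aggregation merges word pieces.
tagger = pipeline(
    "token-classification",
    model="GroNLP/bert-base-dutch-cased-upos-alpino-frisian",
    aggregation_strategy="simple",
)
print(tagger("Dit is in foarbyld."))  # placeholder West Frisian sentence
```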
|
HScomcom/gpt2-game-of-thrones | 481371376135f570a8c1a4681ccddede9f305acb | 2021-05-21T10:28:34.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | HScomcom | null | HScomcom/gpt2-game-of-thrones | 12 | null | transformers | 10,431 | Entry not found |
Hate-speech-CNERG/deoffxlmr-mono-kannada | b4845d6249e4d748f6c3f589c2ddb447389b739d | 2021-09-25T14:01:14.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"kn",
"transformers",
"license:apache-2.0"
]
| text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/deoffxlmr-mono-kannada | 12 | null | transformers | 10,432 | ---
language: kn
license: apache-2.0
---
This model is used to detect **Offensive Content** in **Kannada Code-Mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Kannada (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-algorithm-based ensembling of test predictions achieved the second-highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.73, ensemble - 0.74)
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ |
Helsinki-NLP/opus-mt-afa-afa | 550470c96e6880da1c03997a4c1065bf58acdc93 | 2021-01-18T07:46:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-afa-afa | 12 | null | transformers | 10,433 | ---
language:
- so
- ti
- am
- he
- mt
- ar
- afa
tags:
- translation
license: apache-2.0
---
### afa-afa
* source group: Afro-Asiatic languages
* target group: Afro-Asiatic languages
* OPUS readme: [afa-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md)
* model: transformer
* source language(s): apc ara arq arz heb kab mlt shy_Latn thv
* target language(s): apc ara arq arz heb kab mlt shy_Latn thv
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara-ara.ara.ara | 4.3 | 0.148 |
| Tatoeba-test.ara-heb.ara.heb | 31.9 | 0.525 |
| Tatoeba-test.ara-kab.ara.kab | 0.3 | 0.120 |
| Tatoeba-test.ara-mlt.ara.mlt | 14.0 | 0.428 |
| Tatoeba-test.ara-shy.ara.shy | 1.3 | 0.050 |
| Tatoeba-test.heb-ara.heb.ara | 17.0 | 0.464 |
| Tatoeba-test.heb-kab.heb.kab | 1.9 | 0.104 |
| Tatoeba-test.kab-ara.kab.ara | 0.3 | 0.044 |
| Tatoeba-test.kab-heb.kab.heb | 5.1 | 0.099 |
| Tatoeba-test.kab-shy.kab.shy | 2.2 | 0.009 |
| Tatoeba-test.kab-tmh.kab.tmh | 10.7 | 0.007 |
| Tatoeba-test.mlt-ara.mlt.ara | 29.1 | 0.498 |
| Tatoeba-test.multi.multi | 20.8 | 0.434 |
| Tatoeba-test.shy-ara.shy.ara | 1.2 | 0.053 |
| Tatoeba-test.shy-kab.shy.kab | 2.0 | 0.134 |
| Tatoeba-test.tmh-kab.tmh.kab | 0.0 | 0.047 |
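Because this is a multilingual model, generation needs the sentence-initial `>>id<<` target-language token mentioned above. A minimal usage sketch, not part of the original card (the source sentence and target language are arbitrary):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-afa-afa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# ">>mlt<<" selects Maltese as the target language.
src_text = [">>mlt<< مرحبا بالعالم"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```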
### System Info:
- hf_name: afa-afa
- source_languages: afa
- target_languages: afa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
- src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt
- src_alpha3: afa
- tgt_alpha3: afa
- short_pair: afa-afa
- chrF2_score: 0.434
- bleu: 20.8
- brevity_penalty: 1.0
- ref_len: 15215.0
- src_name: Afro-Asiatic languages
- tgt_name: Afro-Asiatic languages
- train_date: 2020-07-26
- src_alpha2: afa
- tgt_alpha2: afa
- prefer_old: False
- long_pair: afa-afa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ca-nl | 5ba066fbeb93263925e2bb5b12e9e05cf03a9d32 | 2021-01-18T07:53:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ca",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ca-nl | 12 | null | transformers | 10,434 | ---
language:
- ca
- nl
tags:
- translation
license: apache-2.0
---
### cat-nld
* source group: Catalan
* target group: Dutch
* OPUS readme: [cat-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-nld/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.nld | 45.1 | 0.632 |
### System Info:
- hf_name: cat-nld
- source_languages: cat
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'nl']
- src_constituents: {'cat'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: nld
- short_pair: ca-nl
- chrF2_score: 0.632
- bleu: 45.1
- brevity_penalty: 0.965
- ref_len: 4157.0
- src_name: Catalan
- tgt_name: Dutch
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: nl
- prefer_old: False
- long_pair: cat-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ceb-sv | bf1810fb698cbeb2a7beeecb96917557ece3158f | 2021-09-09T21:28:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ceb",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ceb-sv | 12 | null | transformers | 10,435 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ceb-sv
* source languages: ceb
* target languages: sv
* OPUS readme: [ceb-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.sv | 35.5 | 0.552 |
|
Helsinki-NLP/opus-mt-de-bzs | 30ed515b4d391e1f98cefdbf5f6fcc340c979fce | 2021-09-09T21:30:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"bzs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-bzs | 12 | null | transformers | 10,436 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-bzs
* source languages: de
* target languages: bzs
* OPUS readme: [de-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.bzs | 21.0 | 0.389 |
|
Helsinki-NLP/opus-mt-de-is | 5da3816233444156514c12635c92dda7fc16b01c | 2021-01-18T08:00:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"is",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-is | 12 | null | transformers | 10,437 | ---
language:
- de
- is
tags:
- translation
license: apache-2.0
---
### deu-isl
* source group: German
* target group: Icelandic
* OPUS readme: [deu-isl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-isl/README.md)
* source language(s): deu
* target language(s): isl
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.isl | 27.1 | 0.533 |
### System Info:
- hf_name: deu-isl
- source_languages: deu
- target_languages: isl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-isl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'is']
- src_constituents: {'deu'}
- tgt_constituents: {'isl'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-isl/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: isl
- short_pair: de-is
- chrF2_score: 0.5329999999999999
- bleu: 27.1
- brevity_penalty: 0.9620000000000001
- ref_len: 5939.0
- src_name: German
- tgt_name: Icelandic
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: is
- prefer_old: False
- long_pair: deu-isl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-bzs | 2b7c7d345202d17dd7f42850eae846e4d11b6fda | 2021-09-09T21:34:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"bzs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-bzs | 12 | null | transformers | 10,438 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-bzs
* source languages: en
* target languages: bzs
* OPUS readme: [en-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bzs | 43.4 | 0.612 |
|
Helsinki-NLP/opus-mt-en-efi | 08b5f78e0bb66e8e1940fe1eb976a5b9de276f84 | 2021-09-09T21:35:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"efi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-efi | 12 | null | transformers | 10,439 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-efi
* source languages: en
* target languages: efi
* OPUS readme: [en-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.efi | 38.0 | 0.568 |
|
Helsinki-NLP/opus-mt-en-kwn | 3736240f67ae9d9b6afdc6ee9026f1ec96dc4828 | 2021-09-09T21:36:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"kwn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-kwn | 12 | null | transformers | 10,440 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-kwn
* source languages: en
* target languages: kwn
* OPUS readme: [en-kwn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kwn | 27.6 | 0.513 |
|
Helsinki-NLP/opus-mt-en-kwy | 9735cf314a7647932c7db4d1598f89ddabed5ce1 | 2021-09-09T21:36:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"kwy",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-kwy | 12 | null | transformers | 10,441 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-kwy
* source languages: en
* target languages: kwy
* OPUS readme: [en-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwy/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kwy | 33.6 | 0.543 |
|
Helsinki-NLP/opus-mt-en-loz | 2d718169c4ec0446b59e50bbc60e9bcc8536ef79 | 2021-09-09T21:37:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"loz",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-loz | 12 | null | transformers | 10,442 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-loz
* source languages: en
* target languages: loz
* OPUS readme: [en-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.loz | 40.1 | 0.596 |
|
Helsinki-NLP/opus-mt-en-lue | e545785d78e4a6541363734bdea4efe8e230cdfa | 2021-09-09T21:37:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"lue",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-lue | 12 | null | transformers | 10,443 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-lue
* source languages: en
* target languages: lue
* OPUS readme: [en-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lue/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lue | 30.1 | 0.558 |
|
Helsinki-NLP/opus-mt-en-lus | 55f4acfa42dd6fa4152c625b620c7861951a5a56 | 2021-09-09T21:37:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"lus",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-lus | 12 | null | transformers | 10,444 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-lus
* source languages: en
* target languages: lus
* OPUS readme: [en-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lus/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lus | 36.8 | 0.581 |
|
Helsinki-NLP/opus-mt-en-st | c626d33dd89c6e5da348b773562849c5b50bc788 | 2021-09-09T21:39:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"st",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-st | 12 | null | transformers | 10,445 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-st
* source languages: en
* target languages: st
* OPUS readme: [en-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.st | 49.8 | 0.665 |
|
Helsinki-NLP/opus-mt-en-ti | 55151ff82a6dcd684b0bfc61a0f02aab6c9a89f6 | 2021-09-09T21:39:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ti",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ti | 12 | null | transformers | 10,446 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ti
* source languages: en
* target languages: ti
* OPUS readme: [en-ti](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ti/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ti | 25.3 | 0.382 |
|
Helsinki-NLP/opus-mt-en-toi | ebe551da7af43d6f47fdebb52e09903e2f679a06 | 2021-09-09T21:40:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"toi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-toi | 12 | null | transformers | 10,447 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-toi
* source languages: en
* target languages: toi
* OPUS readme: [en-toi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-toi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.toi | 32.8 | 0.598 |
|
Helsinki-NLP/opus-mt-en-tpi | 270027882571dc2d5528cdfa18527ffcc0f1908e | 2021-09-09T21:40:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tpi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-tpi | 12 | null | transformers | 10,448 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tpi
* source languages: en
* target languages: tpi
* OPUS readme: [en-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tpi | 38.7 | 0.568 |
|
Helsinki-NLP/opus-mt-en-zle | f3ca937ee4037e9d06e7fee5a6500e49a09b8b1b | 2021-01-18T08:19:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"be",
"ru",
"uk",
"zle",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-zle | 12 | null | transformers | 10,449 | ---
language:
- en
- be
- ru
- uk
- zle
tags:
- translation
license: apache-2.0
---
### eng-zle
* source group: English
* target group: East Slavic languages
* OPUS readme: [eng-zle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md)
* source language(s): eng
* target language(s): bel bel_Latn orv_Cyrl rue rus ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch below
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.eval.txt)
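Because the target group is multilingual, the sentence-initial `>>id<<` token noted above selects the output language; valid ids correspond to the target language list (e.g. `rus`, `ukr`). A hedged usage sketch, assuming the standard `transformers` MarianMT interface with `sentencepiece` installed:

```python
# Sketch: prefixing >>id<< to pick the output language of this multilingual model.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = [
    ">>rus<< This is an illustrative test sentence.",  # translate into Russian
    ">>ukr<< This is an illustrative test sentence.",  # translate into Ukrainian
]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```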
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-engrus.eng.rus | 27.4 | 0.550 |
| newstest2013-engrus.eng.rus | 21.4 | 0.493 |
| newstest2015-enru-engrus.eng.rus | 24.2 | 0.534 |
| newstest2016-enru-engrus.eng.rus | 23.3 | 0.518 |
| newstest2017-enru-engrus.eng.rus | 25.3 | 0.541 |
| newstest2018-enru-engrus.eng.rus | 22.4 | 0.527 |
| newstest2019-enru-engrus.eng.rus | 24.1 | 0.505 |
| Tatoeba-test.eng-bel.eng.bel | 20.8 | 0.471 |
| Tatoeba-test.eng.multi | 37.2 | 0.580 |
| Tatoeba-test.eng-orv.eng.orv | 0.6 | 0.130 |
| Tatoeba-test.eng-rue.eng.rue | 1.4 | 0.168 |
| Tatoeba-test.eng-rus.eng.rus | 41.3 | 0.616 |
| Tatoeba-test.eng-ukr.eng.ukr | 38.7 | 0.596 |
### System Info:
- hf_name: eng-zle
- source_languages: eng
- target_languages: zle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'be', 'ru', 'uk', 'zle']
- src_constituents: {'eng'}
- tgt_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zle
- short_pair: en-zle
- chrF2_score: 0.58
- bleu: 37.2
- brevity_penalty: 0.9890000000000001
- ref_len: 63493.0
- src_name: English
- tgt_name: East Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zle
- prefer_old: False
- long_pair: eng-zle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-csg | 3da715e725eefec43cca36fbe6cc492ff8f63f06 | 2021-09-09T21:41:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"csg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-csg | 12 | null | transformers | 10,450 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-csg
* source languages: es
* target languages: csg
* OPUS readme: [es-csg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-csg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-csg/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csg/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csg/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.csg | 91.2 | 0.937 |
|
Helsinki-NLP/opus-mt-es-gl | 28b88b37e53fcf5a25bb6954fda100a8944a6077 | 2021-01-18T08:24:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"gl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-gl | 12 | null | transformers | 10,451 | ---
language:
- es
- gl
tags:
- translation
license: apache-2.0
---
### spa-glg
* source group: Spanish
* target group: Galician
* OPUS readme: [spa-glg](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-glg/README.md)
* source language(s): spa
* target language(s): glg
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.glg | 67.6 | 0.808 |
### System Info:
- hf_name: spa-glg
- source_languages: spa
- target_languages: glg
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-glg/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'gl']
- src_constituents: {'spa'}
- tgt_constituents: {'glg'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.test.txt
- src_alpha3: spa
- tgt_alpha3: glg
- short_pair: es-gl
- chrF2_score: 0.8079999999999999
- bleu: 67.6
- brevity_penalty: 0.993
- ref_len: 16581.0
- src_name: Spanish
- tgt_name: Galician
- train_date: 2020-06-16
- src_alpha2: es
- tgt_alpha2: gl
- prefer_old: False
- long_pair: spa-glg
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-pis | e2484443c27300324e8275d0d111578aa11181f6 | 2021-09-09T21:44:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"pis",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-pis | 12 | null | transformers | 10,452 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-pis
* source languages: es
* target languages: pis
* OPUS readme: [es-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pis/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pis/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pis/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.pis | 27.1 | 0.484 |
|
Helsinki-NLP/opus-mt-es-sg | ae0ad1a6196547d8d1b233e6b13146f85b5206a2 | 2021-09-09T21:44:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"sg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-sg | 12 | null | transformers | 10,453 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-sg
* source languages: es
* target languages: sg
* OPUS readme: [es-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-sg/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-sg/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-sg/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.sg | 24.8 | 0.435 |
|
Helsinki-NLP/opus-mt-es-wls | d7b41426d3fe16a6bafceae20493dff14eff28bb | 2021-09-09T21:45:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"wls",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-wls | 12 | null | transformers | 10,454 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-wls
* source languages: es
* target languages: wls
* OPUS readme: [es-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-wls/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-wls/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-wls/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.wls | 22.9 | 0.437 |
|
Helsinki-NLP/opus-mt-et-es | 0a50f8fdda109247805282e5d4b8860b9e1b8154 | 2021-09-09T21:46:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"et",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-et-es | 12 | null | transformers | 10,455 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-et-es
* source languages: et
* target languages: es
* OPUS readme: [et-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/et-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/et-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.et.es | 27.2 | 0.490 |
|
Helsinki-NLP/opus-mt-et-fi | a63d43b6d9674a26d9fec2637cfba503b7f1d186 | 2021-09-09T21:46:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"et",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-et-fi | 12 | null | transformers | 10,456 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-et-fi
* source languages: et
* target languages: fi
* OPUS readme: [et-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/et-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/et-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.et.fi | 26.6 | 0.546 |
|
Helsinki-NLP/opus-mt-fi-bcl | 2d626dd80f23811869dfd49984fc519a3f0ebc18 | 2021-09-09T21:46:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"bcl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-bcl | 12 | null | transformers | 10,457 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-bcl
* source languages: fi
* target languages: bcl
* OPUS readme: [fi-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-bcl/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-bcl/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-bcl/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.bcl | 38.4 | 0.604 |
|
Helsinki-NLP/opus-mt-fi-ht | 3b5291b5e5ee468d27e12bbaa6ae12c89331d57b | 2021-09-09T21:48:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"ht",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-ht | 12 | null | transformers | 10,458 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-ht
* source languages: fi
* target languages: ht
* OPUS readme: [fi-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ht/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ht/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ht/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.ht | 27.1 | 0.453 |
|
Helsinki-NLP/opus-mt-fi-lue | 49ac1ffcb47e11b9f3b38e34375e916348046245 | 2021-09-09T21:49:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"lue",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-lue | 12 | null | transformers | 10,459 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-lue
* source languages: fi
* target languages: lue
* OPUS readme: [fi-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-lue/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-lue/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-lue/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-lue/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.lue | 22.4 | 0.497 |
|
Helsinki-NLP/opus-mt-fi-uk | ed4d5d8561fac3e7c7bf4507ea0478264febba3a | 2021-09-09T21:52:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-uk | 12 | null | transformers | 10,460 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-uk
* source languages: fi
* target languages: uk
* OPUS readme: [fi-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.uk | 23.3 | 0.445 |
|
Helsinki-NLP/opus-mt-fi-xh | 83167df35732d9f9ea14e52e962ca38eab391cc9 | 2021-09-09T21:52:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"xh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-xh | 12 | null | transformers | 10,461 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-xh
* source languages: fi
* target languages: xh
* OPUS readme: [fi-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.xh | 25.3 | 0.554 |
|
Helsinki-NLP/opus-mt-fr-eo | 8b7be50a4d7f9b9b4fa3f4773a7275be4a85d4d8 | 2021-09-09T21:53:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-eo | 12 | null | transformers | 10,462 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-eo
* source languages: fr
* target languages: eo
* OPUS readme: [fr-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.eo | 52.0 | 0.695 |
|
Helsinki-NLP/opus-mt-fr-fj | 11ef0862b115e52fd35adbbab5dd699305445918 | 2021-09-09T21:53:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"fj",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-fj | 12 | null | transformers | 10,463 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-fj
* source languages: fr
* target languages: fj
* OPUS readme: [fr-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.fj | 27.4 | 0.487 |
|
Helsinki-NLP/opus-mt-fr-pag | 04af4a70733cb6865afca6054717279948ffc7f4 | 2021-09-09T21:55:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"pag",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-pag | 12 | null | transformers | 10,464 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-pag
* source languages: fr
* target languages: pag
* OPUS readme: [fr-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pag | 27.0 | 0.486 |
|
Helsinki-NLP/opus-mt-he-de | ce12c832a6a4b2547aa8d6bca659671007383a91 | 2021-09-09T22:00:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-he-de | 12 | null | transformers | 10,465 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-he-de
* source languages: he
* target languages: de
* OPUS readme: [he-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.he.de | 45.5 | 0.647 |
|
Helsinki-NLP/opus-mt-hil-de | ff219abfd5acc4869cd501278784330723d0bf0c | 2021-09-09T22:09:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hil",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hil-de | 12 | null | transformers | 10,466 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hil-de
* source languages: hil
* target languages: de
* OPUS readme: [hil-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.de | 26.4 | 0.479 |
|
Helsinki-NLP/opus-mt-ilo-de | d04ede2f91302b77cf3475ceb18021ab5a8b0535 | 2021-09-09T22:11:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ilo",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ilo-de | 12 | null | transformers | 10,467 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ilo-de
* source languages: ilo
* target languages: de
* OPUS readme: [ilo-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ilo-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ilo-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ilo.de | 26.1 | 0.474 |
|
Helsinki-NLP/opus-mt-iso-fi | b2de5164c3ba201060be53113515da8594ad7f8a | 2021-09-10T13:52:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"iso",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-iso-fi | 12 | null | transformers | 10,468 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-iso-fi
* source languages: iso
* target languages: fi
* OPUS readme: [iso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.iso.fi | 23.0 | 0.443 |
|
Helsinki-NLP/opus-mt-ja-bg | a40d942964fa268c4f8db4df3f4f6645237c3c6c | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"bg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-bg | 12 | null | transformers | 10,469 | ---
language:
- ja
- bg
tags:
- translation
license: apache-2.0
---
### jpn-bul
* source group: Japanese
* target group: Bulgarian
* OPUS readme: [jpn-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-bul/README.md)
* source language(s): jpn jpn_Hani jpn_Hira jpn_Kana
* target language(s): bul
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.bul | 20.2 | 0.422 |
### System Info:
- hf_name: jpn-bul
- source_languages: jpn
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'bg']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: bul
- short_pair: ja-bg
- chrF2_score: 0.42200000000000004
- bleu: 20.2
- brevity_penalty: 0.9570000000000001
- ref_len: 2346.0
- src_name: Japanese
- tgt_name: Bulgarian
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: bg
- prefer_old: False
- long_pair: jpn-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ko-fi | b6e5f3dbcac05865c284ed6b04a4a89bd29af799 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ko",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ko-fi | 12 | null | transformers | 10,470 | ---
language:
- ko
- fi
tags:
- translation
license: apache-2.0
---
### kor-fin
* source group: Korean
* target group: Finnish
* OPUS readme: [kor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fin/README.md)
* source language(s): kor kor_Hang kor_Latn
* target language(s): fin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kor.fin | 26.6 | 0.502 |
### System Info:
- hf_name: kor-fin
- source_languages: kor
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ko', 'fi']
- src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.test.txt
- src_alpha3: kor
- tgt_alpha3: fin
- short_pair: ko-fi
- chrF2_score: 0.502
- bleu: 26.6
- brevity_penalty: 0.892
- ref_len: 2251.0
- src_name: Korean
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: ko
- tgt_alpha2: fi
- prefer_old: False
- long_pair: kor-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ln-de | d16b4910aa75a3e5ecbf2b7a5c4000296e7464ce | 2021-09-10T13:54:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ln",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ln-de | 12 | null | transformers | 10,471 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ln-de
* source languages: ln
* target languages: de
* OPUS readme: [ln-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ln.de | 23.3 | 0.428 |
|
Helsinki-NLP/opus-mt-ro-fi | b10215c217387d0590a220034266dbc9bb8f4881 | 2021-09-10T14:02:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ro",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ro-fi | 12 | null | transformers | 10,472 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ro-fi
* source languages: ro
* target languages: fi
* OPUS readme: [ro-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.fi | 25.2 | 0.521 |
|
Helsinki-NLP/opus-mt-sv-crs | 746041e0a24672d2655538f87bc1ece532fd34d3 | 2021-09-10T14:05:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"crs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-crs | 12 | null | transformers | 10,473 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-crs
* source languages: sv
* target languages: crs
* OPUS readme: [sv-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-crs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-crs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-crs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.crs | 32.4 | 0.512 |
|
Helsinki-NLP/opus-mt-sv-pon | 56bff87cd0741f8e4374ae98f9ed7a64a716342e | 2021-09-10T14:08:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"pon",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-pon | 12 | null | transformers | 10,474 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-pon
* source languages: sv
* target languages: pon
* OPUS readme: [sv-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pon/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.pon | 26.0 | 0.491 |
|
Helsinki-NLP/opus-mt-sv-ru | 7f9e131a87630ee3aa68458451071e9ad54cfa47 | 2021-09-10T14:09:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-ru | 12 | null | transformers | 10,475 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ru
* source languages: sv
* target languages: ru
* OPUS readme: [sv-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ru/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ru/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ru/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sv.ru | 46.6 | 0.662 |
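For quick experiments, the high-level `pipeline` API is usually enough. A minimal sketch, assuming `transformers` (with `sentencepiece`) is installed; the Swedish sentence is illustrative:

```python
# Hypothetical quick usage via the translation pipeline.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-ru")
print(translate("Jag läser en bok.")[0]["translation_text"])  # illustrative Swedish input
```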
|
Helsinki-NLP/opus-mt-tum-en | d03aecd4ed2ae3e6530b83e88f327b27ed3eb84d | 2021-09-11T10:50:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tum",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tum-en | 12 | null | transformers | 10,476 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tum-en
* source languages: tum
* target languages: en
* OPUS readme: [tum-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tum.en | 31.7 | 0.470 |
|
Helsinki-NLP/opus-mt-tw-fr | f146f6c094ec6dfed8e910455b5e6b99fd5418ee | 2021-09-11T10:50:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tw",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tw-fr | 12 | null | transformers | 10,477 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tw-fr
* source languages: tw
* target languages: fr
* OPUS readme: [tw-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.fr | 26.7 | 0.442 |
|
Helsinki-NLP/opus-mt-uk-fr | 3406d471a8b7c0f83e3d54ecac9bcb7fee7ee0bd | 2021-09-11T10:51:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"uk",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-uk-fr | 12 | null | transformers | 10,478 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-uk-fr
* source languages: uk
* target languages: fr
* OPUS readme: [uk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.uk.fr | 52.1 | 0.681 |
|
Jeska/VaccinChatSentenceClassifierDutch | 2183d3feb33082cb2ab9cf07c20b7695e50dd4bb | 2021-11-18T17:18:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeska | null | Jeska/VaccinChatSentenceClassifierDutch | 12 | null | transformers | 10,479 | Entry not found |
JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector | b146b6ca4f449d275c69705e6232823670dca16e | 2021-10-10T18:37:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"es",
"dataset:catalonia_independence",
"transformers",
"spanish",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JonatanGk | null | JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector | 12 | 1 | transformers | 10,480 | ---
license: apache-2.0
language: es
tags:
- "spanish"
datasets:
- catalonia_independence
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: catalonia_independence
type: catalonia_independence
args: spanish
metrics:
- name: Accuracy
type: accuracy
value: 0.7880893300248138
widget:
- text: "Junqueras, sobre la decisión judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegará de Europa"
- text: "Desconvocada la manifestación del domingo en Barcelona en apoyo a Puigdemont"
---
# roberta-base-bne-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9415
- Accuracy: 0.7881
<details>
## Model description
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 378 | 0.5534 | 0.7558 |
| 0.6089 | 2.0 | 756 | 0.5315 | 0.7643 |
| 0.2678 | 3.0 | 1134 | 0.7336 | 0.7816 |
| 0.0605 | 4.0 | 1512 | 0.8809 | 0.7866 |
| 0.0605 | 5.0 | 1890 | 0.9415 | 0.7881 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Junqueras, sobre la decisión judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegará de Europa"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9936726093292236}]
independence_analysis(
"El desafío independentista queda adormecido, y eso que el Gobierno ha sido muy claro en que su propuesta para Cataluña es una agenda de reencuentro, centrada en inversiones e infraestructuras")
# Output:
[{'label': 'AGAINST', 'score': 0.7508948445320129}]
independence_analysis(
"Desconvocada la manifestación del domingo en Barcelona en apoyo a Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.9966907501220703}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(SPANISH).ipynb#scrollTo=uNMOXJz38W6U)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;)
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) |
JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish | b9c846b023ede70b5863b8dbec3f8a6abfadbd6f | 2021-10-18T17:10:11.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JonatanGk | null | JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish | 12 | null | transformers | 10,481 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2869
- Accuracy: 0.9012
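As a usage sketch (not part of the auto-generated card), the checkpoint can be queried through the text-classification pipeline; the example sentence is illustrative and the label names depend on the checkpoint config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish",
)
print(classifier("No estoy de acuerdo contigo, pero respeto tu opinión."))
```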
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3222 | 1.0 | 1255 | 0.2869 | 0.9012 |
| 0.2418 | 2.0 | 2510 | 0.3125 | 0.8987 |
| 0.1726 | 3.0 | 3765 | 0.4120 | 0.8943 |
| 0.0685 | 4.0 | 5020 | 0.5239 | 0.8919 |
| 0.0245 | 5.0 | 6275 | 0.5910 | 0.8947 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector | 44418a48fb8ed75b5220c87e8a98b544ec23214c | 2021-10-10T18:38:15.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"ca",
"dataset:catalonia_independence",
"transformers",
"catalan",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JonatanGk | null | JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector | 12 | 1 | transformers | 10,482 | ---
license: apache-2.0
language: ca
tags:
- "catalan"
datasets:
- catalonia_independence
metrics:
- accuracy
model-index:
- name: roberta-base-ca-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: catalonia_independence
type: catalonia_independence
args: catalan
metrics:
- name: Accuracy
type: accuracy
value: 0.7611940298507462
widget:
- text: "Puigdemont, a l'estat espanyol: Quatre anys després, ens hem guanyat el dret a dir prou"
- text: "Llarena demana la detenció de Comín i Ponsatí aprofitant que són a Itàlia amb Puigdemont"
- text: "Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. És a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, té un sentiment excloent, només se senten catalans, i un 4% sol espanyol."
---
# roberta-base-ca-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6065
- Accuracy: 0.7612
<details>
## Training and evaluation data
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 377 | 0.6311 | 0.7453 |
| 0.7393 | 2.0 | 754 | 0.6065 | 0.7612 |
| 0.5019 | 3.0 | 1131 | 0.6340 | 0.7547 |
| 0.3837 | 4.0 | 1508 | 0.6777 | 0.7597 |
| 0.3837 | 5.0 | 1885 | 0.7232 | 0.7582 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. És a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, té un sentiment excloent, només se senten catalans, i un 4% sol espanyol."
)
# Output:
[{'label': 'AGAINST', 'score': 0.7457581758499146}]
independence_analysis(
"Llarena demana la detenció de Comín i Ponsatí aprofitant que són a Itàlia amb Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.7436802983283997}]
independence_analysis(
"Puigdemont, a l'estat espanyol: Quatre anys després, ens hem guanyat el dret a dir prou"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9040119647979736}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(CATALAN).ipynb#scrollTo=j29NHJtOyAVU)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;)
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) |
KBLab/megatron-bert-base-swedish-cased-600k | 540051f7da73debe8e3b38e6bb11060820f0eefa | 2022-03-17T11:11:13.000Z | [
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | KBLab | null | KBLab/megatron-bert-base-swedish-cased-600k | 12 | null | transformers | 10,483 | ---
language:
- sv
---
# Megatron-BERT-base Swedish 600k
This BERT model was trained using the Megatron-LM library.
The size of the model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 600k training steps. Its [sister model](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k) used the same setup, but was instead trained for only 125k steps.
The model has three sister models trained on the same dataset:
- [🤗 BERT Swedish](https://huggingface.co/KBLab/bert-base-swedish-cased-new)
- [Megatron-BERT-base-125k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k)
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)
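As a rough usage sketch (not part of the original card), the checkpoint loads like any other BERT masked-language model; the example sentence is illustrative:
```python
from transformers import pipeline

# Fill in the masked token with the Swedish Megatron-BERT checkpoint
unmasker = pipeline("fill-mask", model="KBLab/megatron-bert-base-swedish-cased-600k")
print(unmasker("Huvudstaden i Sverige är [MASK]."))
```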
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si). |
KETI-AIR/ke-t5-small-newslike | 292abf1540533590a6eb01550ccedf854392e7cc | 2021-06-23T03:12:48.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | KETI-AIR | null | KETI-AIR/ke-t5-small-newslike | 12 | null | transformers | 10,484 | Entry not found |
SI2M-Lab/DarijaBERT-arabizi | 9b419fa5da3aba612a5b2b7c8131b66e8515ad2e | 2021-12-27T08:41:53.000Z | [
"pytorch",
"bert",
"fill-mask",
"ar",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | SI2M-Lab | null | SI2M-Lab/DarijaBERT-arabizi | 12 | null | transformers | 10,485 | ---
language: ar
widget:
- text: " Mchit njib [MASK] ."
- text: " Yak nta li [MASK] lih dik lhedra."
- text: " Ach [MASK] daba."
- text: " Lmghrib ajmal [MASK] fl3alam."
---
AIOX Lab and SI2M Lab INSEA have joined forces to offer researchers, industrialists and the NLP (Natural Language Processing) community the first intelligent open-source system that understands the Moroccan dialectal language "Darija".
**DarijaBERT** is the first BERT model for the Moroccan Arabic dialect called “Darija”. It is based on the same architecture as BERT-base, but without the Next Sentence Prediction (NSP) objective. This model is the Arabizi-specific version of DarijaBERT: it was trained on a total of ~4.6 million sequences of the Darija dialect written in Latin letters.
The model was trained on a dataset issued from Youtube comments.
More details about DarijaBert are available in the dedicated GitHub [repository](https://github.com/AIOXLABS/DBert)
**Loading the model**
The model can be loaded directly using the Huggingface library:
```python
from transformers import AutoTokenizer, AutoModel
DarijaBERT_tokenizer = AutoTokenizer.from_pretrained("Kamel/DarijaBERT-arabizi")
DarijaBert_model = AutoModel.from_pretrained("Kamel/DarijaBERT-arabizi")
```
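A possible fill-mask sketch building on the snippet above (not part of the original card; it reuses the identifier from that snippet together with a widget example, and the predictions will depend on the checkpoint):
```python
from transformers import pipeline

# Fill-mask on Darija written in Latin letters (Arabizi), using a widget example from this card
fill_mask = pipeline("fill-mask", model="Kamel/DarijaBERT-arabizi")
for prediction in fill_mask("Mchit njib [MASK] ."):
    print(prediction["token_str"], round(prediction["score"], 3))
```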
**Acknowledgments**
We gratefully acknowledge Google’s TensorFlow Research Cloud (TRC) program for providing us with free Cloud TPUs.
<font size =2>**Warning**
Because this model was trained on texts from social networks, it can unfortunately generate toxic outputs that reflect part of the data it learned from.</font>
|
KoichiYasuoka/bert-large-japanese-char-extended | b10e7f6b01689eb567fcd380f4afefdf509c527c | 2022-06-21T07:51:33.000Z | [
"pytorch",
"bert",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/bert-large-japanese-char-extended | 12 | null | transformers | 10,486 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
- "wikipedia"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "酸素ボンベを充[MASK]する。"
---
# bert-large-japanese-char-extended
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-large-japanese-char](https://huggingface.co/cl-tohoku/bert-large-japanese-char). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune `bert-large-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-large-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-large-japanese-wikipedia-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-large-japanese-char-extended")
```
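A short fill-mask sketch using the widget example above (not from the original card):
```py
from transformers import pipeline

# Predict the masked character in the widget sentence
fill_mask = pipeline("fill-mask", model="KoichiYasuoka/bert-large-japanese-char-extended")
print(fill_mask("酸素ボンベを充[MASK]する。"))
```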
|
KoichiYasuoka/roberta-base-thai-char-upos | d7c8222db8ac8d3c3cedf63dc4fd06a15b4c88a6 | 2022-04-12T10:26:40.000Z | [
"pytorch",
"roberta",
"token-classification",
"th",
"dataset:universal_dependencies",
"transformers",
"thai",
"pos",
"wikipedia",
"dependency-parsing",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-thai-char-upos | 12 | null | transformers | 10,487 | ---
language:
- "th"
tags:
- "thai"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "หลายหัวดีกว่าหัวเดียว"
---
# roberta-base-thai-char-upos
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [roberta-base-thai-char](https://huggingface.co/KoichiYasuoka/roberta-base-thai-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-char-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-thai-char-upos")
s="หลายหัวดีกว่าหัวเดียว"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-thai-char-upos")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
M-FAC/bert-tiny-finetuned-qnli | 00679507fb7104e655fdf5899bf8c222866bb1b0 | 2021-12-13T08:11:40.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
]
| text-classification | false | M-FAC | null | M-FAC/bert-tiny-finetuned-qnli | 12 | null | transformers | 10,488 | # BERT-tiny model finetuned with M-FAC
This model is finetuned on QNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
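For reference, a minimal inference sketch with the published checkpoint (not part of the original card; the sentence pair is illustrative and the label mapping depends on the checkpoint config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "M-FAC/bert-tiny-finetuned-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QNLI is a sentence-pair task: does the sentence contain the answer to the question?
inputs = tokenizer(
    "Where is the Eiffel Tower located?",
    "The Eiffel Tower is in Paris.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```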
## Finetuning setup
For a fair comparison against the default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer for M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QNLI validation set:
```bash
accuracy = 81.54
```
Mean and standard deviation for 5 runs on QNLI validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 77.85 ± 0.15 |
| M-FAC | 81.17 ± 0.43 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name qnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
M47Labs/english_news_classification_headlines | da62e942b493887691e2a4adb3e70dceff2e4402 | 2021-09-08T15:03:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | M47Labs | null | M47Labs/english_news_classification_headlines | 12 | null | transformers | 10,489 | Entry not found |
Maltehb/aelaectra-danish-electra-small-uncased-ner-dane | 419bf45f8dc725fe4d902a44c160a77039fe086e | 2021-08-03T05:06:18.000Z | [
"pytorch",
"tf",
"electra",
"token-classification",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"transformers",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | Maltehb | null | Maltehb/aelaectra-danish-electra-small-uncased-ner-dane | 12 | null | transformers | 10,490 | ---
language: "da"
tags:
- ælæctra
- pytorch
- danish
- ELECTRA-Small
- replaced token detection
license: "mit"
datasets:
- DAGW
widget:
- text: "Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."
metrics:
- f1
---
# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities; please do note that GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example of how to load the finetuned Ælæctra-uncased model for Named Entity Recognition in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane")
```
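Building on the snippet above, a possible end-to-end sketch with the NER pipeline and the widget sentence from this card (not part of the original description; `aggregation_strategy` requires a reasonably recent 🤗Transformers version):
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."))
```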
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/)
The pretraining was done utilizing a single NVIDIA Tesla V100 GPU with 16 GiB of memory, provided by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased models.
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is build upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20ÆlæctraUncasedNER) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ |
Media1129/keyword-tag-model-10000-9-16_more_ingredient | 0b1eb8f94c69552ca33e6fe6387d3017737eeaf8 | 2021-09-17T02:47:19.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-10000-9-16_more_ingredient | 12 | null | transformers | 10,491 | Entry not found |
Media1129/keyword-tag-model-3000-v2 | d9e56eb5089d949de40972cd6c713ff8029cc9b9 | 2021-08-30T05:40:33.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-3000-v2 | 12 | null | transformers | 10,492 | Entry not found |
Media1129/keyword-tag-model-6000 | fbb43bf22032e5cb0c37d4e9b2c8fe4f6a7ac85e | 2021-08-30T05:15:33.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-6000 | 12 | null | transformers | 10,493 | Entry not found |
Media1129/recipe-tag-model | 1820b59ad9c11d7aefa8734017c0ff0d75a3e7eb | 2021-08-04T04:16:59.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/recipe-tag-model | 12 | null | transformers | 10,494 | Entry not found |
MhF/distilbert-base-uncased-finetuned-emotion | 3cf065757b4ddbcb8e5d1d0fd05cb21c6b1161f4 | 2022-02-15T05:38:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | MhF | null | MhF/distilbert-base-uncased-finetuned-emotion | 12 | null | transformers | 10,495 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9217985126397109
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2232
- Accuracy: 0.9215
- F1: 0.9218
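A minimal inference sketch (not part of the auto-generated card; the example sentence is illustrative and the label names depend on the checkpoint config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MhF/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy to see you again!"))
```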
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8098 | 1.0 | 250 | 0.3138 | 0.9025 | 0.9001 |
| 0.2429 | 2.0 | 500 | 0.2232 | 0.9215 | 0.9218 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Momerio/meigen_generate_Japanese | 43a89f3fcd45816b2da0582b994f5876ed839e79 | 2021-10-26T01:19:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ja",
"transformers"
]
| text-generation | false | Momerio | null | Momerio/meigen_generate_Japanese | 12 | null | transformers | 10,496 | ---
language:
- ja
---
Japanese famous-quote generation model (名言推論モデル)
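A minimal text-generation sketch (not part of the original card; the prompt and settings are illustrative, and it assumes the checkpoint ships a compatible tokenizer):
```python
from transformers import pipeline

# Generate quote-style continuations from a short Japanese prompt
generator = pipeline("text-generation", model="Momerio/meigen_generate_Japanese")
print(generator("人生とは", max_length=40, num_return_sequences=3, do_sample=True))
```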
|
NDugar/v3large-2epoch | 5e4ff2a7485e0001613a487173b17f17ee809d4f | 2021-12-06T19:28:46.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"license:mit",
"zero-shot-classification"
]
| zero-shot-classification | false | NDugar | null | NDugar/v3large-2epoch | 12 | null | transformers | 10,497 | ---
language: en
tags:
- deberta-v3
- deberta-v2`
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and was trained on 160GB of raw data.
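Since this checkpoint is tagged for the zero-shot-classification pipeline, here is a hedged usage sketch (the example text and candidate labels are illustrative and not from the original card; it assumes the checkpoint's config carries MNLI-style entailment labels):
```python
from transformers import pipeline

# Zero-shot classification via NLI entailment scoring
classifier = pipeline("zero-shot-classification", model="NDugar/v3large-2epoch")
result = classifier(
    "The team released a new open-source library for training language models.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"], result["scores"])
```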
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` |
Narsil/gpt2 | dad46dea5b8771c8fb31415c3dfce523ae8bae36 | 2021-06-22T15:04:20.000Z | [
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"gpt2",
"text-generation",
"en",
"transformers",
"exbert",
"license:mit"
]
| text-generation | false | Narsil | null | Narsil/gpt2 | 12 | null | transformers | 10,498 | ---
language: en
tags:
- exbert
license: mit
pipeline_tag: text-generation
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
    <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
Navya2608/DialoGPT-medium-rachel | a4324e4780acd290f17d42d880a37607390a57d3 | 2021-11-05T16:35:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Navya2608 | null | Navya2608/DialoGPT-medium-rachel | 12 | null | transformers | 10,499 | ---
tags:
- conversational
---
# Rachel Green DialoGPT Model |