| modelId (string, length 4–112) | sha (string, length 40) | lastModified (string, length 24) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, length 2–38, nullable) | config (null) | id (string, length 4–112) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, length 0–186k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingtweets/amityexploder | 3840356e2a60fdda8d98f8383bc006935bbf4313 | 2022-07-15T15:22:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/amityexploder | 9 | null | transformers | 12,800 | ---
language: en
thumbnail: http://www.huggingtweets.com/amityexploder/1657898522848/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1544626436899852289/QMNNiqFg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">SOPPY</div>
<div style="text-align: center; font-size: 14px;">@amityexploder</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from SOPPY.
| Data | SOPPY |
| --- | --- |
| Tweets downloaded | 2832 |
| Retweets | 102 |
| Short tweets | 574 |
| Tweets kept | 2156 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/16wq3mtu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @amityexploder's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rcycu202) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rcycu202/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/amityexploder')
generator("My dream is", num_return_sequences=5)
```
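For reproducible samples you can seed the generator before calling it; below is a minimal sketch (the prompt, sequence count and length are illustrative only):
```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='huggingtweets/amityexploder')
set_seed(42)  # fix the RNG so repeated runs return the same samples
for output in generator("My dream is", num_return_sequences=5, max_length=50):
    print(output['generated_text'])
```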
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner-full | 44a17bbe16338d7548cb9246c9484d5c01992d87 | 2022-07-15T21:22:23.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:toydata",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kinanmartin | null | kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner-full | 9 | null | transformers | 12,801 | ---
tags:
- generated_from_trainer
datasets:
- toydata
model-index:
- name: xlm-roberta-large-ner-hrl-finetuned-ner-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-ner-hrl-finetuned-ner-full
This model is a fine-tuned version of [Davlan/xlm-roberta-large-ner-hrl](https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl) on the toydata dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (sketched below as `TrainingArguments`):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
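As a rough sketch, the settings above correspond to a `transformers` `TrainingArguments` configuration along the following lines (the output directory is a placeholder, and the Adam betas/epsilon are the library defaults):
```python
from transformers import TrainingArguments

# Sketch only: maps the listed hyperparameters onto the Trainer API.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-ner-hrl-finetuned-ner-full",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```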
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Lvxue/distilled_mt5-base_10epoch | 6adef3e25bf3d5456072b21ee654faf6e085a7cb | 2022-07-18T07:28:03.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Lvxue | null | Lvxue/distilled_mt5-base_10epoch | 9 | null | transformers | 12,802 | Entry not found |
KoichiYasuoka/deberta-base-thai-ud-head | e58dcf10f6f8dda70425818bd8fe6c9d8d500435 | 2022-07-20T03:52:02.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"th",
"dataset:universal_dependencies",
"transformers",
"thai",
"dependency-parsing",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-thai-ud-head | 9 | null | transformers | 12,803 | ---
language:
- "th"
tags:
- "thai"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "question-answering"
widget:
- text: "กว่า"
context: "หลายหัวดีกว่าหัวเดียว"
- text: "หลาย"
context: "หลายหัวดีกว่าหัวเดียว"
- text: "หัว"
context: "หลาย[MASK]ดีกว่าหัวเดียว"
---
# deberta-base-thai-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on Thai Wikipedia texts for dependency-parsing (head-detection on Universal Dependencies) as question-answering, derived from [deberta-base-thai](https://huggingface.co/KoichiYasuoka/deberta-base-thai). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once in the `context`.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-thai-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-thai-ud-head")
question="กว่า"
context="หลายหัวดีกว่าหัวเดียว"
inputs=tokenizer(question,context,return_tensors="pt",return_offsets_mapping=True)
offsets=inputs.pop("offset_mapping").tolist()[0]
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```
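To disambiguate a word that appears more than once, mask out the other occurrence in `context` with `[MASK]`, as in the widget examples above; a minimal sketch:
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-thai-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-thai-ud-head")
question="หัว"
context="หลาย[MASK]ดีกว่าหัวเดียว"  # the other occurrence of "หัว" is masked out
inputs=tokenizer(question,context,return_tensors="pt",return_offsets_mapping=True)
offsets=inputs.pop("offset_mapping").tolist()[0]
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```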
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/deberta-base-thai-ud-head")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
|
respect5716/koenbert-dev | 12bf8126820cdb16b51e33659a01613a6baea20e | 2022-07-17T07:54:58.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | respect5716 | null | respect5716/koenbert-dev | 9 | null | transformers | 12,804 | Entry not found |
Ankhitan/1000-model1 | 24273e853ade712112ee6b17fef4ca10a460f077 | 2022-07-17T17:48:49.000Z | [
"pytorch",
"segformer",
"transformers"
]
| null | false | Ankhitan | null | Ankhitan/1000-model1 | 9 | null | transformers | 12,805 | Entry not found |
raisinbl/distilbert-base-uncased-finetuned-squad_2_512_1 | 168972faad1303459a46c6408073fd1aeaab83a3 | 2022-07-19T12:38:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | raisinbl | null | raisinbl/distilbert-base-uncased-finetuned-squad_2_512_1 | 9 | null | transformers | 12,806 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad_2_512_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad_2_512_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2681 | 1.0 | 4079 | 1.2434 |
| 1.0223 | 2.0 | 8158 | 1.3153 |
| 0.865 | 3.0 | 12237 | 1.3225 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Kayvane/distilroberta-base-wandb-week-3-complaints-classifier-1024 | 9f38c77fe42fce8d43d1c7c4d16928f06b05eb47 | 2022-07-19T00:52:23.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:consumer-finance-complaints",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Kayvane | null | Kayvane/distilroberta-base-wandb-week-3-complaints-classifier-1024 | 9 | null | transformers | 12,807 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- consumer-finance-complaints
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: distilroberta-base-wandb-week-3-complaints-classifier-1024
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: consumer-finance-complaints
type: consumer-finance-complaints
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8279904184292339
- name: F1
type: f1
value: 0.8236604095677945
- name: Recall
type: recall
value: 0.8279904184292339
- name: Precision
type: precision
value: 0.8235526237070518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-wandb-week-3-complaints-classifier-1024
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the consumer-finance-complaints dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5351
- Accuracy: 0.8280
- F1: 0.8237
- Recall: 0.8280
- Precision: 0.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.027176214786854e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1024
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7756 | 0.61 | 1500 | 0.7411 | 0.7647 | 0.7375 | 0.7647 | 0.7606 |
| 0.5804 | 1.22 | 3000 | 0.6140 | 0.8088 | 0.8052 | 0.8088 | 0.8077 |
| 0.5008 | 1.83 | 4500 | 0.5351 | 0.8280 | 0.8237 | 0.8280 | 0.8236 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nvidia/stt_ca_conformer_ctc_large | 4d978f8ad59a13f4b686eb001300577f3db7cf07 | 2022-07-22T18:34:53.000Z | [
"nemo",
"ca",
"dataset:mozilla-foundation/common_voice_9_0",
"arxiv:2005.08100",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"Riva",
"license:cc-by-4.0",
"model-index"
]
| automatic-speech-recognition | false | nvidia | null | nvidia/stt_ca_conformer_ctc_large | 9 | 1 | nemo | 12,808 | ---
language:
- ca
library_name: nemo
datasets:
- mozilla-foundation/common_voice_9_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
- Riva
license: cc-by-4.0
model-index:
- name: stt_ca_conformer_ctc_large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 9.0
type: mozilla-foundation/common_voice_9_0
config: ca
split: test
args:
language: ca
metrics:
- name: Test WER
type: wer
value: 4.27
---
# NVIDIA Conformer-CTC Large (Catalan)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
This model transcribes speech into lowercase Catalan alphabet including spaces, dashes and apostrophes, and is trained on around 1023 hours of Catalan speech data.
It is a non-autoregressive "large" variant of Conformer, with around 120 million parameters.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc) for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## Usage
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/stt_ca_conformer_ctc_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_ca_conformer_ctc_large" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16 kHz mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
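Putting the two together, a minimal sketch of capturing that string output (assuming the sample file downloaded above, and that `transcribe` returns one transcription per input file):
```python
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/stt_ca_conformer_ctc_large")
# transcribe() takes a list of paths to 16 kHz mono wav files
transcriptions = asr_model.transcribe(["2086-149220-0033.wav"])
print(transcriptions[0])
```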
## Model Architecture
The Conformer-CTC model is a non-autoregressive variant of the Conformer model [1] for Automatic Speech Recognition, which uses CTC loss/decoding instead of a Transducer. You may find more details about this model here: [Conformer-CTC Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc).
## Training
The NeMo toolkit [3] was used for training the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
The vocabulary we use contains 44 characters:
```python
[' ', "'", '-', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '·', 'à', 'á', 'ç', 'è', 'é', 'í', 'ï', 'ñ', 'ò', 'ó', 'ú', 'ü', 'ı', '–', '—']
```
Full config can be found inside the .nemo files.
The checkpoint of the language model used as the neural rescorer can be found [here](https://ngc.nvidia.com/catalog/models/nvidia:nemo:asrlm_en_transformer_large_ls). You may find more info on how to train and use language models for ASR models here: [ASR Language Modeling](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html)
### Datasets
All the models in this collection are trained on the MCV-9.0 Catalan dataset, which contains around 1203 hours of training speech, 28 hours of development speech and 27 hours of test speech.
## Performance
The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | Dev WER| Test WER| Train Dataset |
|---------|-----------------------|-----------------|-----|------|-----------------|
| 1.11.0 | SentencePiece Unigram | 128 |4.70 | 4.27 | MCV-9.0 Train set |
You may use language models (LMs) and beam search to improve the accuracy of the models, as reported in the following table.
| Language Model | Test WER | Test WER w/ Oracle LM | Train Dataset | Settings |
|----------------|----------|-----------------------|------------------|-------------------------------------------------------|
| N-gram LM | 3.77 | 1.54 |MCV-9.0 Train set |N=6, beam_width=128, ngram_alpha=1.5, ngram_beta=2.0 |
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
- [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) |
WYHu/cve2cpe_gpt2 | dc723ece20b8a3800ac25749727fc75846841d24 | 2022-07-19T09:14:41.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | WYHu | null | WYHu/cve2cpe_gpt2 | 9 | null | transformers | 12,809 | Entry not found |
kabelomalapane/Nso-En_update | 9ef6fc7b31d78b8da882f0561c246e5fa8bfc136 | 2022-07-19T11:40:40.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| translation | false | kabelomalapane | null | kabelomalapane/Nso-En_update | 9 | null | transformers | 12,810 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Nso-En_update
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Nso-En_update
This model is a fine-tuned version of [kabelomalapane/En-Nso](https://huggingface.co/kabelomalapane/En-Nso) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9219
- Bleu: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:----:|
| No log | 1.0 | 108 | 2.0785 | 0.0 |
| No log | 2.0 | 216 | 1.9015 | 0.0 |
| No log | 3.0 | 324 | 1.8730 | 0.0 |
| No log | 4.0 | 432 | 1.8626 | 0.0 |
| 2.1461 | 5.0 | 540 | 1.8743 | 0.0 |
| 2.1461 | 6.0 | 648 | 1.8903 | 0.0 |
| 2.1461 | 7.0 | 756 | 1.9018 | 0.0 |
| 2.1461 | 8.0 | 864 | 1.9236 | 0.0 |
| 2.1461 | 9.0 | 972 | 1.9210 | 0.0 |
| 1.2781 | 10.0 | 1080 | 1.9219 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
naver-clova-ix/donut-proto | 15c4fac7b73aa06261d068ed5ec17b19b147f5bb | 2022-07-19T13:53:17.000Z | [
"pytorch",
"donut",
"transformers",
"license:mit"
]
| null | false | naver-clova-ix | null | naver-clova-ix/donut-proto | 9 | null | transformers | 12,811 | ---
license: mit
---
|
naver-clova-ix/donut-base-finetuned-cord-v1-2560 | 4518def4e1d14f650f8df5dedaaa3e166e7c2c3e | 2022-07-20T06:09:40.000Z | [
"pytorch",
"donut",
"transformers",
"license:mit"
]
| null | false | naver-clova-ix | null | naver-clova-ix/donut-base-finetuned-cord-v1-2560 | 9 | null | transformers | 12,812 | ---
license: mit
---
|
James-kc-min/L_Roberta3 | c91574ea1fed70cf16ba85434828d50ec57b12df | 2022-07-20T09:08:31.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | James-kc-min | null | James-kc-min/L_Roberta3 | 9 | null | transformers | 12,813 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: L_Roberta3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# L_Roberta3
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2095
- Accuracy: 0.9555
- F1: 0.9555
- Precision: 0.9555
- Recall: 0.9555
- C Report:

  | | precision | recall | f1-score | support |
  |:---|:---:|:---:|:---:|:---:|
  | 0 | 0.97 | 0.95 | 0.96 | 876 |
  | 1 | 0.94 | 0.97 | 0.95 | 696 |
  | accuracy | | | 0.96 | 1572 |
  | macro avg | 0.95 | 0.96 | 0.96 | 1572 |
  | weighted avg | 0.96 | 0.96 | 0.96 | 1572 |

- C Matrix: None
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | C Report (per class: precision / recall / F1, support) | C Matrix |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---|:---:|
| 0.2674 | 1.0 | 329 | 0.2436 | 0.9389 | 0.9389 | 0.9389 | 0.9389 | 0: 0.94 / 0.95 / 0.95 (876); 1: 0.94 / 0.92 / 0.93 (696); accuracy 0.94; macro avg 0.94 / 0.94 / 0.94; weighted avg 0.94 / 0.94 / 0.94 (1572) | None |
| 0.1377 | 2.0 | 658 | 0.1506 | 0.9408 | 0.9408 | 0.9408 | 0.9408 | 0: 0.97 / 0.92 / 0.95 (876); 1: 0.91 / 0.96 / 0.94 (696); accuracy 0.94; macro avg 0.94 / 0.94 / 0.94; weighted avg 0.94 / 0.94 / 0.94 (1572) | None |
| 0.0898 | 3.0 | 987 | 0.1491 | 0.9548 | 0.9548 | 0.9548 | 0.9548 | 0: 0.96 / 0.96 / 0.96 (876); 1: 0.95 / 0.95 / 0.95 (696); accuracy 0.95; macro avg 0.95 / 0.95 / 0.95; weighted avg 0.95 / 0.95 / 0.95 (1572) | None |
| 0.0543 | 4.0 | 1316 | 0.1831 | 0.9561 | 0.9561 | 0.9561 | 0.9561 | 0: 0.97 / 0.95 / 0.96 (876); 1: 0.94 / 0.96 / 0.95 (696); accuracy 0.96; macro avg 0.95 / 0.96 / 0.96; weighted avg 0.96 / 0.96 / 0.96 (1572) | None |
| 0.0394 | 5.0 | 1645 | 0.2095 | 0.9555 | 0.9555 | 0.9555 | 0.9555 | 0: 0.97 / 0.95 / 0.96 (876); 1: 0.94 / 0.97 / 0.95 (696); accuracy 0.96; macro avg 0.95 / 0.96 / 0.96; weighted avg 0.96 / 0.96 / 0.96 (1572) | None |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
CennetOguz/gpt2-kit-cls | 1f01c37205709b5affa786fdbeeb5ea75b861549 | 2022-07-20T11:03:45.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | CennetOguz | null | CennetOguz/gpt2-kit-cls | 9 | null | transformers | 12,814 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kit-cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kit-cls
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 2.9911 |
| No log | 2.0 | 6 | 2.8329 |
| No log | 3.0 | 9 | 2.7569 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
oMateos2020/t5-small_adafactor | 9cd5c335d9415493eadd8c41e165c2a424506efc | 2022-07-23T18:20:11.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | oMateos2020 | null | oMateos2020/t5-small_adafactor | 9 | null | transformers | 12,815 | ---
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small_adafactor
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 32.8631
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_adafactor
This model is a fine-tuned version of [oMateos2020/t5-small_adafactor](https://huggingface.co/oMateos2020/t5-small_adafactor) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1167
- Rouge1: 32.8631
- Rouge2: 11.658
- Rougel: 26.6192
- Rougelsum: 26.6224
- Gen Len: 18.7663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1315 | 0.02 | 200 | 2.1865 | 31.9486 | 10.9605 | 25.7418 | 25.7408 | 18.8466 |
| 2.1297 | 0.05 | 400 | 2.1965 | 31.9598 | 10.9463 | 25.784 | 25.7867 | 18.8525 |
| 2.1284 | 0.07 | 600 | 2.1981 | 32.231 | 11.1003 | 26.0155 | 26.0226 | 18.8466 |
| 2.1315 | 0.09 | 800 | 2.1873 | 31.9161 | 10.8642 | 25.7166 | 25.7273 | 18.8227 |
| 2.1212 | 0.12 | 1000 | 2.1892 | 32.4646 | 11.1852 | 26.2451 | 26.2439 | 18.8259 |
| 2.1028 | 0.14 | 1200 | 2.1978 | 32.2886 | 11.1346 | 26.0795 | 26.0827 | 18.7685 |
| 2.1221 | 0.16 | 1400 | 2.1936 | 32.2901 | 11.0821 | 25.9983 | 26.0024 | 18.7798 |
| 2.1168 | 0.19 | 1600 | 2.1922 | 32.1655 | 11.1451 | 25.986 | 25.9893 | 18.8232 |
| 2.1166 | 0.21 | 1800 | 2.1836 | 32.2611 | 11.174 | 26.0594 | 26.0688 | 18.7633 |
| 2.1053 | 0.24 | 2000 | 2.1929 | 32.3321 | 11.213 | 26.1859 | 26.1903 | 18.7758 |
| 2.1126 | 0.26 | 2200 | 2.1811 | 32.2078 | 11.1792 | 26.0776 | 26.0817 | 18.8197 |
| 2.1038 | 0.28 | 2400 | 2.1836 | 32.2799 | 11.2511 | 26.1191 | 26.1251 | 18.7884 |
| 2.1181 | 0.31 | 2600 | 2.1805 | 32.1197 | 11.1586 | 26.0441 | 26.0441 | 18.8045 |
| 2.1217 | 0.33 | 2800 | 2.1806 | 32.3051 | 11.2638 | 26.1319 | 26.1386 | 18.7886 |
| 2.116 | 0.35 | 3000 | 2.1741 | 32.2799 | 11.1887 | 26.1224 | 26.1363 | 18.7769 |
| 2.1118 | 0.38 | 3200 | 2.1767 | 32.387 | 11.2053 | 26.077 | 26.0845 | 18.8407 |
| 2.1164 | 0.4 | 3400 | 2.1743 | 32.5008 | 11.4021 | 26.3291 | 26.3297 | 18.7731 |
| 2.1068 | 0.42 | 3600 | 2.1673 | 32.2347 | 11.1676 | 26.0657 | 26.0662 | 18.817 |
| 2.1276 | 0.45 | 3800 | 2.1664 | 32.2434 | 11.2862 | 26.094 | 26.0994 | 18.7713 |
| 2.1313 | 0.47 | 4000 | 2.1636 | 32.694 | 11.3724 | 26.4071 | 26.4008 | 18.7709 |
| 2.1229 | 0.49 | 4200 | 2.1633 | 32.456 | 11.4057 | 26.2733 | 26.2689 | 18.7586 |
| 2.129 | 0.52 | 4400 | 2.1641 | 32.309 | 11.2133 | 26.1062 | 26.1121 | 18.7729 |
| 2.1425 | 0.54 | 4600 | 2.1577 | 32.5879 | 11.4001 | 26.3045 | 26.3078 | 18.8104 |
| 2.1536 | 0.56 | 4800 | 2.1507 | 32.5152 | 11.4035 | 26.3054 | 26.3116 | 18.7941 |
| 2.148 | 0.59 | 5000 | 2.1503 | 32.8088 | 11.5641 | 26.5346 | 26.5311 | 18.7602 |
| 2.1541 | 0.61 | 5200 | 2.1491 | 32.8185 | 11.5816 | 26.5261 | 26.527 | 18.7654 |
| 2.155 | 0.64 | 5400 | 2.1466 | 32.7229 | 11.5339 | 26.4363 | 26.442 | 18.8404 |
| 2.1579 | 0.66 | 5600 | 2.1435 | 32.884 | 11.6042 | 26.5862 | 26.5891 | 18.7713 |
| 2.1601 | 0.68 | 5800 | 2.1393 | 32.8027 | 11.5328 | 26.4521 | 26.4567 | 18.7904 |
| 2.1765 | 0.71 | 6000 | 2.1393 | 32.8059 | 11.5751 | 26.5499 | 26.5551 | 18.7768 |
| 2.2176 | 0.73 | 6200 | 2.1345 | 33.0734 | 11.8056 | 26.7546 | 26.7607 | 18.7756 |
| 2.2126 | 0.75 | 6400 | 2.1328 | 32.7478 | 11.5925 | 26.5333 | 26.5359 | 18.7819 |
| 2.1916 | 0.78 | 6600 | 2.1298 | 32.658 | 11.491 | 26.379 | 26.3869 | 18.8101 |
| 2.2162 | 0.8 | 6800 | 2.1297 | 32.7843 | 11.5629 | 26.4736 | 26.4728 | 18.8187 |
| 2.2358 | 0.82 | 7000 | 2.1287 | 32.9181 | 11.6378 | 26.5966 | 26.5987 | 18.8039 |
| 2.2371 | 0.85 | 7200 | 2.1265 | 32.8413 | 11.674 | 26.5905 | 26.5831 | 18.7962 |
| 2.256 | 0.87 | 7400 | 2.1245 | 32.7412 | 11.5627 | 26.4976 | 26.503 | 18.7728 |
| 2.2566 | 0.89 | 7600 | 2.1220 | 32.8165 | 11.6069 | 26.5301 | 26.5295 | 18.7871 |
| 2.2954 | 0.92 | 7800 | 2.1197 | 32.7399 | 11.5417 | 26.4914 | 26.4938 | 18.7752 |
| 2.2766 | 0.94 | 8000 | 2.1187 | 32.853 | 11.6411 | 26.5909 | 26.5938 | 18.7852 |
| 2.3273 | 0.96 | 8200 | 2.1169 | 32.9376 | 11.709 | 26.6665 | 26.6672 | 18.7734 |
| 2.3182 | 0.99 | 8400 | 2.1167 | 32.8631 | 11.658 | 26.6192 | 26.6224 | 18.7663 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anneke/finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples | 3216215f35af559af2f29b13101041734094e872 | 2022-07-20T12:35:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | anneke | null | anneke/finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples | 9 | null | transformers | 12,816 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1289
- Accuracy: 0.977
- F1: 0.9878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
caracena/mdeberta-clinical-base-es | ac8107b0b145e642b5fa58f1612630ca5c329de4 | 2022-07-20T14:49:21.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | caracena | null | caracena/mdeberta-clinical-base-es | 9 | null | transformers | 12,817 | Entry not found |
johnheo1128/distilbert-base-uncased-finetuned-cola | 88d4f70ca12f965efb77b352d94df36661080874 | 2022-07-20T18:21:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | johnheo1128 | null | johnheo1128/distilbert-base-uncased-finetuned-cola | 9 | null | transformers | 12,818 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5477951635989807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8081
- Matthews Correlation: 0.5478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5222 | 1.0 | 535 | 0.5270 | 0.4182 |
| 0.3451 | 2.0 | 1070 | 0.5017 | 0.4810 |
| 0.2309 | 3.0 | 1605 | 0.5983 | 0.5314 |
| 0.179 | 4.0 | 2140 | 0.7488 | 0.5291 |
| 0.1328 | 5.0 | 2675 | 0.8081 | 0.5478 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Erolgo/Insectodoptera | 1e93e82506388d16c54f993c565d0c5bd00feb68 | 2022-07-26T00:11:34.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers"
]
| image-classification | false | Erolgo | null | Erolgo/Insectodoptera | 9 | null | transformers | 12,819 | DEVELOPMENT OF AN AI TAXONOMIST ASSISTANT
1. Develop and provide an at-microscope data-entry assistant tool. Sample metadata entry fields are provided above a live down-scope camera view, with a list of taxon buttons to select from (or add to). The live camera view is captured at moderate resolution from the moment of sample presentation to the moment the qualified taxonomist identifies the specimen (as indicated by a button-click). The button-click labels / tags the video footage with the taxon.
2. Each video clip (set of still frames) could be summarised to a set of principal component frames characterising the full set of frames.
3. Configure an image classifier such as an ensemble of vision transformers like Kyathanahally and Hardeman from EAWAG, train it on the summarising video frames from step 2. Actually, even the full set of frames (that contain the specimen) could be useful to introduce visual noise and make the classifier more robust.
4. Integrate the classifier into the tool described in 1, to offer the additional feature of: the list of previous taxa being sorted by likelihood as determined by AI prediction.
5. Continued use of the tool by qualified taxonomists would provide an opportunity for ongoing training image collection, particularly of taxa that are poorly predicted. The model could be further fine tuned with these additional training images.
6. When classification performance is comparable with qualified human taxonomists, release the data collection tool to users of taxonomy services in the market, with taxonomic classification as a web service.
7. Ultimately, and using the images collected at the moment the qualified human taxonomist clicked an IDing button, the tool might illustrate the orientation / presentation of the specimen required to resolve a classification.
https://forum.inaturalist.org/t/preferred-ways-of-batch-downloading-a-subset-of-the-inaturalist-data/18342/7
To accomplish the first part, you can either use the observation CSV export or the get-observations endpoint in the iNaturalist API. If going with the CSV approach, you'll be limited to sets of 200,000 observations, and when you choose to get the image_url field, it will give you the URL for only the first photo of each observation. If going with the API approach, you will be limited to 10,000 observations per set of parameters, but you will be able to get all the photo URLs associated with each observation. You can work around the 200k and 10k observation limits by specifying slightly different sets of parameters for each set (easiest to do using a date range or id range).
To go with the CSV approach, you'll have to go to the export page and then put the parameters you want (ex. has%5B%5D=photos&quality_grade=any&identifications=any&taxon_id=47114&photo_license=CC0%2CCC-BY&verifiable=true) in the gray box in section 1.
To go with the API approach, you'll just make API requests using your favorite tool / language (ex. https://api.inaturalist.org/v1/observations?sound_license=cc-by%2Ccc-by-nc&taxon_id=47114&order=desc&order_by=created_at).
There are many methods to accomplish the second part. I've described how to accomplish this using Windows batch files + curl (along with notes about image sizes / names, download limits, etc.), but something similar could also be done in R or whatever your favorite language is.
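As a rough illustration of the API approach in Python (a sketch only: the endpoint and `taxon_id` come from the example URL above, while the paging parameters and the `results`/`photos`/`url` response fields are assumptions about the iNaturalist API):
```python
import requests

# Page through the observations endpoint and collect photo URLs.
params = {"taxon_id": 47114, "photos": "true", "per_page": 200, "page": 1}
photo_urls = []
while True:
    resp = requests.get("https://api.inaturalist.org/v1/observations", params=params, timeout=30)
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        break
    for obs in results:
        for photo in obs.get("photos", []):
            photo_urls.append(photo.get("url"))
    if params["page"] * params["per_page"] >= 10000:
        break  # the API serves at most ~10,000 observations per set of parameters (see above)
    params["page"] += 1

print(f"collected {len(photo_urls)} photo URLs")
```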
|
trevorj/BART_reddit | 25282f839c976cf937558516fc5ed6bff7006b99 | 2022-07-24T01:43:15.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | trevorj | null | trevorj/BART_reddit | 9 | null | transformers | 12,820 | Entry not found |
gciaffoni/wav2vec2-large-xls-r-300m-it-colab7 | 44cd164f4db77994446212ab122cd9f4848c4b99 | 2022-07-23T18:10:04.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | gciaffoni | null | gciaffoni/wav2vec2-large-xls-r-300m-it-colab7 | 9 | null | transformers | 12,821 | Entry not found |
steven123/Check_Gum_Teeth | 28d9b6aa236f089e12658637eb8a3a20e80baa64 | 2022-07-23T14:50:43.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | steven123 | null | steven123/Check_Gum_Teeth | 9 | null | transformers | 12,822 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Check_Gum_Teeth
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# Check_Gum_Teeth
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bad_Gum

#### Good_Gum
 |
erikanesse/great-books-bot | 6758d5344827e9271137b1312fd2235fba37a176 | 2022-07-23T18:55:23.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-generation | false | erikanesse | null | erikanesse/great-books-bot | 9 | null | transformers | 12,823 | ---
tags:
- generated_from_trainer
model-index:
- name: great-books-bot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# great-books-bot
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 300
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
swtx/Erlangshen-Deberta-97M-Chinese | 364f7efcc78984c99127baafabc819987a4cbe44 | 2022-07-25T06:25:48.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"zh",
"transformers",
"bert",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | swtx | null | swtx/Erlangshen-Deberta-97M-Chinese | 9 | null | transformers | 12,824 | ---
language:
- zh
license: apache-2.0
tags:
- bert
inference: true
widget:
- text: "生活的真谛是[MASK]。"
---
# Erlangshen-Deberta-97M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
A 97-million-parameter DeBERTa-V2 base model with an encoder-only transformer structure, trained on 180G of Chinese data with 24 A100 (40G) GPUs for 7 days, consuming 1B samples in total.
## Task Description
Erlangshen-Deberta-97M-Chinese is pre-trained with a BERT-like masked language modeling task, following the DeBERTa [paper](https://readpaper.com/paper/3033187248).
## Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
import torch
tokenizer=AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Deberta-97M-Chinese', use_fast=False)
model=AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-Deberta-97M-Chinese')
text = '生活的真谛是[MASK]。'
fillmask_pipe = FillMaskPipeline(model, tokenizer, device=7)
print(fillmask_pipe(text, top_k=10))
```
## Finetune
We present the dev results on some tasks.
| Model | OCNLI | CMNLI |
| ---------------------------------- | ----- | ------ |
| RoBERTa-base | 0.743 | 0.7973 |
| **Erlangshen-Deberta-97M-Chinese** | 0.752 | 0.807 |
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
UCSYNLP/MyanBERTa | 9893af94fa060d5135e85d5cd876693d71da8732 | 2022-07-26T04:02:59.000Z | [
"pytorch",
"roberta",
"fill-mask",
"my",
"dataset:MyCorpus",
"dataset:publicly available blogs and websites",
"transformers",
"MyanBERTa",
"Myanmar",
"BERT",
"RoBERTa",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | UCSYNLP | null | UCSYNLP/MyanBERTa | 9 | null | transformers | 12,825 | ---
language: my
tags:
- MyanBERTa
- Myanmar
- BERT
- RoBERTa
license: apache-2.0
datasets:
- MyCorpus
- publicly available blogs and websites
---
## Model description
This model is a BERT-based Myanmar pre-trained language model.
MyanBERTa has been pre-trained for 528K steps on a word segmented Myanmar dataset consisting of 5,992,299 sentences (136M words).
As the tokenizer, a byte-level BPE tokenizer with 30,522 subword units, learned after word segmentation, is applied.
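A minimal usage sketch with the standard `transformers` fill-mask pipeline (the input below is a placeholder and should be replaced by word-segmented Myanmar text containing a single `<mask>` token):
```python
from transformers import pipeline

# MyanBERTa is a RoBERTa-style model, so the mask token is "<mask>".
fill_mask = pipeline("fill-mask", model="UCSYNLP/MyanBERTa")
predictions = fill_mask("SEGMENTED MYANMAR TEXT WITH ONE <mask>")  # placeholder input
for p in predictions:
    print(p["token_str"], p["score"])
```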
```
Contributed by:
Aye Mya Hlaing
Win Pa Pa
```
|
vikaskapur/sentimental | 26350d3c1c7da3e013a27358048e05e95dfcea2c | 2022-07-29T01:02:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | vikaskapur | null | vikaskapur/sentimental | 9 | 1 | transformers | 12,826 | ---
license: apache-2.0
---
# Model Details
* The SENTIMENTAL classifier is trained to predict the likelihood that a comment will be perceived as positive or negative.
* BERT-based text classification (see the usage sketch below).
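A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the example sentence is illustrative, and the label names depend on the model's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="vikaskapur/sentimental")
print(classifier("The movie was absolutely wonderful!"))  # e.g. [{'label': ..., 'score': ...}]
```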
# Intended Use
* Intended to be used for a wide range of use cases such as supporting human moderation and extracting polarity of review comments.
* Not intended for fully automated moderation.
* Not intended to make judgments about specific individuals.
# Factors
* Identity terms referencing frequently positive and negative emotions.
# Metrics
* Accuracy, which measures the percentage of predictions that are correct (true positives and true negatives).
# Ethical Considerations
* TODO
# Quantitative Analyses
* TODO
# Training Data
* TODO
# Evaluation Data
* TODO
# Caveats and Recommendations
* TODO |
mughalk4/mBERT-German-Mono | e9a3dc7228aad6df167c5f986cf3eb7a6dc80680 | 2022-07-28T08:56:56.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | mughalk4 | null | mughalk4/mBERT-German-Mono | 9 | null | transformers | 12,827 | Entry not found |
SharpAI/mal_tls_w8a8 | 992755f23a968d55b33b8907129603535f218eea | 2022-07-27T18:39:58.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"transformers",
"generated_from_keras_callback",
"model-index"
]
| text-classification | false | SharpAI | null | SharpAI/mal_tls_w8a8 | 9 | null | transformers | 12,828 | ---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls_w8a8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal_tls_w8a8
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.10.3
|
sonoisa/t5-base-english-japanese | d2f9351ced10d462e5799b722e96888e650bf3f7 | 2022-07-28T11:33:27.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sonoisa | null | sonoisa/t5-base-english-japanese | 9 | null | transformers | 12,829 | Entry not found |
yanaiela/roberta-base-epoch_33 | ff6515b5d39f79989953462647cd40a8b3311cdc | 2022-07-29T22:51:06.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_33",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_33 | 9 | null | transformers | 12,830 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_33
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 33
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_33.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_33', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
Go2Heart/BERT_Mod_2 | 98a5820e5738981c272b60e9899aa68979c9bd4c | 2022-07-28T18:32:09.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Go2Heart | null | Go2Heart/BERT_Mod_2 | 9 | null | transformers | 12,831 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: BERT_Mod_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_Mod_2
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5659
- eval_accuracy: 0.9037
- eval_runtime: 0.3838
- eval_samples_per_second: 2271.724
- eval_steps_per_second: 143.285
- epoch: 0.01
- step: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yanaiela/roberta-base-epoch_59 | a2c3896b418a2e6bfd9361040714fb429b707433 | 2022-07-29T23:01:00.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_59",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_59 | 9 | null | transformers | 12,832 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_59
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 59
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_59.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_59', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_75 | 32aa6fded68fbb64dafe58be89c7f3b90724bf5b | 2022-07-29T23:07:09.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_75",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_75 | 9 | null | transformers | 12,833 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_75
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 75
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_75.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_75', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_76 | b995f7c1c0874554bc2a0ce1724e9a2ffca86fd6 | 2022-07-29T23:07:31.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_76",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_76 | 9 | null | transformers | 12,834 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_76
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 76
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_76.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_76', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_79 | 4989901e47c002691c5d384d61b01e5e9544f102 | 2022-07-29T23:08:37.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_79",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_79 | 9 | null | transformers | 12,835 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_79
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 79
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_79.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_79', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_80 | 216ad5fb17b9801075306679204f82609e82a06a | 2022-07-29T23:08:59.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_80",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_80 | 9 | null | transformers | 12,836 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_80
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 80
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
to enable studying the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_80.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps instead of 500K.
* We use only Wikipedia and the Book Corpus, as these corpora are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_80', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
ARTeLab/mbart-summarization-ilpost | 2ce77c4c97e4f03389178a92ca4507ad789118f3 | 2022-05-03T06:07:06.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"it",
"dataset:ARTeLab/ilpost",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
]
| summarization | false | ARTeLab | null | ARTeLab/mbart-summarization-ilpost | 8 | null | transformers | 12,837 | ---
tags:
- summarization
language:
- it
metrics:
- rouge
model-index:
- name: summarization_mbart_ilpost
results: []
datasets:
- ARTeLab/ilpost
---
# mbart_summarization_ilpost
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the IlPost dataset for abstractive summarization.
It achieves the following results:
- Loss: 2.3640
- Rouge1: 38.9101
- Rouge2: 21.384
- Rougel: 32.0517
- Rougelsum: 35.0743
- Gen Len: 39.8843
## Usage
```python
from transformers import MBartTokenizer, MBartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-ilpost")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-ilpost")
```
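The snippet above only loads the checkpoint; the following is a minimal sketch of actually producing a summary with the `transformers` summarization pipeline (the Italian input text and the generation settings are illustrative, not values from the paper):
```python
from transformers import pipeline

# Minimal sketch: the article text and generation settings below are illustrative.
summarizer = pipeline("summarization", model="ARTeLab/mbart-summarization-ilpost")
article = "Il governo ha annunciato oggi nuove misure economiche per sostenere famiglie e imprese nel prossimo anno."
print(summarizer(article, max_length=80, min_length=20, num_beams=4)[0]["summary_text"])
```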
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
# Citation
More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228)
```
@Article{info13050228,
AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
JOURNAL = {Information},
VOLUME = {13},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {228},
URL = {https://www.mdpi.com/2078-2489/13/5/228},
ISSN = {2078-2489},
ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
DOI = {10.3390/info13050228}
}
``` |
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh | 9d8fd0b4dd669e42ba21f3fb1579e1debfa856cd | 2022-02-21T20:21:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | ASCCCCCCCC | null | ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh | 8 | null | transformers | 12,838 | Entry not found |
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 | d2e02f3763d37568022bfdae9b07e4e6b27e81fa | 2022-02-22T02:51:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ASCCCCCCCC | null | ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 | 8 | null | transformers | 12,839 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-chinese-finetuned-amazon_zh_20000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1683
- Accuracy: 0.5224
- F1: 0.5194
## Model description
More information needed
## Intended uses & limitations
More information needed
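Although the card leaves the intended use undocumented, the checkpoint can presumably be loaded with the standard `transformers` text-classification pipeline; a minimal sketch (the example review is illustrative, and the labels are assumed to correspond to review-rating classes):
```python
from transformers import pipeline

# Hypothetical usage sketch: the label set is assumed, not documented in this card.
classifier = pipeline("text-classification", model="ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000")
print(classifier("这个产品质量很好,物流也很快。"))
```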
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2051 | 1.0 | 2500 | 1.1717 | 0.506 | 0.4847 |
| 1.0035 | 2.0 | 5000 | 1.1683 | 0.5224 | 0.5194 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AndreLiu1225/t5-news-summarizer | 6e9436cab957ec608164ab041bcbaeed12dcb357 | 2021-10-26T02:45:12.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | AndreLiu1225 | null | AndreLiu1225/t5-news-summarizer | 8 | null | transformers | 12,840 | Entry not found |
Andrianos/bert-base-greek-punctuation-prediction-finetuned | 941b2ef1d8e00b6febce231802d3320350837c8d | 2021-09-29T13:13:25.000Z | [
"pytorch",
"bert",
"token-classification",
"el",
"transformers",
"Punctuation Prediction",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | Andrianos | null | Andrianos/bert-base-greek-punctuation-prediction-finetuned | 8 | null | transformers | 12,841 | |
AnonymousSub/EManuals_BERT_copy_wikiqa | 9052af2f6004d7d15445d8780140dac2a093c89e | 2022-01-23T04:47:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | AnonymousSub | null | AnonymousSub/EManuals_BERT_copy_wikiqa | 8 | null | transformers | 12,842 | Entry not found |
BigSalmon/PhraseBerta | b04f42c2b60c83c15f21d9c7d9736a8478223794 | 2021-07-08T00:38:06.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | BigSalmon | null | BigSalmon/PhraseBerta | 8 | null | transformers | 12,843 | Entry not found |
BigSalmon/Points2 | ebf94b5e2d8a2512c8fe8cc1d3ddb4c583b5e4b0 | 2022-02-07T00:27:54.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/Points2 | 8 | null | transformers | 12,844 | Converting Points or Headlines to Paragraphs
Example Prompts:
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future.
###
-
```
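A minimal, hypothetical sketch of feeding the points-to-paragraph prompt above to the model with the standard `transformers` GPT-2 classes (the decoding settings are illustrative, not the author's recommendations); the essay-intro prompt format shown below works the same way:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical sketch: decoding settings are illustrative only.
tokenizer = GPT2Tokenizer.from_pretrained("BigSalmon/Points2")
model = GPT2LMHeadModel.from_pretrained("BigSalmon/Points2")

prompt = "###\n- declining viewership facing the nba.\n- does not have to be this way.\nText:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=150, do_sample=True, top_p=0.9, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```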
```
Essay Intro (Sega Centers Classics): unyielding in its insistence on consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. this is a task that not even the most devoted fan could have foreseen.
***
Essay Intro (Blizzard Shows Video Games Are An Art): universally adored, video games have come to be revered not only as interactive diversions, but as artworks. a firm believer in this doctrine, blizzard actively works to further the craft of storytelling in their respective titles.
***
Essay Intro (What Happened To Linux): chancing upon a linux user is a rare occurrence in the present day. once a mainstay, the brand has come to only be seen in the hands of the most ardent of its followers.
``` |
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | 02cca83f111f9c6c93001c229442c6d064a4723d | 2021-10-17T11:05:21.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | 8 | null | transformers | 12,845 | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-MSA DID NADI Model
## Model description
**CAMeLBERT-MSA DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.9242768287658691},
{'label': 'Saudi_Arabia', 'score': 0.3400847613811493}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Cathy/reranking_model | ebb809efbe29b7e4fb9a181200e6062e465b576e | 2021-09-05T09:55:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Cathy | null | Cathy/reranking_model | 8 | null | transformers | 12,846 | Entry not found |
CenIA/bert-base-spanish-wwm-uncased-finetuned-pawsx | 2ae8fd64efb54e83a98d2978783e219cf626209b | 2022-01-04T13:16:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-uncased-finetuned-pawsx | 8 | null | transformers | 12,847 | Entry not found |
Chun/w-en2zh-hsk | 37e52cc5c85f82331497a8e23134f92e6b9427e1 | 2021-08-25T13:14:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Chun | null | Chun/w-en2zh-hsk | 8 | null | transformers | 12,848 | Entry not found |
CoffeeAddict93/gpt1-call-of-the-wild | 17df705a925e82fd1b57ba101f3a9ce65dbf403d | 2021-12-02T03:23:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | CoffeeAddict93 | null | CoffeeAddict93/gpt1-call-of-the-wild | 8 | null | transformers | 12,849 | Entry not found |
ComCom/gpt2-large | 744ecfda34efe6a469f7e0eadbb6edacf401864e | 2021-11-15T07:26:07.000Z | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
]
| feature-extraction | false | ComCom | null | ComCom/gpt2-large | 8 | null | transformers | 12,850 | This model was taken from [this site](https://huggingface.co/gpt2-medium).
It is used by the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
|
Davlan/m2m100_418M-eng-yor-mt | f1511d6ef272d4c1e277f297df3b818687aa24d3 | 2022-03-29T09:21:53.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Davlan | null | Davlan/m2m100_418M-eng-yor-mt | 8 | null | transformers | 12,851 |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# m2m100_418M-eng-yor-mt
## Model description
**m2m100_418M-eng-yor-mt** is a **machine translation** model from English language to Yorùbá language based on a fine-tuned facebook/m2m100_418M model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *facebook/m2m100_418M* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt).
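#### How to use
A minimal sketch of translating with this checkpoint, assuming it keeps the standard M2M100 tokenizer and language codes (`en` → `yo`); the example sentence is illustrative:
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Assumes the fine-tuned checkpoint keeps the standard M2M100 language codes.
tokenizer = M2M100Tokenizer.from_pretrained("Davlan/m2m100_418M-eng-yor-mt")
model = M2M100ForConditionalGeneration.from_pretrained("Davlan/m2m100_418M-eng-yor-mt")

tokenizer.src_lang = "en"
encoded = tokenizer("Where is the nearest hospital?", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("yo"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```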
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset.
## Training procedure
This model was trained on an NVIDIA V100 GPU.
## Eval results on Test set (BLEU score)
Fine-tuning m2m100_418M achieves **13.39 BLEU** on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
|
DeltaHub/lora_t5-base_mrpc | 8d87d8deef82d09e788e1f0867fbc27a8dbcb404 | 2022-02-14T06:32:18.000Z | [
"pytorch",
"transformers"
]
| null | false | DeltaHub | null | DeltaHub/lora_t5-base_mrpc | 8 | null | transformers | 12,852 | This checkpoint needs to be used together with OpenDelta:
```
from transformers import AutoModelForSeq2SeqLM
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
from opendelta import AutoDeltaModel
delta = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=t5)
delta.log()
```
|
EMBEDDIA/est-roberta | 5b36f40096f68a25c6a47376b0715218687ab8f8 | 2021-11-29T12:17:46.000Z | [
"pytorch",
"camembert",
"fill-mask",
"et",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | EMBEDDIA | null | EMBEDDIA/est-roberta | 8 | 2 | transformers | 12,853 | ---
language:
- et
license: cc-by-sa-4.0
---
# Usage
Load in transformers library with:
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")
```
# Est-RoBERTa
The Est-RoBERTa model is a monolingual Estonian BERT-like model. It is closely related to the French CamemBERT model (https://camembert-model.fr/). The Estonian corpora used for training the model have 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens.
Est-RoBERTa was trained for 40 epochs.
|
Elron/bleurt-tiny-128 | 1607b0b88c88390663970418ac61d4ff95ecf594 | 2021-10-04T13:27:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Elron | null | Elron/bleurt-tiny-128 | 8 | 1 | transformers | 12,854 | ## BLEURT
PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by
Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.
The code for the model conversion originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).
## Usage Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512")
model.eval()
references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]
with torch.no_grad():
scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()
print(scores) # tensor([-1.0563, -0.3004])
```
|
Finnish-NLP/convbert-base-generator-finnish | 4e05e88b590ad06f57c36df4410e5475387c30dc | 2022-06-13T16:15:42.000Z | [
"pytorch",
"convbert",
"fill-mask",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:2008.02496",
"transformers",
"finnish",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Finnish-NLP | null | Finnish-NLP/convbert-base-generator-finnish | 8 | null | transformers | 12,855 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- convbert
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
widget:
- text: "Moikka olen [MASK] kielimalli."
---
# ConvBERT for Finnish
Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
[this paper](https://arxiv.org/abs/2008.02496)
and first released at [this page](https://github.com/yitu-opensource/ConvBert).
**Note**: this model is the ConvBERT generator model intended to be used for the fill-mask task. The ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish)
## Model description
Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN).
This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs.
Compared to BERT and ELECTRA models, ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning.
## Intended uses & limitations
You can use this generator model mainly just for the fill-mask task. For other tasks, check the [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish) model instead.
### How to use
Here is how to use this model directly with a pipeline for fill-mask task:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Finnish-NLP/convbert-base-generator-finnish')
>>> unmasker("Moikka olen [MASK] kielimalli.")
[{'score': 0.08341152966022491,
'token': 4619,
'token_str': 'suomalainen',
'sequence': 'Moikka olen suomalainen kielimalli.'},
{'score': 0.02831297740340233,
'token': 25583,
'token_str': 'ranskalainen',
'sequence': 'Moikka olen ranskalainen kielimalli.'},
{'score': 0.027857203036546707,
'token': 37714,
'token_str': 'kiinalainen',
'sequence': 'Moikka olen kiinalainen kielimalli.'},
{'score': 0.027701903134584427,
'token': 21614,
'token_str': 'ruotsalainen',
'sequence': 'Moikka olen ruotsalainen kielimalli.'},
{'score': 0.026388710364699364,
'token': 591,
'token_str': 'hyvä',
'sequence': 'Moikka olen hyvä kielimalli.'}]
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish ConvBERT model was pretrained on the combination of five datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps. The optimizer used was a AdamW with learning rate 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after.
Training code was from the official [ConvBERT repository](https://github.com/yitu-opensource/ConvBert) and also some instructions was used from [here](https://github.com/stefan-it/turkish-bert/blob/master/convbert/CHEATSHEET.md).
## Evaluation results
For evaluation results, check the [Finnish-NLP/convbert-base-finnish](https://huggingface.co/Finnish-NLP/convbert-base-finnish) model repository instead.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
FremyCompany/xls-r-nl-v1-cv8-lm | 2eea72fc09cc761ada387cfe7631738b21e14618 | 2022-03-23T18:34:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:multilingual_librispeech",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"vl",
"model-index"
]
| automatic-speech-recognition | false | FremyCompany | null | FremyCompany/xls-r-nl-v1-cv8-lm | 8 | 2 | transformers | 12,856 | ---
language:
- nl
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- nl
- robust-speech-event
- vl
datasets:
- mozilla-foundation/common_voice_8_0
- multilingual_librispeech
model-index:
- name: xls-r-nl-v1-cv8-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 6.69
- name: Test CER
type: cer
value: 1.97
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 20.79
- name: Test CER
type: cer
value: 10.72
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 19.71
---
# XLS-R-based CTC model with 5-gram language model from Common Voice
This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset (see details below), on which a small 5-gram language model is added based on the Common Voice training corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.0669
- Cer: 0.0197
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the final result.
To improve accuracy, a beam decoder is used; the beams are scored based on a 5-gram language model trained on the Common Voice 8 corpus.
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
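A minimal usage sketch with the `transformers` ASR pipeline; with `pyctcdecode` and `kenlm` installed, the bundled 5-gram language model should be used for decoding (the audio path is a placeholder for a 16 kHz recording):
```python
from transformers import pipeline

# Sketch only: "recording.wav" is a placeholder for a 16 kHz Dutch audio file.
asr = pipeline("automatic-speech-recognition", model="FremyCompany/xls-r-nl-v1-cv8-lm")
print(asr("recording.wav")["text"])
```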
## Training and evaluation data
0. The model was initialized with [the 2B parameter model from Facebook](facebook/wav2vec2-xls-r-2b-22-to-16).
1. The model was then trained for `2000` iterations (batch size 32) on [the `dutch` configuration of the `multilingual_librispeech` dataset](https://huggingface.co/datasets/multilingual_librispeech/).
2. The model was then trained for `2000` iterations (batch size 32) on [the `nl` configuration of the `common_voice_8_0` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
3. The model was then trained for `6000` iterations (batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
4. The model was then trained for `6000` iterations (batch size 32) on [the `nl` configuration of the `common_voice_8_0` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Geotrend/distilbert-base-hi-cased | 99e7e25d2c7765161c05eb50fd297069c4672b73 | 2021-08-16T13:23:23.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"hi",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/distilbert-base-hi-cased | 8 | 1 | transformers | 12,857 | ---
language: hi
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-hi-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-hi-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-hi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Ghana-NLP/distilabena-base-v2-akuapem-twi-cased | 1924d0de61e611f7523a241716c047b65c11c4ef | 2020-10-22T06:08:50.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Ghana-NLP | null | Ghana-NLP/distilabena-base-v2-akuapem-twi-cased | 8 | null | transformers | 12,858 | Entry not found |
Giannipinelli/xlm-roberta-base-finetuned-marc-en | ee67fd1aa4d1414d1581a0289c477ddfdcc32ea3 | 2021-12-16T14:34:58.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Giannipinelli | null | Giannipinelli/xlm-roberta-base-finetuned-marc-en | 8 | null | transformers | 12,859 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9161
- Mae: 0.4634
## Model description
More information needed
## Intended uses & limitations
More information needed
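The intended use is not documented, but given the `amazon_reviews_multi` dataset and the MAE metric, the model presumably predicts review star ratings; a minimal, hypothetical usage sketch:
```python
from transformers import pipeline

# Hypothetical sketch: labels are assumed to correspond to review star ratings.
classifier = pipeline("text-classification", model="Giannipinelli/xlm-roberta-base-finetuned-marc-en")
print(classifier("The product arrived broken and support never answered my emails."))
```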
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1217 | 1.0 | 235 | 0.9396 | 0.4878 |
| 0.9574 | 2.0 | 470 | 0.9161 | 0.4634 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Harveenchadha/odia_large_wav2vec2 | f93005d0bee3f91bf0d1bd6fc45948f0e6c215f1 | 2022-03-23T18:34:27.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:Harveenchadha/indic-voice",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/odia_large_wav2vec2 | 8 | 2 | transformers | 12,860 | ---
license: apache-2.0
language:
- or
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_7_0
- or
- robust-speech-event
datasets:
- Harveenchadha/indic-voice
model-index:
- name: Hindi Large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: or
metrics:
- name: Test WER
type: wer
value: 54.26
- name: Test CER
type: cer
value: 11.36
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-7.0
type: mozilla-foundation/common_voice_7_0
args: or
metrics:
- name: Test WER
type: wer
value: 53.58
- name: Test CER
type: cer
value: 11.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-8.0
type: mozilla-foundation/common_voice_8_0
args: or
metrics:
- name: Test WER
type: wer
value: 55.26
- name: Test CER
type: cer
value: 13.01
---
|
Helsinki-NLP/opus-mt-af-ru | e605c05ed4fc8945f81b83d65c5a8762fb7a2ed4 | 2021-01-18T07:46:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-af-ru | 8 | null | transformers | 12,861 | ---
language:
- af
- ru
tags:
- translation
license: apache-2.0
---
### afr-rus
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
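## How to use
As a standard Marian-based OPUS-MT checkpoint, it can be loaded with the usual `transformers` Marian classes; a minimal sketch (the Afrikaans example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

# Illustrative Afrikaans input ("The weather is nice today.").
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-af-ru")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-af-ru")

batch = tokenizer(["Die weer is vandag mooi."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
```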
### System Info:
- hf_name: afr-rus
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: {'afr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- short_pair: af-ru
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213.0
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- long_pair: afr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bcl-de | 526eb0407d3feeccb096471b117e406d624aab42 | 2021-09-09T21:26:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bcl",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bcl-de | 8 | null | transformers | 12,862 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bcl-de
* source languages: bcl
* target languages: de
* OPUS readme: [bcl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.de | 30.3 | 0.510 |
|
Helsinki-NLP/opus-mt-bem-fr | 92b41995590a6bf5d7d242725d007912fa426e07 | 2021-09-09T21:27:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bem",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bem-fr | 8 | null | transformers | 12,863 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bem-fr
* source languages: bem
* target languages: fr
* OPUS readme: [bem-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.fr | 25.0 | 0.417 |
|
Helsinki-NLP/opus-mt-ber-fr | 3b231ec4923f2342a1c3a382086846783a4c5f67 | 2021-09-09T21:27:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ber",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ber-fr | 8 | null | transformers | 12,864 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ber-fr
* source languages: ber
* target languages: fr
* OPUS readme: [ber-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.fr | 60.2 | 0.754 |
|
Helsinki-NLP/opus-mt-bg-tr | c44ddb80dfc0e353bb5574b1bc937aecdb983281 | 2021-01-18T07:51:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bg-tr | 8 | null | transformers | 12,865 | ---
language:
- bg
- tr
tags:
- translation
license: apache-2.0
---
### bul-tur
* source group: Bulgarian
* target group: Turkish
* OPUS readme: [bul-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md)
* model: transformer
* source language(s): bul bul_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.tur | 40.9 | 0.687 |
### System Info:
- hf_name: bul-tur
- source_languages: bul
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'tr']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: tur
- short_pair: bg-tr
- chrF2_score: 0.687
- bleu: 40.9
- brevity_penalty: 0.946
- ref_len: 4948.0
- src_name: Bulgarian
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: tr
- prefer_old: False
- long_pair: bul-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bi-es | 40001c75cc73df30ac2ffe45d8c3f224ee17781b | 2021-09-09T21:27:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bi",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bi-es | 8 | null | transformers | 12,866 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bi-es
* source languages: bi
* target languages: es
* OPUS readme: [bi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.es | 21.1 | 0.388 |
|
Helsinki-NLP/opus-mt-cs-sv | ab967fe66d1c0d4f9403ae0b4c97c06ae8947b89 | 2021-09-09T21:29:33.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cs-sv | 8 | null | transformers | 12,867 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-cs-sv
* source languages: cs
* target languages: sv
* OPUS readme: [cs-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.cs.sv | 30.6 | 0.527 |
|
Helsinki-NLP/opus-mt-da-eo | c67fcfa49c349cdac1665b60f9f3823437b3da2b | 2021-01-18T07:56:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-da-eo | 8 | null | transformers | 12,868 | ---
language:
- da
- eo
tags:
- translation
license: apache-2.0
---
### dan-epo
* source group: Danish
* target group: Esperanto
* OPUS readme: [dan-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-epo/README.md)
* model: transformer-align
* source language(s): dan
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan.epo | 23.6 | 0.432 |
### System Info:
- hf_name: dan-epo
- source_languages: dan
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dan-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'eo']
- src_constituents: {'dan'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dan-epo/opus-2020-06-16.test.txt
- src_alpha3: dan
- tgt_alpha3: epo
- short_pair: da-eo
- chrF2_score: 0.432
- bleu: 23.6
- brevity_penalty: 0.942
- ref_len: 69856.0
- src_name: Danish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: da
- tgt_alpha2: eo
- prefer_old: False
- long_pair: dan-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-de-guw | 7a441fe0e9e7c4c430889b46b3b4541005c93bb1 | 2021-09-09T21:31:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"guw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-guw | 8 | null | transformers | 12,869 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-guw
* source languages: de
* target languages: guw
* OPUS readme: [de-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-guw/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-guw/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-guw/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.guw | 27.1 | 0.472 |
|
Helsinki-NLP/opus-mt-de-loz | efc9fe11206c281704056c9c3eda0b42f1cf43a0 | 2021-09-09T21:32:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"loz",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-loz | 8 | null | transformers | 12,870 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-loz
* source languages: de
* target languages: loz
* OPUS readme: [de-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-loz/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.loz | 27.7 | 0.480 |
|
Helsinki-NLP/opus-mt-de-mt | 0d71c2c09e3838d7276288da102f7e66d2d24032 | 2021-09-09T21:32:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"mt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-mt | 8 | null | transformers | 12,871 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-mt
* source languages: de
* target languages: mt
* OPUS readme: [de-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-mt/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.mt | 25.0 | 0.436 |
|
Helsinki-NLP/opus-mt-de-nso | fbd9a40fa66f610b52855ad16263d4ea32c8bd7c | 2021-09-09T21:32:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"nso",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-nso | 8 | null | transformers | 12,872 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-nso
* source languages: de
* target languages: nso
* OPUS readme: [de-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.nso | 31.1 | 0.519 |
|
Helsinki-NLP/opus-mt-efi-fi | 02877c2ef68a205047cde71b4b376ffcc565e4a7 | 2021-09-09T21:33:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"efi",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-efi-fi | 8 | null | transformers | 12,873 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-efi-fi
* source languages: efi
* target languages: fi
* OPUS readme: [efi-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.fi | 23.6 | 0.450 |
|
Helsinki-NLP/opus-mt-en-guw | 3024f1f51a9b2295d3dd4fc265dac44656f6c4df | 2021-09-09T21:35:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"guw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-guw | 8 | null | transformers | 12,874 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-guw
* source languages: en
* target languages: guw
* OPUS readme: [en-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.guw | 45.7 | 0.634 |
|
Helsinki-NLP/opus-mt-en-kj | 366e494584ff69addf0d5cc91cff81da18ecd81f | 2021-09-09T21:36:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"kj",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-kj | 8 | null | transformers | 12,875 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-kj
* source languages: en
* target languages: kj
* OPUS readme: [en-kj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kj | 29.6 | 0.539 |
|
Helsinki-NLP/opus-mt-en-tut | fec86cc232cad7b969bb71a5929220c940272db9 | 2021-01-18T08:18:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tut",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-tut | 8 | null | transformers | 12,876 | ---
language:
- en
- tut
tags:
- translation
license: apache-2.0
---
### eng-tut
* source group: English
* target group: Altaic languages
* OPUS readme: [eng-tut](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tut/README.md)
* model: transformer
* source language(s): eng
* target language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum mon nog ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn xal
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the usage sketch after this list
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.eval.txt)
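A minimal sketch of how the required target-language token might be supplied (assuming `transformers` and `sentencepiece` are installed; `>>tur<<` for Turkish is chosen here purely as an example id):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-tut"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token (here >>tur<< = Turkish) to each source sentence
src = [">>tur<< How are you today?"]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```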
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-engtur.eng.tur | 10.4 | 0.438 |
| newstest2016-entr-engtur.eng.tur | 9.1 | 0.414 |
| newstest2017-entr-engtur.eng.tur | 9.5 | 0.414 |
| newstest2018-entr-engtur.eng.tur | 9.5 | 0.415 |
| Tatoeba-test.eng-aze.eng.aze | 27.2 | 0.580 |
| Tatoeba-test.eng-bak.eng.bak | 5.8 | 0.298 |
| Tatoeba-test.eng-chv.eng.chv | 4.6 | 0.301 |
| Tatoeba-test.eng-crh.eng.crh | 6.5 | 0.342 |
| Tatoeba-test.eng-kaz.eng.kaz | 11.8 | 0.360 |
| Tatoeba-test.eng-kir.eng.kir | 24.6 | 0.499 |
| Tatoeba-test.eng-kjh.eng.kjh | 2.2 | 0.052 |
| Tatoeba-test.eng-kum.eng.kum | 8.0 | 0.229 |
| Tatoeba-test.eng-mon.eng.mon | 10.3 | 0.362 |
| Tatoeba-test.eng.multi | 19.5 | 0.451 |
| Tatoeba-test.eng-nog.eng.nog | 1.5 | 0.117 |
| Tatoeba-test.eng-ota.eng.ota | 0.2 | 0.035 |
| Tatoeba-test.eng-sah.eng.sah | 0.7 | 0.080 |
| Tatoeba-test.eng-tat.eng.tat | 10.8 | 0.320 |
| Tatoeba-test.eng-tuk.eng.tuk | 5.6 | 0.323 |
| Tatoeba-test.eng-tur.eng.tur | 34.2 | 0.623 |
| Tatoeba-test.eng-tyv.eng.tyv | 8.1 | 0.192 |
| Tatoeba-test.eng-uig.eng.uig | 0.1 | 0.158 |
| Tatoeba-test.eng-uzb.eng.uzb | 4.2 | 0.298 |
| Tatoeba-test.eng-xal.eng.xal | 0.1 | 0.061 |
### System Info:
- hf_name: eng-tut
- source_languages: eng
- target_languages: tut
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tut/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tut']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: tut
- short_pair: en-tut
- chrF2_score: 0.451
- bleu: 19.5
- brevity_penalty: 1.0
- ref_len: 57472.0
- src_name: English
- tgt_name: Altaic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: tut
- prefer_old: False
- long_pair: eng-tut
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-zls | 3eaefff94837f5c83794dc3def45b8ecb0c78dfe | 2021-01-18T08:19:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"hr",
"mk",
"bg",
"sl",
"zls",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-zls | 8 | null | transformers | 12,877 | ---
language:
- en
- hr
- mk
- bg
- sl
- zls
tags:
- translation
license: apache-2.0
---
### eng-zls
* source group: English
* target group: South Slavic languages
* OPUS readme: [eng-zls](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zls/README.md)
* model: transformer
* source language(s): eng
* target language(s): bos_Latn bul bul_Latn hrv mkd slv srp_Cyrl srp_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the usage sketch after this list
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.eval.txt)
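As with the other multilingual models, the target language is selected per sentence with the `>>id<<` token; a minimal sketch (assuming `transformers` and `sentencepiece` are installed) routing one English sentence to two different targets:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-zls"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Route the same English sentence to Bulgarian (>>bul<<) and Slovenian (>>slv<<)
src = [">>bul<< The weather is nice today.",
       ">>slv<< The weather is nice today."]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
for line in tokenizer.batch_decode(generated, skip_special_tokens=True):
    print(line)
```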
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-bul.eng.bul | 47.6 | 0.657 |
| Tatoeba-test.eng-hbs.eng.hbs | 40.7 | 0.619 |
| Tatoeba-test.eng-mkd.eng.mkd | 45.2 | 0.642 |
| Tatoeba-test.eng.multi | 42.7 | 0.622 |
| Tatoeba-test.eng-slv.eng.slv | 17.9 | 0.351 |
### System Info:
- hf_name: eng-zls
- source_languages: eng
- target_languages: zls
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zls/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'hr', 'mk', 'bg', 'sl', 'zls']
- src_constituents: {'eng'}
- tgt_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zls
- short_pair: en-zls
- chrF2_score: 0.622
- bleu: 42.7
- brevity_penalty: 0.969
- ref_len: 64788.0
- src_name: English
- tgt_name: South Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zls
- prefer_old: False
- long_pair: eng-zls
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-eo-bg | 252bb54a45efeb71472030aeeebd8a83b8e07a9c | 2021-01-18T08:19:56.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"bg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-bg | 8 | null | transformers | 12,878 | ---
language:
- eo
- bg
tags:
- translation
license: apache-2.0
---
### epo-bul
* source group: Esperanto
* target group: Bulgarian
* OPUS readme: [epo-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.bul | 19.0 | 0.395 |
### System Info:
- hf_name: epo-bul
- source_languages: epo
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'bg']
- src_constituents: {'epo'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: bul
- short_pair: eo-bg
- chrF2_score: 0.395
- bleu: 19.0
- brevity_penalty: 0.891
- ref_len: 3961.0
- src_name: Esperanto
- tgt_name: Bulgarian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: bg
- prefer_old: False
- long_pair: epo-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-eo-hu | 6788309695f04b63b9490f117851157c11082d66 | 2021-01-18T08:20:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"hu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-hu | 8 | null | transformers | 12,879 | ---
language:
- eo
- hu
tags:
- translation
license: apache-2.0
---
### epo-hun
* source group: Esperanto
* target group: Hungarian
* OPUS readme: [epo-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hun/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): hun
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.hun | 12.8 | 0.333 |
### System Info:
- hf_name: epo-hun
- source_languages: epo
- target_languages: hun
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hun/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'hu']
- src_constituents: {'epo'}
- tgt_constituents: {'hun'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: hun
- short_pair: eo-hu
- chrF2_score: 0.333
- bleu: 12.8
- brevity_penalty: 0.914
- ref_len: 65704.0
- src_name: Esperanto
- tgt_name: Hungarian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: hu
- prefer_old: False
- long_pair: epo-hun
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-eo-nl | 38d39d812b627c26cf888123bdc812a55ad6aa21 | 2021-01-18T08:20:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-eo-nl | 8 | null | transformers | 12,880 | ---
language:
- eo
- nl
tags:
- translation
license: apache-2.0
---
### epo-nld
* source group: Esperanto
* target group: Dutch
* OPUS readme: [epo-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-nld/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): nld
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.nld | 15.3 | 0.337 |
### System Info:
- hf_name: epo-nld
- source_languages: epo
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'nl']
- src_constituents: {'epo'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: nld
- short_pair: eo-nl
- chrF2_score: 0.337
- bleu: 15.3
- brevity_penalty: 0.864
- ref_len: 78770.0
- src_name: Esperanto
- tgt_name: Dutch
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: nl
- prefer_old: False
- long_pair: epo-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-bi | c85dab4bea50d9ff6773b495310394f2a1e1f1c2 | 2021-09-09T21:41:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"bi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-bi | 8 | null | transformers | 12,881 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-bi
* source languages: es
* target languages: bi
* OPUS readme: [es-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bi | 28.0 | 0.473 |
|
Helsinki-NLP/opus-mt-es-ceb | 4918dbfee87fcb262e14948c9d571bcb0b1a808f | 2021-09-09T21:41:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ceb",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ceb | 8 | null | transformers | 12,882 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ceb
* source languages: es
* target languages: ceb
* OPUS readme: [es-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ceb | 33.9 | 0.564 |
|
Helsinki-NLP/opus-mt-es-ha | f14932e66e3feb63b1c89d39fefbbbd923dd499f | 2021-09-09T21:42:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ha",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ha | 8 | null | transformers | 12,883 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ha
* source languages: es
* target languages: ha
* OPUS readme: [es-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ha/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ha/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ha/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ha | 20.6 | 0.421 |
|
Helsinki-NLP/opus-mt-es-ig | 91c4365037baafd6cfe0859dc454913820e07338 | 2021-09-09T21:43:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ig",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ig | 8 | null | transformers | 12,884 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ig
* source languages: es
* target languages: ig
* OPUS readme: [es-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ig | 27.0 | 0.434 |
|
Helsinki-NLP/opus-mt-es-loz | c4b6426bef04b16018ccd43269a9f77426f49262 | 2021-09-09T21:43:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"loz",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-loz | 8 | null | transformers | 12,885 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-loz
* source languages: es
* target languages: loz
* OPUS readme: [es-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.loz | 28.6 | 0.493 |
|
Helsinki-NLP/opus-mt-es-ty | 687d74b5eeee5b1f41ecdf23df2f749b32ef5bd9 | 2021-09-09T21:45:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ty",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ty | 8 | null | transformers | 12,886 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ty
* source languages: es
* target languages: ty
* OPUS readme: [es-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ty/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ty/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ty/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ty | 37.3 | 0.544 |
|
Helsinki-NLP/opus-mt-es-ve | c4d885237a92e682f69945dea02cca43eae8de61 | 2021-09-09T21:45:30.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ve",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ve | 8 | null | transformers | 12,887 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ve
* source languages: es
* target languages: ve
* OPUS readme: [es-ve](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ve/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ve/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ve/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ve/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ve | 21.7 | 0.440 |
|
Helsinki-NLP/opus-mt-es-yo | 0f55e0da64d6a1be87e3bb51ce722717df9032c9 | 2021-09-09T21:45:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"yo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-yo | 8 | null | transformers | 12,888 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-yo
* source languages: es
* target languages: yo
* OPUS readme: [es-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-yo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-yo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-yo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-yo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.yo | 22.3 | 0.387 |
|
Helsinki-NLP/opus-mt-fi-cs | 4e08d5cb173b64798537a5eea901f65fd1cc2311 | 2021-09-09T21:46:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"cs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-cs | 8 | null | transformers | 12,889 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-cs
* source languages: fi
* target languages: cs
* OPUS readme: [fi-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-cs/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cs/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cs/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.cs | 25.0 | 0.470 |
|
Helsinki-NLP/opus-mt-fi-efi | 2864851fa8969c2f8d5c40e7d8eb0022fffc8986 | 2021-09-09T21:47:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"efi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-efi | 8 | null | transformers | 12,890 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-efi
* source languages: fi
* target languages: efi
* OPUS readme: [fi-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-efi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-efi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-efi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.efi | 26.6 | 0.482 |
|
Helsinki-NLP/opus-mt-fi-ha | 5355b9ba9d301882f3d71363566af054c11ca026 | 2021-09-09T21:48:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"ha",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-ha | 8 | null | transformers | 12,891 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-ha
* source languages: fi
* target languages: ha
* OPUS readme: [fi-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ha/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ha/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ha/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.ha | 24.2 | 0.461 |
|
Helsinki-NLP/opus-mt-fi-ilo | bddd0bdefbc904280d4af63478cab2c7f98dc8c4 | 2021-09-09T21:48:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"ilo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-ilo | 8 | null | transformers | 12,892 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-ilo
* source languages: fi
* target languages: ilo
* OPUS readme: [fi-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ilo/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ilo/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ilo/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.ilo | 32.1 | 0.558 |
|
Helsinki-NLP/opus-mt-fi-lu | e8e8962d29890190775e07fe0910f46b069c8699 | 2021-09-09T21:49:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"lu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-lu | 8 | null | transformers | 12,893 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-lu
* source languages: fi
* target languages: lu
* OPUS readme: [fi-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-lu/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-lu/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-lu/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.lu | 22.9 | 0.475 |
|
Helsinki-NLP/opus-mt-fi-mk | 1d414d7eb137cb3a9fd5dfac710ba8256b8f3256 | 2021-09-09T21:49:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"mk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-mk | 8 | null | transformers | 12,894 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-mk
* source languages: fi
* target languages: mk
* OPUS readme: [fi-mk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-mk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-mk/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mk/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mk/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.mk | 28.9 | 0.501 |
|
Helsinki-NLP/opus-mt-fi-ro | d87f554c39c7c331debf34a58b7cdfb1b0c0f5ea | 2021-09-09T21:50:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"ro",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-ro | 8 | null | transformers | 12,895 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-ro
* source languages: fi
* target languages: ro
* OPUS readme: [fi-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ro/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ro/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ro/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.ro | 27.0 | 0.490 |
|
Helsinki-NLP/opus-mt-fi-rw | 000378e42cb4f6d56c73fe1d0bc4adb06d6f4436 | 2021-09-09T21:50:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"rw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-rw | 8 | null | transformers | 12,896 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-rw
* source languages: fi
* target languages: rw
* OPUS readme: [fi-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-rw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-rw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-rw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.rw | 25.3 | 0.509 |
|
Helsinki-NLP/opus-mt-fi-tpi | cee023903a441b870e5bd13e9a3190ad448e6256 | 2021-09-09T21:51:36.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"tpi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-tpi | 8 | null | transformers | 12,897 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-tpi
* source languages: fi
* target languages: tpi
* OPUS readme: [fi-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tpi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tpi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tpi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.tpi | 30.5 | 0.504 |
|
Helsinki-NLP/opus-mt-fi-tr | ad13d47cb2b348ad05dfc36dce581796bf8bd415 | 2021-09-09T21:51:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-tr | 8 | null | transformers | 12,898 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-tr
* source languages: fi
* target languages: tr
* OPUS readme: [fi-tr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tr/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tr/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tr/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.tr | 31.6 | 0.619 |
|
Helsinki-NLP/opus-mt-fr-bzs | 3e9860fa3b1df5054d99a94d6c4683e0c4aee8c6 | 2021-09-09T21:53:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"bzs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-bzs | 8 | null | transformers | 12,899 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-bzs
* source languages: fr
* target languages: bzs
* OPUS readme: [fr-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bzs | 30.2 | 0.477 |
|