modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
voidful/unit-mbart-large | b71e7bf16b3963b51ea23ec714d799e578d8f39f | 2022-06-20T10:41:38.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | voidful | null | voidful/unit-mbart-large | 14 | null | transformers | 10,000 | Entry not found |
AIRI-Institute/gena-lm-bert-base | ecb1769f4b0d0006a211854ebed67f16084fc188 | 2022-06-22T11:10:19.000Z | [
"pytorch",
"bert",
"transformers",
"dna",
"human_genome"
]
| null | false | AIRI-Institute | null | AIRI-Institute/gena-lm-bert-base | 14 | 11 | transformers | 10,001 | ---
tags:
- dna
- human_genome
---
# GENA-LM
GENA-LM is a transformer masked language model trained on human DNA sequences.
Differences between GENA-LM and DNABERT:
- BPE tokenization instead of k-mers;
- input sequence size of about 3,000 nucleotides (512 BPE tokens), compared to 510 nucleotides for DNABERT;
- pre-training on the T2T human genome assembly vs. GRCh38.p13.
Source code and data: https://github.com/AIRI-Institute/GENA_LM
## Examples
### How to load the model to fine-tune it on classification task
```python
from src.gena_lm.modeling_bert import BertForSequenceClassification
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('AIRI-Institute/gena-lm-bert-base')
model = BertForSequenceClassification.from_pretrained('AIRI-Institute/gena-lm-bert-base')
```
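Continuing from the snippet above, here is a minimal sketch of how a DNA string could be tokenized and passed through the loaded model. The sequence is an arbitrary example, and the freshly loaded classification head is randomly initialized, so the predicted class is meaningless until the model has been fine-tuned.
```python
import torch

# Arbitrary example sequence; substitute any DNA string.
dna_sequence = "ATGGTGCACCTGACTCCTGAGGAGAAGTCTGCC"
inputs = tokenizer(dna_sequence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # class index from the (not yet fine-tuned) head
```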
## Model description
The GENA-LM model is trained in a masked language model (MLM) fashion, following the methods proposed in the BigBird paper, with 15% of tokens masked. The model config for `gena-lm-bert-base` is similar to bert-base:
- 512 Maximum sequence length
- 12 Layers, 12 Attention heads
- 768 Hidden size
- 32k Vocabulary size
We pre-trained `gena-lm-bert-base` using the latest T2T human genome assembly (https://www.ncbi.nlm.nih.gov/assembly/GCA_009914755.3/). Pre-training was performed for 500,000 iterations with the same parameters as in BigBird, except that the sequence length was 512 tokens and we used pre-layer normalization in the Transformer.
## Downstream tasks
Currently, the gena-lm-bert-base model has been fine-tuned and tested on the promoter prediction task. Its performance is comparable to previous SOTA results. We plan to fine-tune and release models for other downstream tasks in the near future.
### Fine-tuning GENA-LM on our data and scoring
After fine-tuning gena-lm-bert-base on the promoter prediction dataset, the following results were achieved:
| model | seq_len (bp) | F1 |
|--------------------------|--------------|-------|
| DeePromoter | 300 | 95.60 |
| GENA-LM bert-base (ours) | 2000 | 95.72 |
| BigBird | 16000 | 99.90 |
We can conclude that our model achieves performance comparable to previously published results for the promoter prediction task.
|
Jeevesh8/std_0pnt2_bert_ft_cola-44 | 5fdf77c5216fa561a7372ec1929511b8f8158621 | 2022-06-22T14:55:53.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-44 | 14 | null | transformers | 10,002 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-72 | da0fb226a007d1861184a90afc7b75f8a8228430 | 2022-06-21T13:28:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-72 | 14 | null | transformers | 10,003 | Entry not found |
Sayan01/tiny-bert-mnli-distilled | d5ee2a659cd28ab847b2ab1da788b863a3efdc5c | 2022-07-07T12:21:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Sayan01 | null | Sayan01/tiny-bert-mnli-distilled | 14 | null | transformers | 10,004 | Entry not found |
ArthurZ/opt-2.7b | bc875b9bf6fe726b9612592d10663b1de9c8cdd7 | 2022-06-21T15:25:46.000Z | [
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"transformers"
]
| text-generation | false | ArthurZ | null | ArthurZ/opt-2.7b | 14 | null | transformers | 10,005 | Entry not found |
ericntay/bio_bert_ft | 35d8d2e119bbdb67c0052c52dab089e4d18c7505 | 2022-06-22T15:24:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ericntay | null | ericntay/bio_bert_ft | 14 | null | transformers | 10,006 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bio_bert_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bio_bert_ft
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0747
- F1: 0.8621
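A minimal usage sketch, assuming the checkpoint is queried through the token-classification pipeline: the example sentence is arbitrary, and the entity label set is not documented in this card, so the output labels are whatever the fine-tuned head predicts.
```python
from transformers import pipeline

# Run the fine-tuned checkpoint as a token-classification (NER) pipeline.
ner = pipeline("token-classification", model="ericntay/bio_bert_ft", aggregation_strategy="simple")
print(ner("Mutations in the BRCA1 gene increase the risk of breast cancer."))
```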
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0879 | 1.0 | 170 | 0.0400 | 0.8312 |
| 0.0211 | 2.0 | 340 | 0.0454 | 0.8413 |
| 0.0105 | 3.0 | 510 | 0.0503 | 0.8603 |
| 0.0045 | 4.0 | 680 | 0.0497 | 0.8496 |
| 0.0028 | 5.0 | 850 | 0.0759 | 0.8387 |
| 0.0019 | 6.0 | 1020 | 0.0654 | 0.8598 |
| 0.0011 | 7.0 | 1190 | 0.0667 | 0.8654 |
| 0.0005 | 8.0 | 1360 | 0.0702 | 0.8621 |
| 0.0003 | 9.0 | 1530 | 0.0739 | 0.8596 |
| 0.0002 | 10.0 | 1700 | 0.0747 | 0.8621 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
truongxl/NER_VLSP2018 | 91f4a2a56e3448e5f4a3dedf67c4883fb9f4d892 | 2022-06-24T01:36:12.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | truongxl | null | truongxl/NER_VLSP2018 | 14 | null | transformers | 10,007 | Entry not found |
KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head | cbbba0e2270dac7a3f7b550fbef06a33000a1b1f | 2022-07-23T14:43:51.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| question-answering | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head | 14 | null | transformers | 10,008 | ---
language:
- "ja"
tags:
- "japanese"
- "wikipedia"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
  context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
  context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
  context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# deberta-base-japanese-wikipedia-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on Japanese Wikipedia and 青空文庫 texts for dependency-parsing (head-detection on long-unit-words) cast as question-answering, derived from [deberta-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-wikipedia) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when the word specified as `question` appears more than once.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
question="国語"
context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs=tokenizer(question,context,return_tensors="pt",return_offsets_mapping=True)
offsets=inputs.pop("offset_mapping").tolist()[0]
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
  def __init__(self,bert):
    import os
    from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
      AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
    x=AutoModelForTokenClassification.from_pretrained
    if os.path.isdir(bert):
      d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
    else:
      from transformers.file_utils import hf_bucket_url
      c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
      d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
      s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
      t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
    self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
      aggregation_strategy="simple")
    self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
    z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
    r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
    v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
    for i,t in enumerate(v):
      q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
      c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
    b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
    with torch.no_grad():
      d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
        token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
    s,e=d.start_logits.tolist(),d.end_logits.tolist()
    for i in range(n):
      for j in range(n):
        m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      i=([p for s,e,p in w]+["root"]).index("root")
      j=i+1 if i<n else numpy.nanargmax(m[:,0])
      m[0:j,0]=m[j+1:,0]=numpy.nan
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    u="# text = "+text.replace("\n"," ")+"\n"
    for i,(s,e,p) in enumerate(w,1):
      p="root" if h[i]==0 else "dep" if p=="root" else p
      u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
        str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"
nlp=TransformersUD("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
## Reference
安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
|
lunde/gpt2-snapsvisor | d0e88c75b866fb661a1486e3b96d0e5efd0f8b16 | 2022-06-27T04:46:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"sv",
"transformers"
]
| text-generation | false | lunde | null | lunde/gpt2-snapsvisor | 14 | null | transformers | 10,009 | ---
language: sv
tags:
- gpt2
widget:
- text: "Nu tar vi e nubbe"
- text: "Spriten den"
---
# GPT2-Snapsvisor
This model was trained on snapsvisor (Swedish drinking songs) scraped from various websites.
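A minimal generation sketch, using one of the widget prompts from the card metadata; the decoding settings are illustrative only.
```python
from transformers import pipeline

# Generate a snapsvisa continuation from one of the widget prompts.
generator = pipeline("text-generation", model="lunde/gpt2-snapsvisor")
print(generator("Nu tar vi e nubbe", max_length=50, num_return_sequences=1)[0]["generated_text"])
```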
TODO: Fill in the rest |
Farshid/finetuning-finetuned-financial-phrasebank-75 | e9ffe3edbc0d4ddee14bbcca6e2ca4850a88840a | 2022-06-28T11:34:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:financial_phrasebank",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Farshid | null | Farshid/finetuning-finetuned-financial-phrasebank-75 | 14 | null | transformers | 10,010 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
- f1
model-index:
- name: finetuning-finetuned-financial-phrasebank-75
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_75agree
metrics:
- name: Accuracy
type: accuracy
value: 0.9217391304347826
- name: F1
type: f1
value: 0.9222750587883506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-finetuned-financial-phrasebank-75
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5900
- Accuracy: 0.9217
- F1: 0.9223
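A minimal usage sketch, assuming the checkpoint is queried through the text-classification pipeline: the example sentence is arbitrary, and the mapping from raw label ids to negative/neutral/positive is not documented here, so only the raw prediction is printed.
```python
from transformers import pipeline

# Classify the sentiment of a financial sentence; prints the raw label and score.
classifier = pipeline("text-classification",
                      model="Farshid/finetuning-finetuned-financial-phrasebank-75")
print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period."))
```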
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6566 | 1.0 | 22 | 0.4940 | 0.8435 | 0.8448 |
| 0.4136 | 2.0 | 44 | 0.2829 | 0.9188 | 0.9195 |
| 0.2286 | 3.0 | 66 | 0.2404 | 0.9159 | 0.9159 |
| 0.1389 | 4.0 | 88 | 0.2527 | 0.9275 | 0.9280 |
| 0.0921 | 5.0 | 110 | 0.2555 | 0.9333 | 0.9336 |
| 0.0634 | 6.0 | 132 | 0.2987 | 0.9159 | 0.9168 |
| 0.0441 | 7.0 | 154 | 0.3032 | 0.9188 | 0.9198 |
| 0.0354 | 8.0 | 176 | 0.3167 | 0.9246 | 0.9253 |
| 0.0184 | 9.0 | 198 | 0.3296 | 0.9217 | 0.9220 |
| 0.0182 | 10.0 | 220 | 0.3284 | 0.9304 | 0.9305 |
| 0.0119 | 11.0 | 242 | 0.3513 | 0.9217 | 0.9223 |
| 0.008 | 12.0 | 264 | 0.4218 | 0.9217 | 0.9225 |
| 0.0068 | 13.0 | 286 | 0.4115 | 0.9188 | 0.9197 |
| 0.0077 | 14.0 | 308 | 0.4209 | 0.9159 | 0.9169 |
| 0.0037 | 15.0 | 330 | 0.4120 | 0.9217 | 0.9220 |
| 0.0026 | 16.0 | 352 | 0.4134 | 0.9159 | 0.9161 |
| 0.0026 | 17.0 | 374 | 0.4230 | 0.9217 | 0.9221 |
| 0.0025 | 18.0 | 396 | 0.4335 | 0.9275 | 0.9278 |
| 0.0016 | 19.0 | 418 | 0.4538 | 0.9188 | 0.9187 |
| 0.0022 | 20.0 | 440 | 0.4518 | 0.9188 | 0.9192 |
| 0.0011 | 21.0 | 462 | 0.4653 | 0.9159 | 0.9167 |
| 0.0011 | 22.0 | 484 | 0.4713 | 0.9159 | 0.9167 |
| 0.0008 | 23.0 | 506 | 0.4585 | 0.9217 | 0.9223 |
| 0.0007 | 24.0 | 528 | 0.4525 | 0.9275 | 0.9278 |
| 0.0007 | 25.0 | 550 | 0.4582 | 0.9304 | 0.9307 |
| 0.0009 | 26.0 | 572 | 0.4689 | 0.9275 | 0.9279 |
| 0.0006 | 27.0 | 594 | 0.4783 | 0.9275 | 0.9279 |
| 0.0019 | 28.0 | 616 | 0.4784 | 0.9275 | 0.9279 |
| 0.0013 | 29.0 | 638 | 0.4884 | 0.9246 | 0.9253 |
| 0.0006 | 30.0 | 660 | 0.5065 | 0.9217 | 0.9217 |
| 0.0036 | 31.0 | 682 | 0.4800 | 0.9246 | 0.9253 |
| 0.0007 | 32.0 | 704 | 0.4643 | 0.9304 | 0.9305 |
| 0.0009 | 33.0 | 726 | 0.4633 | 0.9275 | 0.9279 |
| 0.0004 | 34.0 | 748 | 0.4787 | 0.9275 | 0.9279 |
| 0.0004 | 35.0 | 770 | 0.4912 | 0.9217 | 0.9224 |
| 0.0004 | 36.0 | 792 | 0.4693 | 0.9246 | 0.9247 |
| 0.0004 | 37.0 | 814 | 0.4962 | 0.9246 | 0.9251 |
| 0.0004 | 38.0 | 836 | 0.5034 | 0.9246 | 0.9251 |
| 0.0003 | 39.0 | 858 | 0.5096 | 0.9188 | 0.9197 |
| 0.0003 | 40.0 | 880 | 0.5065 | 0.9246 | 0.9252 |
| 0.0003 | 41.0 | 902 | 0.4894 | 0.9246 | 0.9244 |
| 0.0005 | 42.0 | 924 | 0.5419 | 0.9159 | 0.9168 |
| 0.0016 | 43.0 | 946 | 0.5230 | 0.9217 | 0.9225 |
| 0.0003 | 44.0 | 968 | 0.5272 | 0.9159 | 0.9169 |
| 0.0003 | 45.0 | 990 | 0.4794 | 0.9275 | 0.9275 |
| 0.0003 | 46.0 | 1012 | 0.5131 | 0.9217 | 0.9223 |
| 0.0005 | 47.0 | 1034 | 0.5256 | 0.9246 | 0.9242 |
| 0.0004 | 48.0 | 1056 | 0.5571 | 0.9159 | 0.9168 |
| 0.0003 | 49.0 | 1078 | 0.5412 | 0.9246 | 0.9252 |
| 0.0005 | 50.0 | 1100 | 0.5465 | 0.9217 | 0.9225 |
| 0.0013 | 51.0 | 1122 | 0.5324 | 0.9333 | 0.9337 |
| 0.0002 | 52.0 | 1144 | 0.5284 | 0.9333 | 0.9337 |
| 0.0002 | 53.0 | 1166 | 0.5301 | 0.9304 | 0.9308 |
| 0.0002 | 54.0 | 1188 | 0.5317 | 0.9275 | 0.9280 |
| 0.0002 | 55.0 | 1210 | 0.5476 | 0.9246 | 0.9252 |
| 0.001 | 56.0 | 1232 | 0.5277 | 0.9333 | 0.9335 |
| 0.0002 | 57.0 | 1254 | 0.5387 | 0.9246 | 0.9251 |
| 0.0005 | 58.0 | 1276 | 0.5505 | 0.9246 | 0.9253 |
| 0.0006 | 59.0 | 1298 | 0.5400 | 0.9304 | 0.9306 |
| 0.0022 | 60.0 | 1320 | 0.5788 | 0.9159 | 0.9169 |
| 0.0002 | 61.0 | 1342 | 0.5504 | 0.9275 | 0.9277 |
| 0.0003 | 62.0 | 1364 | 0.5686 | 0.9275 | 0.9275 |
| 0.0002 | 63.0 | 1386 | 0.5653 | 0.9159 | 0.9165 |
| 0.0002 | 64.0 | 1408 | 0.5700 | 0.9188 | 0.9194 |
| 0.0002 | 65.0 | 1430 | 0.5705 | 0.9188 | 0.9194 |
| 0.0002 | 66.0 | 1452 | 0.5687 | 0.9159 | 0.9165 |
| 0.0003 | 67.0 | 1474 | 0.5971 | 0.9159 | 0.9168 |
| 0.0002 | 68.0 | 1496 | 0.5979 | 0.9188 | 0.9196 |
| 0.0009 | 69.0 | 1518 | 0.5905 | 0.9217 | 0.9223 |
| 0.0002 | 70.0 | 1540 | 0.5845 | 0.9188 | 0.9192 |
| 0.0003 | 71.0 | 1562 | 0.5942 | 0.9217 | 0.9223 |
| 0.0002 | 72.0 | 1584 | 0.5948 | 0.9217 | 0.9223 |
| 0.0002 | 73.0 | 1606 | 0.5943 | 0.9217 | 0.9223 |
| 0.0006 | 74.0 | 1628 | 0.5931 | 0.9217 | 0.9223 |
| 0.0002 | 75.0 | 1650 | 0.5927 | 0.9217 | 0.9223 |
| 0.0002 | 76.0 | 1672 | 0.5940 | 0.9217 | 0.9223 |
| 0.0002 | 77.0 | 1694 | 0.5937 | 0.9217 | 0.9223 |
| 0.0002 | 78.0 | 1716 | 0.5911 | 0.9217 | 0.9223 |
| 0.0006 | 79.0 | 1738 | 0.5900 | 0.9217 | 0.9223 |
| 0.0002 | 80.0 | 1760 | 0.5900 | 0.9217 | 0.9223 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.1+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
UT/BRTW_DEBIAS_SHORT | 7c9a809e49210ad096be8bb80f87e176288a5843 | 2022-06-27T08:02:08.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | UT | null | UT/BRTW_DEBIAS_SHORT | 14 | null | transformers | 10,011 | Entry not found |
canlinzhang/take-home-assign-tuned-from-dmis-lab-biobert-v1.1 | ef2e777d6bbb9008e9d2cbb836537412bcc5690c | 2022-06-28T04:21:20.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | canlinzhang | null | canlinzhang/take-home-assign-tuned-from-dmis-lab-biobert-v1.1 | 14 | null | transformers | 10,012 | Entry not found |
elliotthwang/mt5-small-finetuned-xlsum-chinese-tradition | 086c2ebe7d530ff051512dfb357ad10ac04896b2 | 2022-06-29T12:09:17.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | elliotthwang | null | elliotthwang/mt5-small-finetuned-xlsum-chinese-tradition | 14 | null | transformers | 10,013 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-xlsum-chinese-tradition
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: chinese_traditional
metrics:
- name: Rouge1
type: rouge
value: 0.2578
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-xlsum-chinese-tradition
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.2578
- Rouge2: 0.0176
- Rougel: 0.2519
- Rougelsum: 0.2542
- Gen Len: 6.094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 18687 | nan | 0.2578 | 0.0176 | 0.2519 | 0.2542 | 6.094 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS | d39725051d82b72a6fad260b31056a78f16c894e | 2022-06-29T16:01:02.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | ZeyadAhmed | null | ZeyadAhmed/AraElectra-Arabic-SQuADv2-CLS | 14 | null | transformers | 10,014 | Entry not found |
domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed | 2fc5157584ece9f5e066e46ad951262052fe610f | 2022-06-30T06:46:17.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | domenicrosati | null | domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed | 14 | null | transformers | 10,015 | ---
license: mit
tags:
- fill-mask
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large-dapt-scientific-papers-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-dapt-scientific-papers-pubmed
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4729
- Accuracy: 0.3510
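A minimal fill-mask sketch, assuming the usual `[MASK]` token for DeBERTa tokenizers; the example sentence is arbitrary.
```python
from transformers import pipeline

# Query the domain-adapted checkpoint with a masked biomedical sentence.
fill_mask = pipeline("fill-mask", model="domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed")
for prediction in fill_mask("Aspirin inhibits [MASK] aggregation."):
    print(prediction["token_str"], round(prediction["score"], 3))
```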
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- training_steps: 21600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 12.0315 | 0.02 | 500 | 11.6840 | 0.0 |
| 11.0675 | 0.05 | 1000 | 8.9471 | 0.0226 |
| 8.6646 | 0.07 | 1500 | 8.0093 | 0.0344 |
| 8.3625 | 0.09 | 2000 | 7.9624 | 0.0274 |
| 8.2467 | 0.12 | 2500 | 7.6599 | 0.0376 |
| 7.9714 | 0.14 | 3000 | 7.6716 | 0.0316 |
| 7.9852 | 0.16 | 3500 | 7.4535 | 0.0385 |
| 7.7502 | 0.19 | 4000 | 7.4293 | 0.0429 |
| 7.7016 | 0.21 | 4500 | 7.3576 | 0.0397 |
| 7.5789 | 0.23 | 5000 | 7.3124 | 0.0513 |
| 7.4141 | 0.25 | 5500 | 7.1353 | 0.0634 |
| 7.2365 | 0.28 | 6000 | 6.8600 | 0.0959 |
| 7.0725 | 0.3 | 6500 | 6.5743 | 0.1150 |
| 6.934 | 0.32 | 7000 | 6.3674 | 0.1415 |
| 6.7219 | 0.35 | 7500 | 6.3467 | 0.1581 |
| 6.5039 | 0.37 | 8000 | 6.1312 | 0.1815 |
| 6.3096 | 0.39 | 8500 | 5.9080 | 0.2134 |
| 6.1835 | 0.42 | 9000 | 5.8414 | 0.2137 |
| 6.0939 | 0.44 | 9500 | 5.5137 | 0.2553 |
| 6.0457 | 0.46 | 10000 | 5.5881 | 0.2545 |
| 5.8851 | 0.49 | 10500 | 5.5134 | 0.2497 |
| 5.7277 | 0.51 | 11000 | 5.3023 | 0.2699 |
| 5.6183 | 0.53 | 11500 | 5.0074 | 0.3019 |
| 5.4978 | 0.56 | 12000 | 5.1822 | 0.2814 |
| 5.5916 | 0.58 | 12500 | 5.1211 | 0.2808 |
| 5.4749 | 0.6 | 13000 | 4.9126 | 0.2972 |
| 5.3765 | 0.62 | 13500 | 5.0468 | 0.2899 |
| 5.3529 | 0.65 | 14000 | 4.8160 | 0.3037 |
| 5.2993 | 0.67 | 14500 | 4.8598 | 0.3141 |
| 5.2929 | 0.69 | 15000 | 4.9669 | 0.3052 |
| 5.2649 | 0.72 | 15500 | 4.7849 | 0.3270 |
| 5.162 | 0.74 | 16000 | 4.6819 | 0.3357 |
| 5.1639 | 0.76 | 16500 | 4.6056 | 0.3275 |
| 5.1245 | 0.79 | 17000 | 4.5473 | 0.3311 |
| 5.1596 | 0.81 | 17500 | 4.7008 | 0.3212 |
| 5.1346 | 0.83 | 18000 | 4.7932 | 0.3192 |
| 5.1174 | 0.86 | 18500 | 4.7624 | 0.3208 |
| 5.1152 | 0.88 | 19000 | 4.6388 | 0.3274 |
| 5.0852 | 0.9 | 19500 | 4.5247 | 0.3305 |
| 5.0564 | 0.93 | 20000 | 4.6982 | 0.3161 |
| 5.0179 | 0.95 | 20500 | 4.5363 | 0.3389 |
| 5.07 | 0.97 | 21000 | 4.6647 | 0.3307 |
| 5.0781 | 1.0 | 21500 | 4.4729 | 0.3510 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bazyl/gtsrb-model | 635fde5fe2fa293ecdcf8a58c824c52dd682980e | 2022-07-03T18:40:29.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:gtsrb",
"transformers",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| image-classification | false | bazyl | null | bazyl/gtsrb-model | 14 | null | transformers | 10,016 | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- gtsrb
metrics:
- accuracy
model-index:
- name: gtsrb-model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: bazyl/GTSRB
type: gtsrb
args: gtsrb
metrics:
- name: Accuracy
type: accuracy
value: 0.9993199591975519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gtsrb-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the bazyl/GTSRB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0034
- Accuracy: 0.9993
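A minimal inference sketch via the image-classification pipeline; `sign.jpg` is a placeholder path for any local traffic-sign image.
```python
from transformers import pipeline

# Classify a traffic-sign image with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="bazyl/gtsrb-model")
print(classifier("sign.jpg"))  # placeholder path; substitute a real GTSRB-style image
```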
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2593 | 1.0 | 4166 | 0.1585 | 0.9697 |
| 0.2659 | 2.0 | 8332 | 0.0472 | 0.9900 |
| 0.2825 | 3.0 | 12498 | 0.0155 | 0.9971 |
| 0.0953 | 4.0 | 16664 | 0.0113 | 0.9983 |
| 0.1277 | 5.0 | 20830 | 0.0076 | 0.9985 |
| 0.0816 | 6.0 | 24996 | 0.0047 | 0.9988 |
| 0.0382 | 7.0 | 29162 | 0.0041 | 0.9990 |
| 0.0983 | 8.0 | 33328 | 0.0059 | 0.9990 |
| 0.1746 | 9.0 | 37494 | 0.0034 | 0.9993 |
| 0.1153 | 10.0 | 41660 | 0.0038 | 0.9990 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tner/bert-large-tweetner-2020 | 1d922be425ef89569b7d266e704bd6dfb7019cfd | 2022-07-08T13:28:22.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tner | null | tner/bert-large-tweetner-2020 | 14 | null | transformers | 10,017 | Entry not found |
djagatiya/ner-distilbert-base-uncased-ontonotesv5-englishv4 | eecb058b98b9ac21c0772261eab0c9d4b3340ba8 | 2022-07-03T11:28:26.000Z | [
"pytorch",
"distilbert",
"token-classification",
"dataset:djagatiya/ner-ontonotes-v5-eng-v4",
"transformers",
"autotrain_compatible"
]
| token-classification | false | djagatiya | null | djagatiya/ner-distilbert-base-uncased-ontonotesv5-englishv4 | 14 | null | transformers | 10,018 | ---
tags:
- token-classification
datasets:
- djagatiya/ner-ontonotes-v5-eng-v4
---
# (NER) distilbert-base-uncased : conll2012_ontonotesv5-english-v4
This **distilbert-base-uncased** NER model was fine-tuned on the **conll2012_ontonotesv5-english-v4** dataset. <br>
Check out [NER-System Repository](https://github.com/djagatiya/NER-System) for more information.
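A minimal usage sketch, assuming the checkpoint is queried through the token-classification pipeline; the example sentence is arbitrary, and the entity types are those listed in the evaluation report below.
```python
from transformers import pipeline

# Tag an arbitrary sentence with the OntoNotes v5 entity types.
ner = pipeline("token-classification",
               model="djagatiya/ner-distilbert-base-uncased-ontonotesv5-englishv4",
               aggregation_strategy="simple")
print(ner("Barack Obama visited Paris in 2015 and met with the United Nations."))
```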
## Evaluation
- Precision: 84.60
- Recall: 86.47
- F1-Score: 85.53
> check out this [eval.log](eval.log) file for evaluation metrics and classification report.
```
              precision    recall  f1-score   support

     CARDINAL       0.84      0.86      0.85       935
         DATE       0.83      0.88      0.85      1602
        EVENT       0.57      0.57      0.57        63
          FAC       0.55      0.62      0.58       135
          GPE       0.95      0.92      0.94      2240
     LANGUAGE       0.82      0.64      0.72        22
          LAW       0.50      0.50      0.50        40
          LOC       0.55      0.72      0.62       179
        MONEY       0.87      0.89      0.88       314
         NORP       0.85      0.89      0.87       841
      ORDINAL       0.81      0.88      0.84       195
          ORG       0.81      0.83      0.82      1795
      PERCENT       0.87      0.89      0.88       349
       PERSON       0.93      0.93      0.93      1988
      PRODUCT       0.55      0.55      0.55        76
     QUANTITY       0.71      0.80      0.75       105
         TIME       0.59      0.66      0.62       212
  WORK_OF_ART       0.42      0.44      0.43       166

    micro avg       0.85      0.86      0.86     11257
    macro avg       0.72      0.75      0.73     11257
 weighted avg       0.85      0.86      0.86     11257
```
|
tau/spider-nq-question-encoder | 79c5e770e1781d968ce6252b6f2d2c9a5b99811b | 2022-07-04T08:56:13.000Z | [
"pytorch",
"dpr",
"feature-extraction",
"transformers"
]
| feature-extraction | false | tau | null | tau/spider-nq-question-encoder | 14 | null | transformers | 10,019 | Entry not found |
twieland/MIX4_ja-en_helsinki | 6918ca885132c0dc9b5d9f5639ddaf30bc474511 | 2022-07-07T05:38:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | twieland | null | twieland/MIX4_ja-en_helsinki | 14 | null | transformers | 10,020 | Entry not found |
mesolitica/t5-tiny-finetuned-noisy-ms-en | bfc0dd089082ab7b2b34c29e395aeca512fdf57f | 2022-07-13T08:08:20.000Z | [
"pytorch",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mesolitica | null | mesolitica/t5-tiny-finetuned-noisy-ms-en | 14 | null | transformers | 10,021 | ---
tags:
- generated_from_keras_callback
model-index:
- name: t5-tiny-finetuned-noisy-ms-en
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-tiny-finetuned-noisy-ms-en
This model was fine-tuned from the `t5-tiny-social-media-2021-11-15.tar.gz` checkpoint at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5, on https://huggingface.co/datasets/mesolitica/ms-en and https://huggingface.co/datasets/mesolitica/noisy-ms-en-augmentation.
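A minimal usage sketch via the text2text-generation pipeline; the input sentence is an arbitrary noisy-Malay example, and whether the checkpoint expects a task prefix is not documented in this card, so the call may need adjusting.
```python
from transformers import pipeline

# Translate noisy Malay social-media text to English.
translator = pipeline("text2text-generation", model="mesolitica/t5-tiny-finetuned-noisy-ms-en")
print(translator("ak tak paham la apa yang dia cakap tadi", max_length=64)[0]["generated_text"])
```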
## Evaluation
### evaluation set
It achieves the following results on the evaluation set using SacreBLEU from [t5-tiny-noisy-ms-en-huggingface.ipynb](t5-tiny-noisy-ms-en-huggingface.ipynb):
```
{'name': 'BLEU',
'score': 65.9069151371865,
'_mean': -1.0,
'_ci': -1.0,
'_verbose': '83.0/69.3/60.7/54.1 (BP = 1.000 ratio = 1.001 hyp_len = 2003273 ref_len = 2001100)',
'bp': 1.0,
'counts': [1662910, 1327225, 1108852, 941870],
'totals': [2003273, 1915678, 1828247, 1741231],
'sys_len': 2003273,
'ref_len': 2001100,
'precisions': [83.00965470008332,
69.28225933585915,
60.651104582695886,
54.09219109928551],
'prec_str': '83.0/69.3/60.7/54.1',
'ratio': 1.0010859027534855}
```
**The test set is from a semisupervised model, so this model might generate better results than the semisupervised model**.
### FLORES200
It achieved the following results on the [NLLB 200 test set](https://github.com/facebookresearch/flores/tree/main/flores200) using SacreBLEU from [sacrebleu-mesolitica-t5-tiny-finetuned-noisy-ms-en-flores200.ipynb](sacrebleu-mesolitica-t5-tiny-finetuned-noisy-ms-en-flores200.ipynb),
```
chrF2++ = 59.91
```
### Framework versions
- Transformers 4.19.0
- TensorFlow 2.6.0
- Datasets 2.1.0
- Tokenizers 0.12.1 |
eplatas/distilroberta-base-finetuned-wikitext2 | aab33f5e5dd3b77218ce245a9f84b0d88d2ff6de | 2022-07-08T01:58:11.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | eplatas | null | eplatas/distilroberta-base-finetuned-wikitext2 | 14 | null | transformers | 10,022 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8359
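A minimal fill-mask sketch; note that RoBERTa-style tokenizers use `<mask>` rather than `[MASK]`, and the example sentence is arbitrary.
```python
from transformers import pipeline

# Query the fine-tuned checkpoint with a masked sentence.
fill_mask = pipeline("fill-mask", model="eplatas/distilroberta-base-finetuned-wikitext2")
for prediction in fill_mask("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```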
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 87 | 1.9893 |
| No log | 2.0 | 174 | 1.9055 |
| No log | 3.0 | 261 | 1.8187 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
KD02/distilbert-base-uncased-finetuned-squad | dac3a7a0323ebd274c5379c7c4c60e4d33ea505c | 2022-07-11T19:37:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | KD02 | null | KD02/distilbert-base-uncased-finetuned-squad | 14 | null | transformers | 10,023 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [KD02/distilbert-base-uncased-finetuned-squad](https://huggingface.co/KD02/distilbert-base-uncased-finetuned-squad) on the squad dataset.
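A minimal extractive-QA sketch via the question-answering pipeline; the question and context are arbitrary examples.
```python
from transformers import pipeline

# Extract an answer span from a context passage.
qa = pipeline("question-answering", model="KD02/distilbert-base-uncased-finetuned-squad")
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
))
```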
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ilmariky/bert-base-finnish-cased-squad2-fi | 4110820ede502853900ba854f4796227e3e08902 | 2022-07-29T07:54:28.000Z | [
"pytorch",
"bert",
"question-answering",
"fi",
"dataset:SQuAD_v2_fi + Finnish partition of TyDi-QA",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
]
| question-answering | false | ilmariky | null | ilmariky/bert-base-finnish-cased-squad2-fi | 14 | null | transformers | 10,024 | ---
language: fi
datasets:
- SQuAD_v2_fi + Finnish partition of TyDi-QA
license: gpl-3.0
---
# bert-base-finnish-cased-v1 for QA
This is the [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model, fine-tuned using an automatically translated [Finnish version of the SQuAD2.0 dataset](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) in combination with the Finnish partition of the [TyDi-QA](https://github.com/google-research-datasets/tydiqa) dataset. It's been trained on question-answer pairs, **including unanswerable questions**, for the task of question answering.
When the model classifies the question as unanswerable, it outputs "[CLS]". There is also a QA model available that does not try to identify unanswerable questions, [bert-base-finnish-cased-squad1-fi](https://huggingface.co/ilmariky/bert-base-finnish-cased-squad1-fi).
## Overview
**Language model:** bert-base-finnish-cased-v1
**Language:** Finnish
**Downstream-task:** Extractive QA
**Training data:** [Finnish SQuAD 2.0](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) + Finnish partition of TyDi-QA
**Eval data:** [Finnish SQuAD 2.0](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) + Finnish partition of TyDi-QA
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "ilmariky/bert-base-finnish-cased-squad2-fi"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Mikä tämä on?',
    'context': 'Tämä on testi.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated with a slightly modified version of the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
{
"exact": 55.53157042633567,
"f1": 61.869335312255835,
"total": 7412,
"HasAns_exact": 51.26503525508088,
"HasAns_f1": 61.006950090095565,
"HasAns_total": 4822,
"NoAns_exact": 63.47490347490348,
"NoAns_f1": 63.47490347490348,
"NoAns_total": 2590
}
```
|
eus/testes | 5966666d6ccb8b18b23483bf44831847667de14e | 2022-07-13T23:09:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | eus | null | eus/testes | 14 | null | transformers | 10,025 | Entry not found |
fujiki/gpt2-xl-rand_ja-gpt2-medium | 4f8416c46db467749153bf74f21e842a620a67e7 | 2022-07-13T11:25:49.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"dataset:fujiki/pretrain-corpus-train_100k-valid_full",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-generation | false | fujiki | null | fujiki/gpt2-xl-rand_ja-gpt2-medium | 14 | null | transformers | 10,026 | ---
tags:
- generated_from_trainer
datasets:
- fujiki/pretrain-corpus-train_100k-valid_full
model-index:
- name: gpt2-xl-rand_ja-gpt2-medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-rand_ja-gpt2-medium
This model was trained from scratch on the fujiki/pretrain-corpus-train_100k-valid_full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 2.3.2
- Tokenizers 0.11.6
|
jinwooChoi/SKKU_AP_SA_KOBERT | 83b26b88e71ca6538cb9c3a4117a84dc1f06f536 | 2022-07-22T09:10:42.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_KOBERT | 14 | null | transformers | 10,027 | Entry not found |
codeparrot/codeparrot-small-complexity-prediction | 55f486f73defe4bb52bc7d2f47e61699b564ed12 | 2022-07-17T15:44:18.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | codeparrot | null | codeparrot/codeparrot-small-complexity-prediction | 14 | null | transformers | 10,028 | ---
license: apache-2.0
---
This is a fine-tuned version of [codeparrot-small-multi](https://huggingface.co/codeparrot/codeparrot-small-multi), a 110M multilingual model for code generation, on [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex), a dataset for complexity prediction of Java code. |
AbdalK25/DialoGPT-small-TheWiseBot | d90858cc3ee08d4812ed0f7a8d90bde2949786de | 2022-07-16T20:55:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | AbdalK25 | null | AbdalK25/DialoGPT-small-TheWiseBot | 14 | null | transformers | 10,029 | ---
tags:
- conversational
---
# The Wise DialoGPT Model |
domenicrosati/pegasus-xsum-finetuned-paws-parasci | 9e23040873cea86c96675f7ad4fd5bc55ee7524d | 2022-07-18T15:35:07.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"paraphrasing",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | domenicrosati | null | domenicrosati/pegasus-xsum-finetuned-paws-parasci | 14 | null | transformers | 10,030 | ---
tags:
- paraphrasing
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-xsum-finetuned-paws-parasci
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-finetuned-paws-parasci
This model is a fine-tuned version of [domenicrosati/pegasus-xsum-finetuned-paws](https://huggingface.co/domenicrosati/pegasus-xsum-finetuned-paws) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2256
- Rouge1: 61.8854
- Rouge2: 43.1061
- Rougel: 57.421
- Rougelsum: 57.4417
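A minimal paraphrasing sketch via the text2text-generation pipeline; the input sentence and decoding settings are illustrative, not the authors' evaluation setup.
```python
from transformers import pipeline

# Generate a paraphrase of an arbitrary sentence.
paraphraser = pipeline("text2text-generation", model="domenicrosati/pegasus-xsum-finetuned-paws-parasci")
print(paraphraser("The experiment demonstrates that the proposed method outperforms the baseline.",
                  num_beams=5, max_length=60)[0]["generated_text"])
```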
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.05 | 1000 | 3.8024 | 49.471 | 24.8024 | 43.4857 | 43.5552 |
| No log | 0.09 | 2000 | 3.6533 | 49.1046 | 24.4038 | 43.0189 | 43.002 |
| No log | 0.14 | 3000 | 3.5867 | 49.5026 | 24.748 | 43.3059 | 43.2923 |
| No log | 0.19 | 4000 | 3.5613 | 49.4319 | 24.5444 | 43.2225 | 43.1965 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
WYHu/cve2cpe_xlnet | 9760da6c93051a4633725b0627375f8b9e4788f5 | 2022-07-19T02:53:35.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | WYHu | null | WYHu/cve2cpe_xlnet | 14 | null | transformers | 10,031 | Entry not found |
doya/klue-sentiment-everybodyscorpus-postive-boosting | fb4c2ec74554f553fa753d28def38c2d76a41c6e | 2022-07-19T08:39:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | doya | null | doya/klue-sentiment-everybodyscorpus-postive-boosting | 14 | null | transformers | 10,032 | Entry not found |
kalpeshk2011/rankgen-t5-xl-all | d8a7f62f6bb3240ad20c6e5b5d2f084856d8be0d | 2022-07-23T16:20:38.000Z | [
"pytorch",
"t5",
"en",
"dataset:Wikipedia",
"dataset:PG19",
"dataset:Project Gutenberg",
"dataset:C4",
"dataset:relic",
"dataset:ChapterBreak",
"dataset:HellaSwag",
"dataset:ROCStories",
"transformers",
"contrastive learning",
"ranking",
"decoding",
"metric learning",
"text generation",
"retrieval",
"license:apache-2.0"
]
| null | false | kalpeshk2011 | null | kalpeshk2011/rankgen-t5-xl-all | 14 | null | transformers | 10,033 | ---
language:
- en
thumbnail: "https://pbs.twimg.com/media/FThx_rEWAAEoujW?format=jpg&name=medium"
tags:
- t5
- contrastive learning
- ranking
- decoding
- metric learning
- pytorch
- text generation
- retrieval
license: "apache-2.0"
datasets:
- Wikipedia
- PG19
- Project Gutenberg
- C4
- relic
- ChapterBreak
- HellaSwag
- ROCStories
metrics:
- MAUVE
- human
---
## Main repository
https://github.com/martiansideofthemoon/rankgen
## What is RankGen?
RankGen is a suite of encoder models (100M-1.2B parameters) which map prefixes and generations from any pretrained English language model to a shared vector space. RankGen can be used to rerank multiple full-length samples from an LM, and it can also be incorporated as a scoring function into beam search to significantly improve generation quality (0.85 vs 0.77 MAUVE, 75% preference according to human annotators who are English writers). RankGen can also be used like a dense retriever, and achieves state-of-the-art performance on [literary retrieval](https://relic.cs.umass.edu/leaderboard.html).
## Setup
**Requirements** (`pip` will install these dependencies for you)
Python 3.7+, `torch` (CUDA recommended), `transformers`
**Installation**
```
python3.7 -m virtualenv rankgen-venv
source rankgen-venv/bin/activate
pip install rankgen
```
Get the data [here](https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4?usp=sharing) and place the folder in the root directory. Alternatively, use `gdown` as shown below,
```
gdown --folder https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4
```
Run the test script to make sure the RankGen checkpoint has loaded correctly,
```
python -m rankgen.test_rankgen_encoder --model_path kalpeshk2011/rankgen-t5-base-all
### Expected output
0.0009239262409127233
0.0011521980725477804
```
## Using RankGen
Loading RankGen is simple using the HuggingFace APIs (see Method-2 below), but we suggest using [`RankGenEncoder`](https://github.com/martiansideofthemoon/rankgen/blob/master/rankgen/rankgen_encoder.py), which is a small wrapper around the HuggingFace APIs for correctly preprocessing data and doing tokenization automatically. You can either download [our repository](https://github.com/martiansideofthemoon/rankgen) and install the API, or copy the implementation from [below](#rankgenencoder-implementation).
#### [SUGGESTED] Method-1: Loading the model with RankGenEncoder
```
from rankgen import RankGenEncoder, RankGenGenerator
rankgen_encoder = RankGenEncoder("kalpeshk2011/rankgen-t5-xl-all")
# Encoding vectors
prefix_vectors = rankgen_encoder.encode(["This is a prefix sentence."], vectors_type="prefix")
suffix_vectors = rankgen_encoder.encode(["This is a suffix sentence."], vectors_type="suffix")
# Generating text
# use a HuggingFace compatible language model
generator = RankGenGenerator(rankgen_encoder=rankgen_encoder, language_model="gpt2-medium")
inputs = ["Whatever might be the nature of the tragedy it would be over with long before this, and those moving black spots away yonder to the west, that he had discerned from the bluff, were undoubtedly the departing raiders. There was nothing left for Keith to do except determine the fate of the unfortunates, and give their bodies decent burial. That any had escaped, or yet lived, was altogether unlikely, unless, perchance, women had been in the party, in which case they would have been borne away prisoners."]
# Baseline nucleus sampling
print(generator.generate_single(inputs, top_p=0.9)[0][0])
# Over-generate and re-rank
print(generator.overgenerate_rerank(inputs, top_p=0.9, num_samples=10)[0][0])
# Beam search
print(generator.beam_search(inputs, top_p=0.9, num_samples=10, beam_size=2)[0][0])
```
#### Method-2: Loading the model with HuggingFace APIs
```
from transformers import T5Tokenizer, AutoModel
tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-xl")
model = AutoModel.from_pretrained("kalpeshk2011/rankgen-t5-xl-all", trust_remote_code=True)
```
### RankGenEncoder Implementation
```
import torch
import tqdm
from transformers import T5Tokenizer, T5EncoderModel, AutoModel

class RankGenEncoder():
    def __init__(self, model_path, max_batch_size=32, model_size=None, cache_dir=None):
        assert model_path in ["kalpeshk2011/rankgen-t5-xl-all", "kalpeshk2011/rankgen-t5-xl-pg19", "kalpeshk2011/rankgen-t5-base-all", "kalpeshk2011/rankgen-t5-large-all"]
        self.max_batch_size = max_batch_size
        self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
        if model_size is None:
            if "t5-large" in model_path or "t5_large" in model_path:
                self.model_size = "large"
            elif "t5-xl" in model_path or "t5_xl" in model_path:
                self.model_size = "xl"
            else:
                self.model_size = "base"
        else:
            self.model_size = model_size
        self.tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-{self.model_size}", cache_dir=cache_dir)
        self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
        self.model.to(self.device)
        self.model.eval()

    def encode(self, inputs, vectors_type="prefix", verbose=False, return_input_ids=False):
        tokenizer = self.tokenizer
        max_batch_size = self.max_batch_size
        if isinstance(inputs, str):
            inputs = [inputs]
        if vectors_type == 'prefix':
            inputs = ['pre ' + input for input in inputs]
            max_length = 512
        else:
            inputs = ['suffi ' + input for input in inputs]
            max_length = 128
        all_embeddings = []
        all_input_ids = []
        for i in tqdm.tqdm(range(0, len(inputs), max_batch_size), total=(len(inputs) // max_batch_size) + 1, disable=not verbose, desc=f"Encoding {vectors_type} inputs:"):
            tokenized_inputs = tokenizer(inputs[i:i + max_batch_size], return_tensors="pt", padding=True)
            for k, v in tokenized_inputs.items():
                tokenized_inputs[k] = v[:, :max_length]
            tokenized_inputs = tokenized_inputs.to(self.device)
            with torch.inference_mode():
                batch_embeddings = self.model(**tokenized_inputs)
            all_embeddings.append(batch_embeddings)
            if return_input_ids:
                all_input_ids.extend(tokenized_inputs.input_ids.cpu().tolist())
        return {
            "embeddings": torch.cat(all_embeddings, dim=0),
            "input_ids": all_input_ids
        }
``` |
jinwooChoi/SKKU_AP_SA_KES_trained1 | b391b29307b26ec0de938228b88820967620da5d | 2022-07-20T06:26:32.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_KES_trained1 | 14 | null | transformers | 10,034 | Entry not found |
muibk/mirrorbert_mbert_sent_parallel_en_de_ru_10k_mean | c9ff526de597be65bb7c4717cdf52451e62ae0d6 | 2022-07-20T14:06:12.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | muibk | null | muibk/mirrorbert_mbert_sent_parallel_en_de_ru_10k_mean | 14 | null | transformers | 10,035 | Entry not found |
epiphacc/csabstract-classification | b0192d30aad8d3ba3df2b67e4955dc9e1afbce5d | 2022-07-21T18:03:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | epiphacc | null | epiphacc/csabstract-classification | 14 | null | transformers | 10,036 | Entry not found |
nbroad/longt5-base-global-mediasum | c71ccbde9a5b0ef34772e84238708e278f000d23 | 2022-07-28T20:37:27.000Z | [
"pytorch",
"longt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"summarization",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | nbroad | null | nbroad/longt5-base-global-mediasum | 14 | null | transformers | 10,037 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
- longt5
- summarization
model-index:
- name: longt5-mediasum
results:
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 22.7044
verified: true
- name: ROUGE-2
type: rouge
value: 5.616
verified: true
- name: ROUGE-L
type: rouge
value: 18.0111
verified: true
- name: ROUGE-LSUM
type: rouge
value: 18.1554
verified: true
- name: loss
type: loss
value: 2.1656227111816406
verified: true
- name: gen_len
type: gen_len
value: 18.3527
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 21.1522
verified: true
- name: ROUGE-2
type: rouge
value: 8.1315
verified: true
- name: ROUGE-L
type: rouge
value: 16.6625
verified: true
- name: ROUGE-LSUM
type: rouge
value: 19.3603
verified: true
- name: loss
type: loss
value: 1.899269700050354
verified: true
- name: gen_len
type: gen_len
value: 17.853
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5-mediasum
This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0129
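A minimal summarization sketch via the summarization pipeline; the input document is a short arbitrary example (LongT5 is designed for much longer inputs), and the generation lengths are illustrative.
```python
from transformers import pipeline

# Summarize a news-style document with the fine-tuned LongT5 checkpoint.
summarizer = pipeline("summarization", model="nbroad/longt5-base-global-mediasum")
document = (
    "The city council met on Tuesday to discuss the new transit plan. "
    "Officials said the proposal would add three bus lines and extend service hours, "
    "with construction expected to begin next spring."
)
print(summarizer(document, max_length=48, min_length=8)[0]["summary_text"])
```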
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.66 | 1.0 | 1667 | 2.0643 |
| 2.472 | 2.0 | 3334 | 2.0241 |
| 2.3574 | 3.0 | 5001 | 2.0129 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jinwooChoi/SKKU_SA_KEB | daf29357f7383dd3764ea40e33ad88c01ca3743d | 2022-07-22T07:24:19.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_SA_KEB | 14 | null | transformers | 10,038 | Entry not found |
ai4bharat/IndicBERTv2-alpha-POS-tagging | 70fd45e6c0ee9b8f4365a0d281ec3e9ad23afa48 | 2022-07-27T11:23:14.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ai4bharat | null | ai4bharat/IndicBERTv2-alpha-POS-tagging | 14 | null | transformers | 10,039 | # IndicXLMv2-alpha-POS-tagging
|
clevrly/xlnet-base-mnli-fer-finetuned | 70b185f62256b786a8a780c42a735bc9d85e2d36 | 2022-07-27T00:59:10.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | clevrly | null | clevrly/xlnet-base-mnli-fer-finetuned | 14 | null | transformers | 10,040 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlnet-base-mnli-fer-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-mnli-fer-finetuned
This model is a fine-tuned version of [clevrly/xlnet-base-mnli-finetuned](https://huggingface.co/clevrly/xlnet-base-mnli-finetuned) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0152
- Accuracy: 0.7794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5828 | 1.0 | 2219 | 0.9689 | 0.7277 |
| 0.578 | 2.0 | 4438 | 1.1408 | 0.7310 |
| 0.5027 | 3.0 | 6657 | 0.9754 | 0.7742 |
| 0.4233 | 4.0 | 8876 | 1.0719 | 0.7751 |
| 0.3026 | 5.0 | 11095 | 1.0152 | 0.7794 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/vithederg | d3745c12b6f95af9d1ec3933b587ac96d256432d | 2022-07-26T06:11:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/vithederg | 14 | null | transformers | 10,041 | ---
language: en
thumbnail: http://www.huggingtweets.com/vithederg/1658815905698/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547564667320487937/0S_fp5iq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">vi✧ (#SaveWingsOfFire)</div>
<div style="text-align: center; font-size: 14px;">@vithederg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from vi✧ (#SaveWingsOfFire).
| Data | vi✧ (#SaveWingsOfFire) |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 2618 |
| Short tweets | 68 |
| Tweets kept | 531 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lq9tppb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vithederg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bwbzsrm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bwbzsrm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vithederg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dream | 13099e9bf10ab3f220de3cb4760abbc7aa659b6e | 2022-07-27T00:16:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/dream | 14 | null | transformers | 10,042 | ---
language: en
thumbnail: http://www.huggingtweets.com/dream/1658880860354/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1355511252177588225/INsKstf7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dream</div>
<div style="text-align: center; font-size: 14px;">@dream</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dream.
| Data | Dream |
| --- | --- |
| Tweets downloaded | 538 |
| Retweets | 13 |
| Short tweets | 133 |
| Tweets kept | 392 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3des4e9u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dream's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3a9qjxt0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3a9qjxt0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dream')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ICML2022/Tranception | 3bb199ad6722113742d595288c4d49b16ade8b13 | 2022-07-28T23:28:37.000Z | [
"pytorch",
"tranception",
"fill-mask",
"arxiv:2205.13760",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | ICML2022 | null | ICML2022/Tranception | 14 | 5 | transformers | 10,043 | # Tranception model
This Hugging Face Hub repo contains the model checkpoint for the Tranception model as described in our paper ["Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval"](https://arxiv.org/abs/2205.13760). The official GitHub repository can be accessed [here](https://github.com/OATML-Markslab/Tranception). This project is a joint collaboration between the [Marks lab](https://www.deboramarkslab.com/) and the [OATML group](https://oatml.cs.ox.ac.uk/).
## Abstract
The ability to accurately model the fitness landscape of protein sequences is critical to a wide range of applications, from quantifying the effects of human variants on disease likelihood, to predicting immune-escape mutations in viruses and designing novel biotherapeutic proteins. Deep generative models of protein sequences trained on multiple sequence alignments have been the most successful approaches so far to address these tasks. The performance of these methods is however contingent on the availability of sufficiently deep and diverse alignments for reliable training. Their potential scope is thus limited by the fact many protein families are hard, if not impossible, to align. Large language models trained on massive quantities of non-aligned protein sequences from diverse families address these problems and show potential to eventually bridge the performance gap. We introduce Tranception, a novel transformer architecture leveraging autoregressive predictions and retrieval of homologous sequences at inference to achieve state-of-the-art fitness prediction performance. Given its markedly higher performance on multiple mutants, robustness to shallow alignments and ability to score indels, our approach offers significant gain of scope over existing approaches. To enable more rigorous model testing across a broader range of protein families, we develop ProteinGym -- an extensive set of multiplexed assays of variant effects, substantially increasing both the number and diversity of assays compared to existing benchmarks.
## License
This project is available under the MIT license.
## Reference
If you use Tranception or other files provided through our GitHub repository, please cite the following paper:
```
Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML.
```
## Links
Pre-print: https://arxiv.org/abs/2205.13760
GitHub: https://github.com/OATML-Markslab/Tranception |
kdf/python-docstring-generation | e4808e7fe9ff5d5479bd71ebc95e36fc51127a4d | 2022-07-29T15:31:02.000Z | [
"pytorch",
"codegen",
"text-generation",
"transformers",
"license:apache-2.0"
]
| text-generation | false | kdf | null | kdf/python-docstring-generation | 14 | null | transformers | 10,044 | ---
license: apache-2.0
widget:
- text: "<|endoftext|>\ndef load_excel(path):\n return pd.read_excel(path)\n# docstring\n\"\"\""
---
## Basic info
Model based on [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono).
Fine-tuned with data from [codeparrot/github-code-clean](https://huggingface.co/datasets/codeparrot/github-code-clean).
The data was filtered to Python files.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_type = 'kdf/python-docstring-generation'
tokenizer = AutoTokenizer.from_pretrained(model_type)
model = AutoModelForCausalLM.from_pretrained(model_type)
inputs = tokenizer('''<|endoftext|>
def load_excel(path):
return pd.read_excel(path)
# docstring
"""''', return_tensors='pt')
doc_max_length = 128
generated_ids = model.generate(
**inputs,
max_length=inputs.input_ids.shape[1] + doc_max_length,
do_sample=False,
return_dict_in_generate=True,
num_return_sequences=1,
output_scores=True,
pad_token_id=50256,
eos_token_id=50256 # <|endoftext|>
)
ret = tokenizer.decode(generated_ids.sequences[0], skip_special_tokens=False)
print(ret)
```
## Prompt
You can give the model a style or a specific language, for example:
```python
inputs = tokenizer('''<|endoftext|>
def add(a, b):
return a + b
# docstring
"""
Calculate numbers add.
Args:
a: the first number to add
b: the second number to add
Return:
The result of a + b
"""
<|endoftext|>
def load_excel(path):
return pd.read_excel(path)
# docstring
"""''', return_tensors='pt')
doc_max_length = 128
generated_ids = model.generate(
**inputs,
max_length=inputs.input_ids.shape[1] + doc_max_length,
do_sample=False,
return_dict_in_generate=True,
num_return_sequences=1,
output_scores=True,
pad_token_id=50256,
eos_token_id=50256 # <|endoftext|>
)
ret = tokenizer.decode(generated_ids.sequences[0], skip_special_tokens=False)
print(ret)
inputs = tokenizer('''<|endoftext|>
def add(a, b):
return a + b
# docstring
"""
计算数字相加
Args:
a: 第一个加数
b: 第二个加数
Return:
相加的结果
"""
<|endoftext|>
def load_excel(path):
return pd.read_excel(path)
# docstring
"""''', return_tensors='pt')
doc_max_length = 128
generated_ids = model.generate(
**inputs,
max_length=inputs.input_ids.shape[1] + doc_max_length,
do_sample=False,
return_dict_in_generate=True,
num_return_sequences=1,
output_scores=True,
pad_token_id=50256,
eos_token_id=50256 # <|endoftext|>
)
ret = tokenizer.decode(generated_ids.sequences[0], skip_special_tokens=False)
print(ret)
``` |
kdf/javascript-docstring-generation | 0bee4471dc9eda593147aa35f159eb2eead7ec0c | 2022-07-29T15:32:50.000Z | [
"pytorch",
"codegen",
"text-generation",
"transformers",
"license:apache-2.0"
]
| text-generation | false | kdf | null | kdf/javascript-docstring-generation | 14 | null | transformers | 10,045 | ---
license: apache-2.0
widget:
- text: "<|endoftext|>\nfunction getDateAfterNDay(n){\n return moment().add(n, 'day')\n}\n// docstring\n/**"
---
## Basic info
Model based on [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono).
Fine-tuned with data from [codeparrot/github-code-clean](https://huggingface.co/datasets/codeparrot/github-code-clean).
The data was filtered to JavaScript and TypeScript files.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_type = 'kdf/javascript-docstring-generation'
tokenizer = AutoTokenizer.from_pretrained(model_type)
model = AutoModelForCausalLM.from_pretrained(model_type)
inputs = tokenizer('''<|endoftext|>
function getDateAfterNDay(n){
return moment().add(n, 'day')
}
// docstring
/**''', return_tensors='pt')
doc_max_length = 128
generated_ids = model.generate(
**inputs,
max_length=inputs.input_ids.shape[1] + doc_max_length,
do_sample=False,
return_dict_in_generate=True,
num_return_sequences=1,
output_scores=True,
pad_token_id=50256,
eos_token_id=50256 # <|endoftext|>
)
ret = tokenizer.decode(generated_ids.sequences[0], skip_special_tokens=False)
print(ret)
```
## Prompt
You can give the model a style or a specific language, for example:
```python
inputs = tokenizer('''<|endoftext|>
function add(a, b){
return a + b;
}
// docstring
/**
* Calculate number add.
* @param a {number} the first number to add
* @param b {number} the second number to add
* @return the result of a + b
*/
<|endoftext|>
function getDateAfterNDay(n){
return moment().add(n, 'day')
}
// docstring
/**''', return_tensors='pt')
doc_max_length = 128
generated_ids = model.generate(
**inputs,
max_length=inputs.input_ids.shape[1] + doc_max_length,
do_sample=False,
return_dict_in_generate=True,
num_return_sequences=1,
output_scores=True,
pad_token_id=50256,
eos_token_id=50256 # <|endoftext|>
)
ret = tokenizer.decode(generated_ids.sequences[0], skip_special_tokens=False)
print(ret)
inputs = tokenizer('''<|endoftext|>
function add(a, b){
return a + b;
}
// docstring
/**
* 计算数字相加
* @param a {number} 第一个加数
* @param b {number} 第二个加数
* @return 返回 a + b 的结果
*/
<|endoftext|>
function getDateAfterNDay(n){
return moment().add(n, 'day')
}
// docstring
/**''', return_tensors='pt')
doc_max_length = 128
generated_ids = model.generate(
**inputs,
max_length=inputs.input_ids.shape[1] + doc_max_length,
do_sample=False,
return_dict_in_generate=True,
num_return_sequences=1,
output_scores=True,
pad_token_id=50256,
eos_token_id=50256 # <|endoftext|>
)
ret = tokenizer.decode(generated_ids.sequences[0], skip_special_tokens=False)
print(ret)
``` |
platzi/platzi-distilroberta-base-mrpc-glue-omar-espejel | d7976029581a2186e6383928f927c9dc34dcd0a0 | 2022-07-29T21:57:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | platzi | null | platzi/platzi-distilroberta-base-mrpc-glue-omar-espejel | 14 | null | transformers | 10,046 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.","Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.", "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-omar-espejel
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
- name: F1
type: f1
value: 0.8861209964412811
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-omar-espejel
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6332
- Accuracy: 0.8431
- F1: 0.8861
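The card does not include inference code; below is a minimal, unofficial sketch for scoring an MRPC-style sentence pair with this checkpoint (the pair is taken from the widget examples above, and label `1` is assumed to follow the GLUE MRPC convention of "equivalent"):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "platzi/platzi-distilroberta-base-mrpc-glue-omar-espejel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the two sentences as a single pair, mirroring how MRPC examples are fed during fine-tuning
inputs = tokenizer(
    "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
    "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 1 is expected to mean "equivalent" under the GLUE MRPC labels
```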
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5076 | 1.09 | 500 | 0.7464 | 0.8137 | 0.8671 |
| 0.3443 | 2.18 | 1000 | 0.6332 | 0.8431 | 0.8861 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Frikallo/DeepDunk | 61e039b570969231c7349dac1e6c5eb192758797 | 2022-07-30T01:14:08.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | Frikallo | null | Frikallo/DeepDunk | 14 | null | transformers | 10,047 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: DeepDunk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeepDunk
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 1360794382
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Alireza1044/albert-base-v2-wnli | 2fe5cc30189491cd586095a635f22cd2b723317f | 2021-07-28T07:30:04.000Z | [
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | Alireza1044 | null | Alireza1044/albert-base-v2-wnli | 13 | null | transformers | 10,048 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model_index:
- name: wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metric:
name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wnli
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6898
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
Ateeb/FullEmotionDetector | 03545aef305f7fcbaacf874ace6e0c8fd4be6e17 | 2021-03-22T19:28:37.000Z | [
"pytorch",
"funnel",
"text-classification",
"transformers"
]
| text-classification | false | Ateeb | null | Ateeb/FullEmotionDetector | 13 | null | transformers | 10,049 | Entry not found |
BearThreat/distilbert-base-uncased-finetuned-cola | caada7af9e82bfefb7d6b6626d755a342d550be6 | 2021-09-29T14:58:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | BearThreat | null | BearThreat/distilbert-base-uncased-finetuned-cola | 13 | null | transformers | 10,050 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.533214904586951
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5774
- Matthews Correlation: 0.5332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2347 | 1.0 | 535 | 0.5774 | 0.5332 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
BigSalmon/InfillFormalLincoln | 8c711d417b2212e2e7f0332d1c1a0a4bac60e152 | 2022-02-02T03:45:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/InfillFormalLincoln | 13 | null | transformers | 10,051 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InfillFormalLincoln")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
````
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
``` |
BigSalmon/ParaphraseParentheses2.0 | cd37944ff8fae6b1639d2e53f4b9049762ab80b4 | 2021-12-13T00:11:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/ParaphraseParentheses2.0 | 13 | null | transformers | 10,052 | This can be used to paraphrase. I recommend using the code I have attached below. You can generate it without using LogProbs, but you are likely to be best served by manually examining the most likely outputs.
If this interests you, check out https://huggingface.co/BigSalmon/MrLincoln12 or my other MrLincoln repos.
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/ParaphraseParentheses2.0")
```
Example Prompt:
```
the nba is [mask] [mask] viewership.
the nba is ( facing / witnessing / confronted with / suffering from / grappling with ) ( lost / tanking ) viewership...
ai is certain to [mask] the third industrial revolution.
ai is certain to ( breed / catalyze / inaugurate / catalyze / usher in / call forth / turn loose / lend its name to ) the third industrial revolution.
the modern-day knicks are a disgrace to [mask].
the modern-day knicks are a disgrace to the franchise's ( rich legacy / tradition of excellence / uniquely distinguished record ).
HuggingFace is [mask].
HuggingFace is ( an amazing company /
```
```
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

prompt = "Insert Your Prompt Here. It is Best To Have a Few Examples Before Like The Example Prompt Shows."
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput.to(device)

# Forward pass; keep only the logits for the last position
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)

# Inspect the 500 most likely next tokens and their probabilities
best_logits, best_indices = logits.topk(500)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()

for i in range(500):
    word = str([best_words[i]]).replace("[' ", "").replace("']", "")
    print(word)
``` |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | 4ead6e9f26ffd1ef9c9ff7daff8dd48fede0ba81 | 2021-10-18T09:44:42.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | 13 | null | transformers | 10,053 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
---
# CAMeLBERT-Mix POS-MSA Model
## Model description
**CAMeLBERT-Mix POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999592, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9997877, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.9998405, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.9697179, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.99967164, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99980617, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99997973, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.99995637, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.9983974, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.9999469, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.9993273, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Cameron/BERT-SBIC-offensive | a8080ee750bbe0a0246e44594afbe1e6c9f9aa11 | 2021-05-18T17:22:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Cameron | null | Cameron/BERT-SBIC-offensive | 13 | null | transformers | 10,054 | Entry not found |
CoffeeAddict93/gpt1-modest-proposal | 63b93d3ded02c3ee8a6dc45e1a67189ac05d824e | 2021-12-02T03:48:33.000Z | [
"pytorch",
"openai-gpt",
"text-generation",
"transformers"
]
| text-generation | false | CoffeeAddict93 | null | CoffeeAddict93/gpt1-modest-proposal | 13 | null | transformers | 10,055 | Entry not found |
DHBaek/xlm-roberta-large-korquad-mask | fd2a17228c5b4ab1bf8c88115140a3e5d6ead783 | 2021-05-15T05:07:50.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | DHBaek | null | DHBaek/xlm-roberta-large-korquad-mask | 13 | null | transformers | 10,056 | Entry not found |
Deniskin/emailer_medium_300 | 3b50dc685262f04920ce7aa3a670e1a1fcb7a2b3 | 2021-06-12T14:29:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Deniskin | null | Deniskin/emailer_medium_300 | 13 | null | transformers | 10,057 | Entry not found |
Doogie/waynehills_sentimental_kor | 2247c1c5df961856a054d4f394b5211b422c53f6 | 2022-01-28T01:09:53.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | Doogie | null | Doogie/waynehills_sentimental_kor | 13 | null | transformers | 10,058 | Entry not found |
Emran/ClinicalBERT_ICD10_Categories | 24f75dbec9273d92058f6098b73be22c559c7ef1 | 2021-10-12T17:42:10.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | Emran | null | Emran/ClinicalBERT_ICD10_Categories | 13 | 1 | transformers | 10,059 | Entry not found |
EthanChen0418/intent_cls | cf085cfcd8d7f475287b0f4fa87387e13aeaaa81 | 2021-08-30T04:42:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | EthanChen0418 | null | EthanChen0418/intent_cls | 13 | null | transformers | 10,060 | Entry not found |
FabianGroeger/HotelBERT | 61f744730fe33d1c54b2c8d836766715db0da41f | 2021-11-18T05:56:08.000Z | [
"pytorch",
"tf",
"roberta",
"fill-mask",
"de",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | FabianGroeger | null | FabianGroeger/HotelBERT | 13 | 1 | transformers | 10,061 | ---
language: de
widget:
- text: "Das <mask> hat sich toll um uns gekümmert."
---
# HotelBERT
This model was trained on reviews from a well-known German hotel platform.
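A minimal usage sketch (not part of the original card), reusing the widget sentence above with the fill-mask pipeline:

```python
from transformers import pipeline

# Sketch: ask HotelBERT to fill the masked token in a German hotel-review sentence
unmasker = pipeline("fill-mask", model="FabianGroeger/HotelBERT")
for prediction in unmasker("Das <mask> hat sich toll um uns gekümmert."):
    print(prediction["token_str"], round(prediction["score"], 3))
```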
|
GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl | 2e5d388b8a477fed576d3013d2bc11459ca1f8cc | 2022-04-19T15:29:33.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | GPL | null | GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl | 13 | null | sentence-transformers | 10,062 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl')
model = AutoModel.from_pretrained('GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=GPL/bioasq-1m-tsdae-msmarco-distilbert-gpl)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Hamas/DialoGPT-large-jake2 | fb96a0fcc9da2cc38daf816b7e32a27c4fa325d0 | 2021-09-26T18:13:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Hamas | null | Hamas/DialoGPT-large-jake2 | 13 | null | transformers | 10,063 | ---
tags:
- conversational
---
# Jake DialoGPT-large-jake2
|
Harveenchadha/vakyansh-wav2vec2-dogri-doi-55 | 13dc75c4896c1189e1b02862dbf3fb4cb25a52f0 | 2021-12-17T17:48:06.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-dogri-doi-55 | 13 | null | transformers | 10,064 | Entry not found |
Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 | e8c7a0f23c42a690f059ee43e2c8957e520646eb | 2021-12-17T18:00:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 | 13 | null | transformers | 10,065 | Entry not found |
Hate-speech-CNERG/deoffxlmr-mono-tamil | 4b04b5606198915dabeacda90056d4d8f521a583 | 2021-09-25T13:59:19.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"ta",
"transformers",
"license:apache-2.0"
]
| text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/deoffxlmr-mono-tamil | 13 | null | transformers | 10,066 | ---
language: ta
license: apache-2.0
---
This model is used to detect **Offensive Content** in **Tamil Code-Mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Tamil (pure and code-mixed) data. The weights are initialized from pretrained XLM-RoBERTa-Base and further pretrained with Masked Language Modelling on the target dataset before fine-tuning with a Cross-Entropy Loss.
This model is the best of multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-algorithm-based ensembling of test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model 0.76, ensemble 0.78).
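As an illustration only (not from the original card), the checkpoint loads as a standard sequence-classification model; the input below is an arbitrary code-mixed comment, and the label strings come from the model's own `id2label` config:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "Hate-speech-CNERG/deoffxlmr-mono-tamil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Sketch: classify a code-mixed Tamil comment; labels are whatever the model config defines
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("inda padam romba nalla irukku"))
```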
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~ |
Helsinki-NLP/opus-mt-af-nl | ec9feaedceda132fa6dfb372e13e54ae3b44c6db | 2021-01-18T07:46:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-af-nl | 13 | null | transformers | 10,067 | ---
language:
- af
- nl
tags:
- translation
license: apache-2.0
---
### afr-nld
* source group: Afrikaans
* target group: Dutch
* OPUS readme: [afr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.eval.txt)
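The card ships no inference snippet; a minimal, unofficial sketch with a recent `transformers` translation pipeline would look like this:

```python
from transformers import pipeline

# Sketch: Afrikaans -> Dutch translation with this OPUS-MT checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-af-nl")
print(translator("Ek is lief vir jou.")[0]["translation_text"])
```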
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.nld | 55.2 | 0.715 |
### System Info:
- hf_name: afr-nld
- source_languages: afr
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'nl']
- src_constituents: {'afr'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: nld
- short_pair: af-nl
- chrF2_score: 0.715
- bleu: 55.2
- brevity_penalty: 0.995
- ref_len: 6710.0
- src_name: Afrikaans
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: nl
- prefer_old: False
- long_pair: afr-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-eo | 9d06171cc502a468fe1c1e0cba28c3f7e5999345 | 2021-01-18T07:47:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-eo | 13 | null | transformers | 10,068 | ---
language:
- ar
- eo
tags:
- translation
license: apache-2.0
---
### ara-epo
* source group: Arabic
* target group: Esperanto
* OPUS readme: [ara-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md)
* model: transformer-align
* source language(s): apc apc_Latn ara arq arq_Latn arz
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.epo | 18.9 | 0.376 |
### System Info:
- hf_name: ara-epo
- source_languages: ara
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'eo']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt
- src_alpha3: ara
- tgt_alpha3: epo
- short_pair: ar-eo
- chrF2_score: 0.376
- bleu: 18.9
- brevity_penalty: 0.948
- ref_len: 4506.0
- src_name: Arabic
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ar
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ara-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bg-es | 59a05c4cc9958d8c77a35770747179042ddaf866 | 2021-01-18T07:50:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bg-es | 13 | null | transformers | 10,069 | ---
language:
- bg
- es
tags:
- translation
license: apache-2.0
---
### bul-spa
* source group: Bulgarian
* target group: Spanish
* OPUS readme: [bul-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md)
* model: transformer
* source language(s): bul
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.spa | 49.1 | 0.661 |
### System Info:
- hf_name: bul-spa
- source_languages: bul
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'es']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: spa
- short_pair: bg-es
- chrF2_score: 0.6609999999999999
- bleu: 49.1
- brevity_penalty: 0.992
- ref_len: 1783.0
- src_name: Bulgarian
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: es
- prefer_old: False
- long_pair: bul-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-cpf-en | a472a7fb0ba82e12b9eeb894208d35cfcbdf2a5c | 2021-01-18T07:54:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ht",
"cpf",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cpf-en | 13 | null | transformers | 10,070 | ---
language:
- ht
- cpf
- en
tags:
- translation
license: apache-2.0
---
### cpf-eng
* source group: Creoles and pidgins, French‑based
* target group: English
* OPUS readme: [cpf-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpf-eng/README.md)
* model: transformer
* source language(s): gcf_Latn hat mfe
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.gcf-eng.gcf.eng | 8.4 | 0.229 |
| Tatoeba-test.hat-eng.hat.eng | 28.0 | 0.421 |
| Tatoeba-test.mfe-eng.mfe.eng | 66.0 | 0.808 |
| Tatoeba-test.multi.eng | 16.3 | 0.323 |
### System Info:
- hf_name: cpf-eng
- source_languages: cpf
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpf-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ht', 'cpf', 'en']
- src_constituents: {'gcf_Latn', 'hat', 'mfe'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cpf
- tgt_alpha3: eng
- short_pair: cpf-en
- chrF2_score: 0.32299999999999995
- bleu: 16.3
- brevity_penalty: 1.0
- ref_len: 990.0
- src_name: Creoles and pidgins, French‑based
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cpf
- tgt_alpha2: en
- prefer_old: False
- long_pair: cpf-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-cs-fi | d60a357cfb2c4d1df38b43f2fafe34dbff0199cf | 2021-09-09T21:29:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"cs",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cs-fi | 13 | null | transformers | 10,071 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-cs-fi
* source languages: cs
* target languages: fi
* OPUS readme: [cs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/cs-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/cs-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.cs.fi | 25.5 | 0.523 |
|
Helsinki-NLP/opus-mt-cus-en | 31379ae0b18a3b54e22354f6b183c90b4669133c | 2021-01-18T07:56:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"so",
"cus",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cus-en | 13 | null | transformers | 10,072 | ---
language:
- so
- cus
- en
tags:
- translation
license: apache-2.0
---
### cus-eng
* source group: Cushitic languages
* target group: English
* OPUS readme: [cus-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cus-eng/README.md)
* model: transformer
* source language(s): som
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cus-eng/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cus-eng/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cus-eng/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.multi.eng | 16.2 | 0.303 |
| Tatoeba-test.som-eng.som.eng | 16.2 | 0.303 |
### System Info:
- hf_name: cus-eng
- source_languages: cus
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cus-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['so', 'cus', 'en']
- src_constituents: {'som'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cus-eng/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cus-eng/opus-2020-07-26.test.txt
- src_alpha3: cus
- tgt_alpha3: eng
- short_pair: cus-en
- chrF2_score: 0.303
- bleu: 16.2
- brevity_penalty: 1.0
- ref_len: 3.0
- src_name: Cushitic languages
- tgt_name: English
- train_date: 2020-07-26
- src_alpha2: cus
- tgt_alpha2: en
- prefer_old: False
- long_pair: cus-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-aav | 1c5e89073685e22bb10281228f0ed055d2d219f0 | 2021-01-18T08:04:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"km",
"aav",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-aav | 13 | null | transformers | 10,073 | ---
language:
- en
- vi
- km
- aav
tags:
- translation
license: apache-2.0
---
### eng-aav
* source group: English
* target group: Austro-Asiatic languages
* OPUS readme: [eng-aav](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aav/README.md)
* model: transformer
* source language(s): eng
* target language(s): hoc hoc_Latn kha khm khm_Latn mnw vie vie_Hani
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-hoc.eng.hoc | 0.1 | 0.033 |
| Tatoeba-test.eng-kha.eng.kha | 0.4 | 0.043 |
| Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.242 |
| Tatoeba-test.eng-mnw.eng.mnw | 0.8 | 0.003 |
| Tatoeba-test.eng.multi | 16.1 | 0.311 |
| Tatoeba-test.eng-vie.eng.vie | 33.2 | 0.508 |
### System Info:
- hf_name: eng-aav
- source_languages: eng
- target_languages: aav
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aav/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi', 'km', 'aav']
- src_constituents: {'eng'}
- tgt_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie_Hani', 'khm_Latn', 'hoc_Latn', 'hoc'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: aav
- short_pair: en-aav
- chrF2_score: 0.311
- bleu: 16.1
- brevity_penalty: 1.0
- ref_len: 38261.0
- src_name: English
- tgt_name: Austro-Asiatic languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: aav
- prefer_old: False
- long_pair: eng-aav
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-ee | 45d6ef20f2aac6de3ad001d7452ff5243f25f219 | 2021-09-09T21:34:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ee",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ee | 13 | null | transformers | 10,074 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ee
* source languages: en
* target languages: ee
* OPUS readme: [en-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ee | 38.2 | 0.591 |
| Tatoeba.en.ee | 6.0 | 0.347 |
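
For quick experiments the model can also be driven through the `transformers` translation pipeline rather than the raw Marian classes. A minimal sketch, assuming `transformers` and `sentencepiece` are installed; the printed structure is indicative only:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ee")
result = translator("How are you today?", max_length=64)
print(result)  # a list of dicts with a 'translation_text' field
```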
|
Helsinki-NLP/opus-mt-en-hil | 267ef597c368d371a08059d5f61533751560e551 | 2021-09-09T21:35:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"hil",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-hil | 13 | null | transformers | 10,075 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-hil
* source languages: en
* target languages: hil
* OPUS readme: [en-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.hil | 49.4 | 0.696 |
|
Helsinki-NLP/opus-mt-en-umb | c34d42274e13a5f29db4c522b096d9df266519b6 | 2021-09-09T21:40:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"umb",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-umb | 13 | null | transformers | 10,076 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-umb
* source languages: en
* target languages: umb
* OPUS readme: [en-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-umb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.umb | 28.6 | 0.510 |
|
Helsinki-NLP/opus-mt-es-ln | e64975ad87ce6d72dcbd26717511a9874dc32e2d | 2021-09-09T21:43:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ln",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ln | 13 | null | transformers | 10,077 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-ln
* source languages: es
* target languages: ln
* OPUS readme: [es-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ln | 27.1 | 0.508 |
|
Helsinki-NLP/opus-mt-es-no | c58e9d5877681252347b252228ebbf817db5c74f | 2021-01-18T08:27:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"no",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-no | 13 | null | transformers | 10,078 | ---
language:
- es
- no
tags:
- translation
license: apache-2.0
---
### spa-nor
* source group: Spanish
* target group: Norwegian
* OPUS readme: [spa-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-nor/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.eval.txt)
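
Since the target side covers both Bokmål (`nob`) and Nynorsk (`nno`), the required sentence-initial token picks the variant. A minimal sketch, assuming the standard MarianMT interface in `transformers`; the Spanish input is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-es-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = [">>nob<< ¿Dónde está la estación de tren?"]  # >>nob<< = Bokmål, >>nno<< = Nynorsk
batch = tokenizer(src, return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```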
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.nor | 36.7 | 0.565 |
### System Info:
- hf_name: spa-nor
- source_languages: spa
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'no']
- src_constituents: {'spa'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: nor
- short_pair: es-no
- chrF2_score: 0.565
- bleu: 36.7
- brevity_penalty: 0.99
- ref_len: 7217.0
- src_name: Spanish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: no
- prefer_old: False
- long_pair: spa-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-pag | de0ad68d5ca5892045aa81a664de3149cc31672d | 2021-09-09T21:44:02.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"pag",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-pag | 13 | null | transformers | 10,079 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-pag
* source languages: es
* target languages: pag
* OPUS readme: [es-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pag/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pag/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pag/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.pag | 25.3 | 0.478 |
|
Helsinki-NLP/opus-mt-es-tll | 706bc35a735161e30c903a0ae8371d0230a97877 | 2021-09-09T21:45:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"tll",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-tll | 13 | null | transformers | 10,080 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-tll
* source languages: es
* target languages: tll
* OPUS readme: [es-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tll | 20.7 | 0.434 |
|
Helsinki-NLP/opus-mt-fi-to | 59f84c6b714b2a3feffccf04ae9216301cbdd523 | 2021-09-09T21:51:28.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"to",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-to | 13 | null | transformers | 10,081 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-to
* source languages: fi
* target languages: to
* OPUS readme: [fi-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.to | 38.3 | 0.541 |
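
The original Marian weights linked above can be fetched and unpacked directly when the OPUS-MT artefacts are needed outside `transformers`. A minimal sketch, assuming the third-party `requests` package is available; the output directory name is arbitrary:

```python
import io
import zipfile

import requests

# URL taken from the "download original weights" link above.
url = "https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.zip"
resp = requests.get(url, timeout=120)
resp.raise_for_status()

with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
    zf.extractall("opus-mt-fi-to-original")
    print(zf.namelist())  # list the files shipped in the archive
```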
|
Helsinki-NLP/opus-mt-fi-toi | dc71164e23f72608e63af9a027fc788f902f0885 | 2021-09-09T21:51:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"toi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-toi | 13 | null | transformers | 10,082 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-toi
* source languages: fi
* target languages: toi
* OPUS readme: [fi-toi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-toi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-toi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-toi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-toi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.toi | 22.0 | 0.509 |
|
Helsinki-NLP/opus-mt-fi-tvl | 031941e69bf076ce666e87403a53d4c28899b232 | 2021-09-09T21:51:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"tvl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-tvl | 13 | null | transformers | 10,083 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-tvl
* source languages: fi
* target languages: tvl
* OPUS readme: [fi-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tvl/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tvl/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tvl/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.tvl | 33.6 | 0.517 |
|
Helsinki-NLP/opus-mt-fr-ht | b7df99936fcdb6848df4b718933a50abfac0295b | 2021-09-09T21:54:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ht",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ht | 13 | null | transformers | 10,084 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-ht
* source languages: fr
* target languages: ht
* OPUS readme: [fr-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ht | 29.2 | 0.461 |
|
Helsinki-NLP/opus-mt-fr-lg | 153cf928e257dd95fd604787d758f20b917ebc48 | 2021-09-09T21:55:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"lg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-lg | 13 | null | transformers | 10,085 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-lg
* source languages: fr
* target languages: lg
* OPUS readme: [fr-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lg | 21.7 | 0.454 |
|
Helsinki-NLP/opus-mt-gaa-es | 8b2b0fe504ac4fe48f6811d90f259c9c4c5bcf63 | 2021-09-09T21:58:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gaa",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gaa-es | 13 | null | transformers | 10,086 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-gaa-es
* source languages: gaa
* target languages: es
* OPUS readme: [gaa-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.es | 28.6 | 0.463 |
|
Helsinki-NLP/opus-mt-gmq-gmq | 8290f4f182703169c2763f991d79615d5f561507 | 2021-01-18T08:52:55.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"nb",
"sv",
"is",
"nn",
"fo",
"gmq",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gmq-gmq | 13 | null | transformers | 10,087 | ---
language:
- da
- nb
- sv
- is
- nn
- fo
- gmq
tags:
- translation
license: apache-2.0
---
### gmq-gmq
* source group: North Germanic languages
* target group: North Germanic languages
* OPUS readme: [gmq-gmq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-gmq/README.md)
* model: transformer
* source language(s): dan fao isl nno nob swe
* target language(s): dan fao isl nno nob swe
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.eval.txt)
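
All source and target languages here are North Germanic, so the same checkpoint can translate between any of the listed pairs; the sentence-initial token picks the output language. A minimal sketch, assuming the standard MarianMT interface in `transformers`; the Danish input is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gmq-gmq"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = [">>isl<< Jeg taler kun lidt dansk."]  # Danish in, Icelandic (isl) out
batch = tokenizer(src, return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```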
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan-fao.dan.fao | 8.1 | 0.173 |
| Tatoeba-test.dan-isl.dan.isl | 52.5 | 0.827 |
| Tatoeba-test.dan-nor.dan.nor | 62.8 | 0.772 |
| Tatoeba-test.dan-swe.dan.swe | 67.6 | 0.802 |
| Tatoeba-test.fao-dan.fao.dan | 11.3 | 0.306 |
| Tatoeba-test.fao-isl.fao.isl | 26.3 | 0.359 |
| Tatoeba-test.fao-nor.fao.nor | 36.8 | 0.531 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 0.632 |
| Tatoeba-test.isl-dan.isl.dan | 67.0 | 0.739 |
| Tatoeba-test.isl-fao.isl.fao | 14.5 | 0.243 |
| Tatoeba-test.isl-nor.isl.nor | 51.8 | 0.674 |
| Tatoeba-test.isl-swe.isl.swe | 100.0 | 1.000 |
| Tatoeba-test.multi.multi | 64.7 | 0.782 |
| Tatoeba-test.nor-dan.nor.dan | 65.6 | 0.797 |
| Tatoeba-test.nor-fao.nor.fao | 9.4 | 0.362 |
| Tatoeba-test.nor-isl.nor.isl | 38.8 | 0.587 |
| Tatoeba-test.nor-nor.nor.nor | 51.9 | 0.721 |
| Tatoeba-test.nor-swe.nor.swe | 66.5 | 0.789 |
| Tatoeba-test.swe-dan.swe.dan | 67.6 | 0.802 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.268 |
| Tatoeba-test.swe-isl.swe.isl | 65.8 | 0.914 |
| Tatoeba-test.swe-nor.swe.nor | 60.6 | 0.755 |
### System Info:
- hf_name: gmq-gmq
- source_languages: gmq
- target_languages: gmq
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-gmq/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']
- src_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- tgt_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.test.txt
- src_alpha3: gmq
- tgt_alpha3: gmq
- short_pair: gmq-gmq
- chrF2_score: 0.782
- bleu: 64.7
- brevity_penalty: 0.9940000000000001
- ref_len: 49385.0
- src_name: North Germanic languages
- tgt_name: North Germanic languages
- train_date: 2020-07-27
- src_alpha2: gmq
- tgt_alpha2: gmq
- prefer_old: False
- long_pair: gmq-gmq
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-he-ar | dddf2b012cce6cf01d81bb4329599dd5803988b7 | 2021-01-18T08:54:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"ar",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-he-ar | 13 | null | transformers | 10,088 | ---
language:
- he
- ar
tags:
- translation
license: apache-2.0
---
### heb-ara
* source group: Hebrew
* target group: Arabic
* OPUS readme: [heb-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ara/README.md)
* model: transformer
* source language(s): heb
* target language(s): apc apc_Latn ara arq arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.eval.txt)
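
The target side distinguishes the generic Arabic code (`ara`) from the dialectal variants listed above, so the sentence-initial token is required. A minimal sketch, assuming the standard MarianMT interface in `transformers`; the Hebrew input is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-he-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = [">>ara<< שלום, מה שלומך?"]  # >>ara<< selects the generic Arabic target
batch = tokenizer(src, return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```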
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ara | 23.6 | 0.532 |
### System Info:
- hf_name: heb-ara
- source_languages: heb
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'ar']
- src_constituents: {'heb'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.test.txt
- src_alpha3: heb
- tgt_alpha3: ara
- short_pair: he-ar
- chrF2_score: 0.532
- bleu: 23.6
- brevity_penalty: 0.9259999999999999
- ref_len: 6372.0
- src_name: Hebrew
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: he
- tgt_alpha2: ar
- prefer_old: False
- long_pair: heb-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-hy-ru | 69acb082c08e3b4bd067fb334ca0298bf770c6bd | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hy",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hy-ru | 13 | null | transformers | 10,089 | ---
language:
- hy
- ru
tags:
- translation
license: apache-2.0
---
### hye-rus
* source group: Armenian
* target group: Russian
* OPUS readme: [hye-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hye-rus/README.md)
* model: transformer-align
* source language(s): hye hye_Latn
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hye.rus | 25.6 | 0.476 |
### System Info:
- hf_name: hye-rus
- source_languages: hye
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hye-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hy', 'ru']
- src_constituents: {'hye', 'hye_Latn'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.test.txt
- src_alpha3: hye
- tgt_alpha3: rus
- short_pair: hy-ru
- chrF2_score: 0.47600000000000003
- bleu: 25.6
- brevity_penalty: 0.929
- ref_len: 1624.0
- src_name: Armenian
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: hy
- tgt_alpha2: ru
- prefer_old: False
- long_pair: hye-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-is-fr | 75102dcfbdfe7ce7b69153515d5b15af4a938f96 | 2021-09-09T22:12:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"is",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-is-fr | 13 | null | transformers | 10,090 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-is-fr
* source languages: is
* target languages: fr
* OPUS readme: [is-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.is.fr | 25.0 | 0.437 |
|
Helsinki-NLP/opus-mt-lv-es | ebccaa985f4d6e974e98021b8ac8fbff6c8a9e75 | 2021-09-10T13:57:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lv",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lv-es | 13 | null | transformers | 10,091 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lv-es
* source languages: lv
* target languages: es
* OPUS readme: [lv-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.es | 21.7 | 0.433 |
|
Helsinki-NLP/opus-mt-mk-es | 1614815ad3cc7e02eaa3db8fe46f227a886ec21e | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"mk",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-mk-es | 13 | null | transformers | 10,092 | ---
language:
- mk
- es
tags:
- translation
license: apache-2.0
---
### mkd-spa
* source group: Macedonian
* target group: Spanish
* OPUS readme: [mkd-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkd-spa/README.md)
* model: transformer-align
* source language(s): mkd
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.mkd.spa | 56.5 | 0.717 |
### System Info:
- hf_name: mkd-spa
- source_languages: mkd
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkd-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mk', 'es']
- src_constituents: {'mkd'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.test.txt
- src_alpha3: mkd
- tgt_alpha3: spa
- short_pair: mk-es
- chrF2_score: 0.7170000000000001
- bleu: 56.5
- brevity_penalty: 0.997
- ref_len: 1121.0
- src_name: Macedonian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: mk
- tgt_alpha2: es
- prefer_old: False
- long_pair: mkd-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-nl-sv | 47ba429d23cbfba9ddb30052519c9b73e5f5aa6d | 2021-09-10T13:59:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nl",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nl-sv | 13 | null | transformers | 10,093 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-sv
* source languages: nl
* target languages: sv
* OPUS readme: [nl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.nl.sv | 25.0 | 0.518 |
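
Translation can be batched by passing several sentences to the tokenizer at once with padding enabled. A minimal sketch, assuming the standard MarianMT interface in `transformers`; the Dutch inputs are illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-nl-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = [
    "Goedemorgen, hoe gaat het met je?",
    "Het weer is vandaag erg mooi.",
]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
out = model.generate(**batch)
for translation in tokenizer.batch_decode(out, skip_special_tokens=True):
    print(translation)
```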
|
Helsinki-NLP/opus-mt-pis-fi | 260ae44f3957e5228146e485c58e70feda2fe86b | 2021-09-10T14:00:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pis",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pis-fi | 13 | null | transformers | 10,094 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pis-fi
* source languages: pis
* target languages: fi
* OPUS readme: [pis-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.fi | 21.8 | 0.439 |
|
Helsinki-NLP/opus-mt-ru-et | 02c1f06b0d8f9a327d7a6963f2939a4d616b0090 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"et",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-et | 13 | null | transformers | 10,095 | ---
language:
- ru
- et
tags:
- translation
license: apache-2.0
---
### rus-est
* source group: Russian
* target group: Estonian
* OPUS readme: [rus-est](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-est/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): est
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.est | 57.5 | 0.749 |
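
Decoding options such as beam size and maximum length are passed straight to `generate()`. A minimal sketch, assuming the standard MarianMT interface in `transformers`; the Russian input and the decoding settings are illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-et"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = ["Я хотел бы забронировать столик на двоих."]
batch = tokenizer(src, return_tensors="pt", padding=True)
out = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```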
### System Info:
- hf_name: rus-est
- source_languages: rus
- target_languages: est
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-est/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'et']
- src_constituents: {'rus'}
- tgt_constituents: {'est'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: est
- short_pair: ru-et
- chrF2_score: 0.7490000000000001
- bleu: 57.5
- brevity_penalty: 0.975
- ref_len: 3572.0
- src_name: Russian
- tgt_name: Estonian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: et
- prefer_old: False
- long_pair: rus-est
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ru-fi | 7fe0f6871ae51da015981ee6db8a771f24aaf124 | 2021-09-10T14:02:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-fi | 13 | null | transformers | 10,096 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ru-fi
* source languages: ru
* target languages: fi
* OPUS readme: [ru-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ru.fi | 40.1 | 0.646 |
|
Helsinki-NLP/opus-mt-ru-lt | d54836cc4d595f8f927cf75220f9035deb89ffb7 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"lt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-lt | 13 | null | transformers | 10,097 | ---
language:
- ru
- lt
tags:
- translation
license: apache-2.0
---
### rus-lit
* source group: Russian
* target group: Lithuanian
* OPUS readme: [rus-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.lit | 43.5 | 0.675 |
### System Info:
- hf_name: rus-lit
- source_languages: rus
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'lt']
- src_constituents: {'rus'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: lit
- short_pair: ru-lt
- chrF2_score: 0.675
- bleu: 43.5
- brevity_penalty: 0.937
- ref_len: 14406.0
- src_name: Russian
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: lt
- prefer_old: False
- long_pair: rus-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sg-fi | 25517cc876663487dbc3973b357ff08cb6bbef69 | 2021-09-10T14:03:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sg",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sg-fi | 13 | null | transformers | 10,098 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sg-fi
* source languages: sg
* target languages: fi
* OPUS readme: [sg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.fi | 22.7 | 0.438 |
|
Helsinki-NLP/opus-mt-sv-ro | fdf7f7eddc33cb6a58a6a56631f0d4dfca3686b1 | 2021-09-10T14:08:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"ro",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-ro | 13 | null | transformers | 10,099 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ro
* source languages: sv
* target languages: ro
* OPUS readme: [sv-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ro | 29.5 | 0.510 |
|