modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
KoichiYasuoka/deberta-large-japanese-wikipedia-ud-head | dc69690b1b09df5edc6ff779f06b069239ecb1a3 | 2022-07-23T14:44:12.000Z | [
"pytorch",
"deberta-v2",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| question-answering | false | KoichiYasuoka | null | KoichiYasuoka/deberta-large-japanese-wikipedia-ud-head | 16 | null | transformers | 9,400 | ---
language:
- "ja"
tags:
- "japanese"
- "wikipedia"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# deberta-large-japanese-wikipedia-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on Japanese Wikipedia and Aozora Bunko (青空文庫) texts for dependency parsing (head detection on long unit words) cast as question answering, derived from [deberta-large-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-wikipedia) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Put [MASK] inside `context` to disambiguate which occurrence is meant when the word given as `question` appears more than once.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-large-japanese-wikipedia-ud-head")
question="国語"
context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs=tokenizer(question,context,return_tensors="pt",return_offsets_mapping=True)
offsets=inputs.pop("offset_mapping").tolist()[0]
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/deberta-large-japanese-wikipedia-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
## Reference
安岡孝一 (Koichi Yasuoka): [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409) (NINJAL long-unit-word dependency parsing with an Aozora Bunko DeBERTa model), 東洋学へのコンピュータ利用 (Computers in East Asian Studies), 35th Research Seminar (July 2022), pp. 29-43.
|
ArneD/xlm-roberta-base-finetuned-panx-de | 3bd67234019724b0b9ba066e85fa83233e793041 | 2022-07-06T07:23:24.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ArneD | null | ArneD/xlm-roberta-base-finetuned-panx-de | 16 | null | transformers | 9,401 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
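The card itself does not include inference code; the following is a minimal sketch, not part of the original card, assuming the standard `transformers` pipeline API. The German example sentence is illustrative only.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArneD/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
# PAN-X.de uses PER/ORG/LOC entity types; the sentence below is illustrative
print(ner("Angela Merkel besuchte das Volkswagen-Werk in Wolfsburg."))
```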
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
bhadresh-savani/distilbert-base-uncased-finetuned-emotion | 11350faca8e85c4861766cec4c30dec55fd06bb9 | 2022-07-14T06:59:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | bhadresh-savani | null | bhadresh-savani/distilbert-base-uncased-finetuned-emotion | 16 | null | transformers | 9,402 | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9365
- name: F1
type: f1
value: 0.9371
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.923
verified: true
- name: Precision Macro
type: precision
value: 0.8676576686813523
verified: true
- name: Precision Micro
type: precision
value: 0.923
verified: true
- name: Precision Weighted
type: precision
value: 0.9268406401714973
verified: true
- name: Recall Macro
type: recall
value: 0.8945488803260702
verified: true
- name: Recall Micro
type: recall
value: 0.923
verified: true
- name: Recall Weighted
type: recall
value: 0.923
verified: true
- name: F1 Macro
type: f1
value: 0.8798961895301041
verified: true
- name: F1 Micro
type: f1
value: 0.923
verified: true
- name: F1 Weighted
type: f1
value: 0.9241278880972197
verified: true
- name: loss
type: loss
value: 0.24626904726028442
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1995
- Accuracy: 0.9365
- F1: 0.9371
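As a quick usage reference (not part of the original card), here is a minimal sketch assuming the standard `transformers` pipeline API; the input sentence is illustrative and the returned label names follow the model's `id2label` mapping.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-finetuned-emotion",
)
# illustrative input; the model returns an emotion label with a confidence score
print(classifier("I am so happy that this finally works!"))
```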
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.475 | 1.0 | 503 | 0.2171 | 0.928 | 0.9292 |
| 0.1235 | 2.0 | 1006 | 0.1764 | 0.9365 | 0.9372 |
| 0.0802 | 3.0 | 1509 | 0.1788 | 0.938 | 0.9388 |
| 0.0531 | 4.0 | 2012 | 0.2005 | 0.938 | 0.9388 |
| 0.0367 | 5.0 | 2515 | 0.1995 | 0.9365 | 0.9371 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tj-solergibert/distilbert-base-uncased-finetuned-emotion | 38145055ffaf8d9d170279a7a206d4977eab3d9d | 2022-07-11T21:58:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | tj-solergibert | null | tj-solergibert/distilbert-base-uncased-finetuned-emotion | 16 | null | transformers | 9,403 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.9285646975197546
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9285
- F1: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235 | 1.0 | 250 | 0.3085 | 0.915 | 0.9127 |
| 0.2493 | 2.0 | 500 | 0.2158 | 0.9285 | 0.9286 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
userGagan/segformer-b0-finetuned-segments-sidewalk-3 | 7885a555d2b3db4d72e38194169ffbb9dd9267de | 2022-07-14T04:32:19.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers"
]
| null | false | userGagan | null | userGagan/segformer-b0-finetuned-segments-sidewalk-3 | 16 | null | transformers | 9,404 | Entry not found |
juliensimon/distilbert-base-uncased-finetuned-cola | 314e8179c7d225ac0e135183caba52dcda653ca9 | 2022-07-12T14:05:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | juliensimon | null | juliensimon/distilbert-base-uncased-finetuned-cola | 16 | null | transformers | 9,405 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5334876461854267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7737
- Matthews Correlation: 0.5335
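A minimal usage sketch, not part of the original card, assuming the standard `transformers` pipeline API; the two input sentences are illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="juliensimon/distilbert-base-uncased-finetuned-cola",
)
# CoLA is a binary acceptability task; label names depend on the model's id2label mapping
print(classifier("The book was written by the author."))  # grammatical sentence
print(classifier("The book was wrote by author the."))    # ungrammatical sentence
```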
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5225 | 1.0 | 535 | 0.5170 | 0.4007 |
| 0.3509 | 2.0 | 1070 | 0.5220 | 0.4837 |
| 0.2405 | 3.0 | 1605 | 0.6164 | 0.5186 |
| 0.1777 | 4.0 | 2140 | 0.7737 | 0.5335 |
| 0.1295 | 5.0 | 2675 | 0.8374 | 0.5162 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
omarxadel/wav2vec2-large-xlsr-53-arabic-egyptian | 6d87f4de2e66a964f6fb19790092360ede673c57 | 2022-07-12T14:40:36.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:MGB-3",
"dataset:egyptian-arabic-conversational-speech-corpus",
"transformers",
"CTC",
"Attention",
"Transformer",
"license:cc-by-nc-4.0",
"model-index"
]
| automatic-speech-recognition | false | omarxadel | null | omarxadel/wav2vec2-large-xlsr-53-arabic-egyptian | 16 | null | transformers | 9,406 | ---
language: "ar"
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- Attention
- pytorch
- Transformer
license: "cc-by-nc-4.0"
datasets:
- MGB-3
- egyptian-arabic-conversational-speech-corpus
metrics:
- wer
model-index:
- name: omarxadel/hubert-large-arabic-egyptian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER
type: wer
value: 29.3755
- name: Validation WER
type: wer
value: 29.1828
---
# Wav2Vec2-XLSR-53 - with CTC fine-tuned on MGB-3 and Egyptian Arabic Conversational Speech Corpus (No LM)
This model is a fine-tuned version of [Wav2Vec2-XLSR-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). We fine-tuned it on the MGB-3 and Egyptian Arabic Conversational Speech Corpus datasets, achieving a test WER of `29.3755%`.
The performance of the model on the datasets is the following:
| Valid WER | Test WER |
|:---------:|:--------:|
| 29.18 | 29.37 |
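The card does not show inference code; below is a minimal sketch assuming the standard `transformers` automatic-speech-recognition pipeline. The audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="omarxadel/wav2vec2-large-xlsr-53-arabic-egyptian",
)
# "speech.wav" is a placeholder path; wav2vec2 models expect 16 kHz mono audio
print(asr("speech.wav"))
```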
# Acknowledgement
Model fine-tuning and data processing for this work were performed as part of a graduation project at the Faculty of Engineering, Alexandria University, CCE Program. |
huggingtweets/scottduncanwx | e6542ec5ddde78c6b36050e3c0f3e87bccfd3da6 | 2022-07-12T14:43:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/scottduncanwx | 16 | 1 | transformers | 9,407 | ---
language: en
thumbnail: http://www.huggingtweets.com/scottduncanwx/1657637010818/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535379125296418821/ntSMv4LC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Scott Duncan</div>
<div style="text-align: center; font-size: 14px;">@scottduncanwx</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Scott Duncan.
| Data | Scott Duncan |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 186 |
| Short tweets | 223 |
| Tweets kept | 2841 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tziokng8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scottduncanwx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2swonujn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2swonujn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/scottduncanwx')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Loc/lucky-model | 4a9b4b268227f847173d63ac90db746de7fc9566 | 2022-07-13T07:06:05.000Z | [
"pytorch",
"tf",
"jax",
"vit",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2010.11929",
"arxiv:2006.03677",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | Loc | null | Loc/lucky-model | 16 | null | transformers | 9,408 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Vision Transformer (base-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
ClassCat/gpt2-small-catalan-v2 | 6561d0e78fa1726176f84513d8becef7fa914007 | 2022-07-16T11:35:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ca",
"dataset:cc100",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"license:cc-by-sa-4.0"
]
| text-generation | false | ClassCat | null | ClassCat/gpt2-small-catalan-v2 | 16 | 1 | transformers | 9,409 | ---
language: ca
license: cc-by-sa-4.0
datasets:
- cc100
- oscar
- wikipedia
widget:
- text: "Vas jugar a"
- text: "M'agrada el clima i el menjar"
- text: "Ell està una mica"
---
## GPT2 Catalan small model Version 2 (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the GPT2 base model configuration, except that the embedding dimensions are half the original size.
### Tokenizer
A BPE tokenizer with a vocabulary size of 50,000 is used.
### Training Data
* [wiki40b/ca](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bca) (Catalan Wikipedia)
* Subset of [oscar](https://huggingface.co/datasets/oscar)
* Subset of [CC-100/ca](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-small-catalan-v2')
generator("Ell està una mica")
``` |
peerawatchomp/t5-base-grammar-mcq | c1def3a1894a97b60c1262daf6b816179921276a | 2022-07-14T09:30:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | peerawatchomp | null | peerawatchomp/t5-base-grammar-mcq | 16 | null | transformers | 9,410 | ---
license: mit
---
|
yochen/distilroberta-base-wiki-mark | b5be2720718f457f9e5c2f10b26d14a72154b280 | 2022-07-25T09:49:23.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | yochen | null | yochen/distilroberta-base-wiki-mark | 16 | null | transformers | 9,411 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-wiki-mark
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-wiki-mark
This model is a fine-tuned version of [yochen/distilroberta-base-wiki-mark](https://huggingface.co/yochen/distilroberta-base-wiki-mark) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.2695
- eval_runtime: 4.3489
- eval_samples_per_second: 431.836
- eval_steps_per_second: 54.037
- epoch: 10.1
- step: 20489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5000
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-tydiqa-goldp | 33ef613a5d626a620bbea29069fcf30797d2d5d4 | 2022-07-14T12:54:13.000Z | [
"pytorch",
"pixel",
"question-answering",
"dataset:tydiqa",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| question-answering | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-tydiqa-goldp | 16 | null | transformers | 9,412 | ---
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: pixel-base-finetuned-tydiqa-goldp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-tydiqa-goldp
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
- mixed_precision_training: Apex, opt level O1
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
jinwooChoi/SKKU_AP_SA_KBT2 | 62ededa202f736fa875bb072e68f3d5b0941fb30 | 2022-07-25T06:59:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | jinwooChoi | null | jinwooChoi/SKKU_AP_SA_KBT2 | 16 | null | transformers | 9,413 | Entry not found |
AbhirupGhosh/opus-mt-finetuned-hi-en | 2de898dab31cb37b0d9b0234b6297fa67bea10f5 | 2022-07-16T17:29:33.000Z | [
"pytorch",
"tf",
"marian",
"text2text-generation",
"hi",
"en",
"arxiv:1706.03762",
"transformers",
"translation",
"Hindi",
"generated_from_keras_callback",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| translation | false | AbhirupGhosh | null | AbhirupGhosh/opus-mt-finetuned-hi-en | 16 | null | transformers | 9,414 | ---
license: apache-2.0
language:
- hi
- en
tags:
- translation
- Hindi
- generated_from_keras_callback
model-index:
- name: opus-mt-finetuned-hi-en
results: []
---
# opus-mt-finetuned-hi-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on [HindiEnglish Corpora](https://www.clarin.eu/resource-families/parallel-corpora)
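The card does not include inference code; a minimal translation sketch follows, not part of the original card, assuming the standard `transformers` pipeline API. The Hindi input sentence is illustrative only.

```python
from transformers import pipeline

translator = pipeline("translation", model="AbhirupGhosh/opus-mt-finetuned-hi-en")
# illustrative Hindi sentence ("I really like reading books.")
print(translator("मुझे किताबें पढ़ना बहुत पसंद है।"))
```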
## Model description
The model is a transformer model similar to the architecture defined in [Attention Is All You Need](https://arxiv.org/abs/1706.03762?context=cs) by Vaswani et al.
## Training and evaluation data
More information needed
## Training procedure
The model was trained on 2 NVIDIA Tesla A100 GPUs on Google's Vertex AI platform.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Prafuld3/distilbert-base-uncased-finetuned-emotion | a19386973207ae4f836d4c061b6707046d76b9b9 | 2022-07-18T08:17:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Prafuld3 | null | Prafuld3/distilbert-base-uncased-finetuned-emotion | 16 | null | transformers | 9,415 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9232089605669606
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2185
- Accuracy: 0.923
- F1: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8274 | 1.0 | 250 | 0.3172 | 0.907 | 0.9036 |
| 0.2501 | 2.0 | 500 | 0.2185 | 0.923 | 0.9232 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Fagen/OxxxyBlok | 7e8979ebf8d10868747928bedd7e34a2215d8c03 | 2022-07-18T14:46:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:unlicense"
]
| text-generation | false | Fagen | null | Fagen/OxxxyBlok | 16 | null | transformers | 9,416 | ---
license: unlicense
---
|
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi-news | ea3bcd4ddd1278d321ab52c89046a94681ea9ed4 | 2022-07-19T14:09:31.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:multi_news",
"transformers",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Atharvgarg | null | Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi-news | 16 | 1 | transformers | 9,417 | ---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 38.9616
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi-news
This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0185
- Rouge1: 38.9616
- Rouge2: 14.1539
- Rougel: 21.1788
- Rougelsum: 35.314
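The card does not show inference code; here is a minimal sketch, not from the original card, assuming the standard `EncoderDecoderModel` API in `transformers`. The article string is a placeholder.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi-news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "..."  # replace with the news text to summarise
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], attention_mask=inputs["attention_mask"])
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```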
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.3679 | 1.0 | 11243 | 3.1314 | 38.4459 | 13.7777 | 20.8772 | 34.8321 |
| 3.1115 | 2.0 | 22486 | 3.0589 | 38.7419 | 13.9355 | 20.9911 | 35.0988 |
| 2.9826 | 3.0 | 33729 | 3.0311 | 38.7345 | 14.0365 | 21.0571 | 35.1604 |
| 2.8986 | 4.0 | 44972 | 3.0185 | 38.9616 | 14.1539 | 21.1788 | 35.314 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sam34738/xlm-kabita | d2137a0ad8a1b03ac8db1f5266f7ed97c58af3d7 | 2022-07-19T17:36:42.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | sam34738 | null | sam34738/xlm-kabita | 16 | null | transformers | 9,418 | ---
tags:
- generated_from_trainer
model-index:
- name: xlm-kabita
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-kabita
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-emotion](https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0929 | 1.0 | 460 | 0.5814 |
| 0.4287 | 2.0 | 920 | 0.4984 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
Ahmed007/T5-ibn-Shaddad | e0f6a904bc9c7ad18e80fe736c429ed18a85e554 | 2022-07-20T11:31:51.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Ahmed007 | null | Ahmed007/T5-ibn-Shaddad | 16 | null | transformers | 9,419 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: T5-ibn-Shaddad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-ibn-Shaddad
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0365 | 1.0 | 4989 | 0.0342 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
buddhist-nlp/classical-tibetan-english | ee0c403b60aadfe2faef10cd478cd19e1b64fe3e | 2022-07-21T14:00:47.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | buddhist-nlp | null | buddhist-nlp/classical-tibetan-english | 16 | null | transformers | 9,420 | Entry not found |
helliun/article_pol | 554083e6e79c5ec4e8d6e354ff1cbf8d0a9e3667 | 2022-07-26T19:52:04.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | helliun | null | helliun/article_pol | 16 | null | transformers | 9,421 | Entry not found |
benjamyu/autotrain-ms-2-1174443640 | 935af3eced3f2d3c6378b3a8e7b9f16fa323dd09 | 2022-07-25T13:26:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:benjamyu/autotrain-data-ms-2",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | benjamyu | null | benjamyu/autotrain-ms-2-1174443640 | 16 | null | transformers | 9,422 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- benjamyu/autotrain-data-ms-2
co2_eq_emissions: 4.619328856849087
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1174443640
- CO2 Emissions (in grams): 4.619328856849087
## Validation Metrics
- Loss: 2.689530849456787
- Rouge1: 15.9713
- Rouge2: 2.1067
- RougeL: 12.1778
- RougeLsum: 13.5772
- Gen Len: 18.9798
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/benjamyu/autotrain-ms-2-1174443640
``` |
sysresearch101/t5-large-finetuned-xsum-cnn | 0fd59c0db86eab4b8354fb4ccf1dc664d515c38d | 2022-07-29T08:06:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sysresearch101 | null | sysresearch101/t5-large-finetuned-xsum-cnn | 16 | null | transformers | 9,423 | Entry not found |
BramVanroy/bert-base-dutch-cased-hebban-reviews | 23ba004d071f15e144e58ae01454b139e2db27d9 | 2022-07-29T09:36:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"nl",
"dataset:BramVanroy/hebban-reviews",
"transformers",
"sentiment-analysis",
"dutch",
"text",
"license:mit",
"model-index"
]
| text-classification | false | BramVanroy | null | BramVanroy/bert-base-dutch-cased-hebban-reviews | 16 | null | transformers | 9,424 | ---
datasets:
- BramVanroy/hebban-reviews
language:
- nl
license: mit
metrics:
- accuracy
- f1
- precision
- qwk
- recall
model-index:
- name: bert-base-dutch-cased-hebban-reviews
results:
- dataset:
config: filtered_sentiment
name: BramVanroy/hebban-reviews - filtered_sentiment - 2.0.0
revision: 2.0.0
split: test
type: BramVanroy/hebban-reviews
metrics:
- name: Test accuracy
type: accuracy
value: 0.8042406311637081
- name: Test f1
type: f1
value: 0.8125977499178383
- name: Test precision
type: precision
value: 0.8283602308368182
- name: Test qwk
type: qwk
value: 0.7301452890386257
- name: Test recall
type: recall
value: 0.8042406311637081
task:
name: sentiment analysis
type: text-classification
tags:
- sentiment-analysis
- dutch
- text
widget:
- text: Wauw, wat een leuk boek! Ik heb me er er goed mee vermaakt.
- text: Nee, deze vond ik niet goed. De auteur doet zijn best om je als lezer mee
te trekken in het verhaal maar mij overtuigt het alleszins niet.
- text: Ik vind het niet slecht maar de schrijfstijl trekt me ook niet echt aan. Het
wordt een beetje saai vanaf het vijfde hoofdstuk
---
# bert-base-dutch-cased-hebban-reviews
# Dataset
- dataset_name: BramVanroy/hebban-reviews
- dataset_config: filtered_sentiment
- dataset_revision: 2.0.0
- labelcolumn: review_sentiment
- textcolumn: review_text_without_quotes
# Training
- optim: adamw_hf
- learning_rate: 5e-05
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- gradient_accumulation_steps: 1
- max_steps: 5001
- save_steps: 500
- metric_for_best_model: qwk
# Best checkpoint based on validation
- best_metric: 0.732569302631819
- best_model_checkpoint: trained/hebban-reviews/bert-base-dutch-cased/checkpoint-3000
# Test results of best checkpoint
- accuracy: 0.8042406311637081
- f1: 0.8125977499178383
- precision: 0.8283602308368182
- qwk: 0.7301452890386257
- recall: 0.8042406311637081
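A minimal usage sketch, not part of the original card, assuming the standard `transformers` pipeline API; the Dutch review below is illustrative only.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="BramVanroy/bert-base-dutch-cased-hebban-reviews",
)
# illustrative Dutch review ("Wow, what a fun book! I really enjoyed it.")
print(classifier("Wauw, wat een leuk boek! Ik heb me er goed mee vermaakt."))
```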
## Confusion matrix

## Normalized confusion matrix

# Environment
- cuda_capabilities: 8.0; 8.0
- cuda_device_count: 2
- cuda_devices: NVIDIA A100-SXM4-80GB; NVIDIA A100-SXM4-80GB
- finetuner_commit: 48bb3434fa8bbfc9b2d0061ca6c8feb87f78a7ef
- platform: Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28
- python_version: 3.9.5
- torch_version: 1.10.0
- transformers_version: 4.21.0
|
Den4ikAI/rugpt3_2ch | ca577ccc7f995143c4b48cae6a1adf21cf91829d | 2022-07-26T16:43:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"rus",
"transformers",
"license:mit"
]
| text-generation | false | Den4ikAI | null | Den4ikAI/rugpt3_2ch | 16 | 1 | transformers | 9,425 | ---
license: mit
language: rus
---
RUGPT-3 trained on dialogues from imageboards such as 2ch.
To generate a reply, the model must be given input in the following format:
"- Привет\n-"
An inference example is available here: https://github.com/Den4ikAI/rugpt3_2ch |
abdulmatinomotoso/xsum_headline_generator | 399026c6a176223c947f8bac235ee164a24355e8 | 2022-07-27T00:03:58.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | abdulmatinomotoso | null | abdulmatinomotoso/xsum_headline_generator | 16 | null | transformers | 9,426 | ---
tags:
- generated_from_trainer
model-index:
- name: xsum_headline_generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xsum_headline_generator
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4956
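A minimal usage sketch, not part of the original card, assuming the standard `transformers` summarization pipeline; the article string and the generation length are placeholders.

```python
from transformers import pipeline

headline_generator = pipeline(
    "summarization",
    model="abdulmatinomotoso/xsum_headline_generator",
)
article = "..."  # replace with the article text to turn into a headline
print(headline_generator(article, max_length=20))
```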
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6447 | 0.8 | 500 | 0.4956 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Go2Heart/BERT_Mod_1 | d8a7bccac709fdabfbf164509c9e2478c1b5e3f2 | 2022-07-27T16:17:44.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Go2Heart | null | Go2Heart/BERT_Mod_1 | 16 | null | transformers | 9,427 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: BERT_Mod_1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.541934635424655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_Mod_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1787
- Matthews Correlation: 0.5419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1616 | 1.0 | 535 | 0.9278 | 0.4979 |
| 0.1128 | 2.0 | 1070 | 1.0487 | 0.5046 |
| 0.0712 | 3.0 | 1605 | 1.0155 | 0.5306 |
| 0.0952 | 4.0 | 2140 | 1.1860 | 0.5147 |
| 0.0698 | 5.0 | 2675 | 1.1787 | 0.5419 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
relbert/relbert-roberta-large-conceptnet-hc-average-prompt-a-nce | 2d9b5c7d7f9d218e12ac24b06c0a21b412242bef | 2022-07-28T07:17:13.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | relbert | null | relbert/relbert-roberta-large-conceptnet-hc-average-prompt-a-nce | 16 | null | transformers | 9,428 | Entry not found |
bheshaj/bart-large-billsum-epochs20 | cc4e8902cee6387a6c8dc561277aeecb2f3081a0 | 2022-07-28T11:12:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | bheshaj | null | bheshaj/bart-large-billsum-epochs20 | 16 | null | transformers | 9,429 | ---
license: apache-2.0
---
|
yanaiela/roberta-base-epoch_82 | f0007ec36644f58d5197e76ca83bd2945d1ae61d | 2022-07-29T23:09:44.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"transformers",
"roberta-base",
"roberta-base-epoch_82",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | yanaiela | null | yanaiela/roberta-base-epoch_82 | 16 | null | transformers | 9,430 | ---
language: en
tags:
- roberta-base
- roberta-base-epoch_82
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 82
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained in part of a work that studies how simple statistics from data,
such as co-occurrences affects model predictions, which are described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_82.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_82', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
Andrija/SRoBERTa | 7edd3d8a779ff36a48f8d23394ddfd5079fc865a | 2021-08-09T19:38:58.000Z | [
"pytorch",
"roberta",
"fill-mask",
"hr",
"sr",
"dataset:leipzig",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Andrija | null | Andrija/SRoBERTa | 15 | 1 | transformers | 9,431 | ---
datasets:
- leipzig
language:
- hr
- sr
tags:
- masked-lm
widget:
- text: "Gde je <mask>."
license: apache-2.0
---
# Transformer language model for Croatian and Serbian
Trained on a 0.7 GB Croatian and Serbian dataset for one epoch.
Dataset from Leipzig Corpora.
# Information of dataset
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `Andrija/SRoBERTa` | 120M | First | Leipzig Corpus (0.7 GB of text) |
# How to use in code
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Andrija/SRoBERTa")
model = AutoModelForMaskedLM.from_pretrained("Andrija/SRoBERTa")
``` |
Bagus/wav2vec2-xlsr-japanese-speech-emotion-recognition | 3430ab406e1b1fdc23284372b19d1bc235d18c67 | 2021-10-20T05:41:55.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"jp",
"dataset:jtes",
"transformers",
"audio",
"speech",
"speech-emotion-recognition"
]
| audio-classification | false | Bagus | null | Bagus/wav2vec2-xlsr-japanese-speech-emotion-recognition | 15 | null | transformers | 9,432 | ---
language: jp
datasets:
- jtes
tags:
- audio
- audio-classification
- speech
- speech-emotion-recognition
---
This is for (private) DEMO only. |
BigSalmon/InformalToFormalLincolnDistilledGPT2 | 6d1979b4571951ca14150f004e5045aa0bd9a2c4 | 2021-12-23T03:39:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincolnDistilledGPT2 | 15 | null | transformers | 9,433 | Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincolnDistilledGPT2")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincolnDistilledGPT2")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english:
```` |
Buntan/bert-finetuned-ner | e4df0b1baad36dc465cdc99ce21d659300c7d7ce | 2021-12-11T10:26:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Buntan | null | Buntan/bert-finetuned-ner | 15 | null | transformers | 9,434 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9328604420983174
- name: Recall
type: recall
value: 0.9516997643890945
- name: F1
type: f1
value: 0.9421859380206598
- name: Accuracy
type: accuracy
value: 0.986342497203744
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9329
- Recall: 0.9517
- F1: 0.9422
- Accuracy: 0.9863
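For quick inspection, the checkpoint can be run through the token-classification pipeline; this is a minimal sketch, and the example sentence and aggregation strategy are illustrative choices rather than part of the original training setup.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Buntan/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("My name is Wolfgang and I live in Berlin."))
```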
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0904 | 1.0 | 1756 | 0.0686 | 0.9227 | 0.9355 | 0.9291 | 0.9820 |
| 0.0385 | 2.0 | 3512 | 0.0586 | 0.9381 | 0.9490 | 0.9435 | 0.9862 |
| 0.0215 | 3.0 | 5268 | 0.0612 | 0.9329 | 0.9517 | 0.9422 | 0.9863 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Cameron/BERT-Jigsaw | fb180e502cbd5b82bc54be2858317eb8cc62c392 | 2021-05-18T17:21:10.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Cameron | null | Cameron/BERT-Jigsaw | 15 | null | transformers | 9,435 | Entry not found |
Contrastive-Tension/BERT-Base-CT | f1cafceb4374b5b374defd6ca7a8391d5b3d58d9 | 2021-05-18T17:49:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Base-CT | 15 | null | transformers | 9,436 | Entry not found |
EhsanYB/bert-ehsan-ner-accelerate | 33d6d1c725c98d56c77a0da5329c07a177b4b458 | 2022-01-14T10:50:23.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | EhsanYB | null | EhsanYB/bert-ehsan-ner-accelerate | 15 | null | transformers | 9,437 | Entry not found |
GeniusVoice/bert-base-dutch-cased-finetuned-gem | ca9b0b4e758ee2138e02db8595c8b19be436d412 | 2021-07-13T14:06:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"nl",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| fill-mask | false | GeniusVoice | null | GeniusVoice/bert-base-dutch-cased-finetuned-gem | 15 | null | transformers | 9,438 | ---
language:
- nl
tags:
- generated_from_trainer
model_index:
- name: bert-base-dutch-cased-finetuned-gem
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-gem
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8767
## Model description
More information needed
## Intended uses & limitations
More information needed
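In the absence of further documentation, here is a minimal fill-mask sketch for trying the checkpoint; the Dutch example sentence is illustrative.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="GeniusVoice/bert-base-dutch-cased-finetuned-gem")
for prediction in fill_mask("Amsterdam is de hoofdstad van [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```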
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7518 | 1.0 | 2133 | 1.8428 |
| 1.5679 | 2.0 | 4266 | 1.8729 |
| 1.3332 | 3.0 | 6399 | 1.8767 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
|
Geotrend/bert-base-de-cased | d54397485533eb9eb0171915fcb720c47f8472c3 | 2021-05-18T18:58:49.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"de",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-de-cased | 15 | null | transformers | 9,439 | ---
language: de
datasets: wikipedia
license: apache-2.0
---
# bert-base-de-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-de-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-de-cased")
```
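For a quick qualitative check of the masked-language-modelling head, the model can also be used through the fill-mask pipeline; the German example sentence below is illustrative.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-de-cased")
# The model keeps the standard BERT [MASK] token.
for prediction in fill_mask("Berlin ist die Hauptstadt von [MASK]."):
    print(prediction["sequence"], round(prediction["score"], 3))
```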
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/bert-base-en-tr-cased | 68e431759fc6bd131395b23376b8a39837291fbd | 2021-05-18T19:48:09.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/bert-base-en-tr-cased | 15 | null | transformers | 9,440 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-tr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-tr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-tr-cased")
```
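Since the reduced vocabulary covers both English and Turkish, the widget sentences above can be reproduced with the fill-mask pipeline; a minimal sketch (the Turkish sentence is an illustrative addition):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-en-tr-cased")

# One English and one Turkish masked sentence.
for sentence in ["Paris is the capital of [MASK].", "Ankara [MASK] başkentidir."]:
    print(fill_mask(sentence)[0]["sequence"])
```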
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/distilbert-base-ar-cased | a82a605cb4f8cd866483123be3c5283c811ff456 | 2021-08-16T13:19:01.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"ar",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | Geotrend | null | Geotrend/distilbert-base-ar-cased | 15 | null | transformers | 9,441 | ---
language: ar
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-ar-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ar-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-ar-cased")
```
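Because the smaller model is intended to reproduce the original multilingual representations, a common use is feature extraction; the sketch below reuses the `tokenizer` and `model` loaded above, and the mean-pooling step is an illustrative choice.
```python
import torch

# Encode an Arabic sentence and take the final hidden states.
inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, sequence_length, 768)

# Mean-pool over the token dimension to get one fixed-size sentence vector.
sentence_embedding = hidden_states.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```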
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Harveenchadha/hindi_large_wav2vec2 | 0053bf147465b0101391b4d5a7dd7777f58d4230 | 2022-03-23T18:28:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:Harveenchadha/indic-voice",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/hindi_large_wav2vec2 | 15 | null | transformers | 9,442 | ---
license: apache-2.0
language:
- hi
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- hi
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- Harveenchadha/indic-voice
model-index:
- name: Hindi Large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 23.08
- name: Test CER
type: cer
value: 8.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 23.36
- name: Test CER
type: cer
value: 8.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-8.0
type: mozilla-foundation/common_voice_8_0
args: hi
metrics:
- name: Test WER
type: wer
value: 24.85
- name: Test CER
type: cer
value: 9.99
---
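A minimal transcription sketch, assuming the repository ships a `Wav2Vec2Processor`; `example.wav` is a placeholder for a 16 kHz mono Hindi recording.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "Harveenchadha/hindi_large_wav2vec2"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Load the audio and resample to the 16 kHz rate the model expects.
waveform, sample_rate = torchaudio.load("example.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```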
|
Helsinki-NLP/opus-mt-art-en | a4c1384d26ca671492bfa97342442040305f8c0e | 2021-01-18T07:47:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"eo",
"io",
"art",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-art-en | 15 | null | transformers | 9,443 | ---
language:
- eo
- io
- art
- en
tags:
- translation
license: apache-2.0
---
### art-eng
* source group: Artificial languages
* target group: English
* OPUS readme: [art-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md)
* model: transformer
* source language(s): afh_Latn avk_Latn dws_Latn epo ido ido_Latn ile_Latn ina_Latn jbo jbo_Cyrl jbo_Latn ldn_Latn lfn_Cyrl lfn_Latn nov_Latn qya qya_Latn sjn_Latn tlh_Latn tzl tzl_Latn vol_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.eval.txt)
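A minimal translation sketch with the MarianMT classes in Hugging Face Transformers; the Esperanto source sentence is illustrative.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-art-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Esperanto is one of the supported source languages listed above.
batch = tokenizer(["La suno brilas hodiaŭ."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```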
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afh-eng.afh.eng | 1.2 | 0.099 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.105 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.076 |
| Tatoeba-test.epo-eng.epo.eng | 34.6 | 0.530 |
| Tatoeba-test.ido-eng.ido.eng | 12.7 | 0.310 |
| Tatoeba-test.ile-eng.ile.eng | 4.6 | 0.218 |
| Tatoeba-test.ina-eng.ina.eng | 5.8 | 0.254 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.2 | 0.115 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.7 | 0.083 |
| Tatoeba-test.lfn-eng.lfn.eng | 1.8 | 0.172 |
| Tatoeba-test.multi.eng | 11.6 | 0.287 |
| Tatoeba-test.nov-eng.nov.eng | 5.1 | 0.215 |
| Tatoeba-test.qya-eng.qya.eng | 0.7 | 0.113 |
| Tatoeba-test.sjn-eng.sjn.eng | 0.9 | 0.090 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.124 |
| Tatoeba-test.tzl-eng.tzl.eng | 1.4 | 0.109 |
| Tatoeba-test.vol-eng.vol.eng | 0.5 | 0.115 |
### System Info:
- hf_name: art-eng
- source_languages: art
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'io', 'art', 'en']
- src_constituents: {'sjn_Latn', 'tzl', 'vol_Latn', 'qya', 'tlh_Latn', 'ile_Latn', 'ido_Latn', 'tzl_Latn', 'jbo_Cyrl', 'jbo', 'lfn_Latn', 'nov_Latn', 'dws_Latn', 'ldn_Latn', 'avk_Latn', 'lfn_Cyrl', 'ina_Latn', 'jbo_Latn', 'epo', 'afh_Latn', 'qya_Latn', 'ido'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt
- src_alpha3: art
- tgt_alpha3: eng
- short_pair: art-en
- chrF2_score: 0.287
- bleu: 11.6
- brevity_penalty: 1.0
- ref_len: 73037.0
- src_name: Artificial languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: art
- tgt_alpha2: en
- prefer_old: False
- long_pair: art-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ceb-es | 94ff5e6902541d95fc1890e7e5e185477d922271 | 2021-09-09T21:28:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ceb",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ceb-es | 15 | null | transformers | 9,444 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ceb-es
* source languages: ceb
* target languages: es
* OPUS readme: [ceb-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.es | 31.6 | 0.508 |
|
Helsinki-NLP/opus-mt-chk-sv | de1bf0196adc388148bb52c5388fd795c46191b6 | 2021-09-09T21:28:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"chk",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-chk-sv | 15 | null | transformers | 9,445 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-chk-sv
* source languages: chk
* target languages: sv
* OPUS readme: [chk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.chk.sv | 23.6 | 0.406 |
|
Helsinki-NLP/opus-mt-efi-de | cedf2694630c1ee2ea1d75dffead02c4dc49ef80 | 2021-09-09T21:33:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"efi",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-efi-de | 15 | null | transformers | 9,446 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-efi-de
* source languages: efi
* target languages: de
* OPUS readme: [efi-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.de | 21.0 | 0.401 |
|
Helsinki-NLP/opus-mt-en-bi | b3e9ed52697fffab06a733a23c37d843a3464976 | 2021-09-09T21:34:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"bi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-bi | 15 | null | transformers | 9,447 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-bi
* source languages: en
* target languages: bi
* OPUS readme: [en-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bi | 36.4 | 0.543 |
|
Helsinki-NLP/opus-mt-en-eo | 20a8920034dfbb6b2e5909f5065a32d6b1b5990b | 2021-09-09T21:35:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-eo | 15 | null | transformers | 9,448 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-eo
* source languages: en
* target languages: eo
* OPUS readme: [en-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.eo | 49.5 | 0.682 |
|
Helsinki-NLP/opus-mt-en-mkh | 6115f953f19da66145fa3f8f54e02516e0272bec | 2021-01-18T08:12:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"km",
"mkh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-mkh | 15 | null | transformers | 9,449 | ---
language:
- en
- vi
- km
- mkh
tags:
- translation
license: apache-2.0
---
### eng-mkh
* source group: English
* target group: Mon-Khmer languages
* OPUS readme: [eng-mkh](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md)
* model: transformer
* source language(s): eng
* target language(s): kha khm khm_Latn mnw vie vie_Hani
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.eval.txt)
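As noted above, the multilingual target side requires a `>>id<<` prefix on every source sentence; a minimal sketch using two of the listed target language IDs:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-mkh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prefix each sentence with the desired target-language token (e.g. vie, khm).
sources = [">>vie<< How are you today?", ">>khm<< How are you today?"]
batch = tokenizer(sources, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```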
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kha.eng.kha | 0.1 | 0.015 |
| Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.226 |
| Tatoeba-test.eng-mnw.eng.mnw | 0.7 | 0.003 |
| Tatoeba-test.eng.multi | 16.5 | 0.330 |
| Tatoeba-test.eng-vie.eng.vie | 33.7 | 0.513 |
### System Info:
- hf_name: eng-mkh
- source_languages: eng
- target_languages: mkh
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi', 'km', 'mkh']
- src_constituents: {'eng'}
- tgt_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt
- src_alpha3: eng
- tgt_alpha3: mkh
- short_pair: en-mkh
- chrF2_score: 0.33
- bleu: 16.5
- brevity_penalty: 1.0
- ref_len: 34734.0
- src_name: English
- tgt_name: Mon-Khmer languages
- train_date: 2020-07-27
- src_alpha2: en
- tgt_alpha2: mkh
- prefer_old: False
- long_pair: eng-mkh
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-tiv | 7365b6468e5560ea672006d5b06076b9353f7f08 | 2021-09-09T21:39:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tiv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-tiv | 15 | null | transformers | 9,450 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tiv
* source languages: en
* target languages: tiv
* OPUS readme: [en-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tiv | 31.6 | 0.497 |
|
Helsinki-NLP/opus-mt-es-is | c5c5198f9f6adf74222b27f27395f18683cca091 | 2021-01-18T08:25:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"is",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-is | 15 | null | transformers | 9,451 | ---
language:
- es
- is
tags:
- translation
license: apache-2.0
---
### spa-isl
* source group: Spanish
* target group: Icelandic
* OPUS readme: [spa-isl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-isl/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): isl
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.isl | 27.1 | 0.528 |
### System Info:
- hf_name: spa-isl
- source_languages: spa
- target_languages: isl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-isl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'is']
- src_constituents: {'spa'}
- tgt_constituents: {'isl'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: isl
- short_pair: es-is
- chrF2_score: 0.528
- bleu: 27.1
- brevity_penalty: 1.0
- ref_len: 1220.0
- src_name: Spanish
- tgt_name: Icelandic
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: is
- prefer_old: False
- long_pair: spa-isl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-mfs | 951cb2605294fb1fcea04c6f1db797405ea64a4d | 2021-09-09T21:43:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"mfs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-mfs | 15 | null | transformers | 9,452 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-mfs
* source languages: es
* target languages: mfs
* OPUS readme: [es-mfs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-mfs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.mfs | 88.6 | 0.907 |
|
Helsinki-NLP/opus-mt-es-mt | 5c5e7195a17a805eb1371b3124c50b58fed0a7d0 | 2021-09-09T21:43:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"mt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-mt | 15 | null | transformers | 9,453 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-mt
* source languages: es
* target languages: mt
* OPUS readme: [es-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.mt | 28.1 | 0.460 |
|
Helsinki-NLP/opus-mt-es-pon | 959fa9d70c13cf7125e75cb5259b840f49ac7153 | 2021-09-09T21:44:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"pon",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-pon | 15 | null | transformers | 9,454 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-pon
* source languages: es
* target languages: pon
* OPUS readme: [es-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.pon | 21.6 | 0.448 |
|
Helsinki-NLP/opus-mt-es-prl | b827848e11171694a0673824ceb956120f711790 | 2021-09-09T21:44:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"prl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-prl | 15 | null | transformers | 9,455 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-prl
* source languages: es
* target languages: prl
* OPUS readme: [es-prl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-prl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-prl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-prl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-prl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.prl | 92.2 | 0.950 |
|
Helsinki-NLP/opus-mt-es-rw | ed856c7464b8bf9da5b6c630d3c2fdc44805b33b | 2021-09-09T21:44:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"rw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-rw | 15 | null | transformers | 9,456 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-rw
* source languages: es
* target languages: rw
* OPUS readme: [es-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-rw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-rw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-rw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.rw | 22.6 | 0.472 |
|
Helsinki-NLP/opus-mt-es-tl | eb444bac2f8ed359035320beb28d65f0a77d4886 | 2021-01-18T08:28:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"tl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-tl | 15 | null | transformers | 9,457 | ---
language:
- es
- tl
tags:
- translation
license: apache-2.0
---
### spa-tgl
* source group: Spanish
* target group: Tagalog
* OPUS readme: [spa-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-tgl/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.tgl | 24.7 | 0.538 |
### System Info:
- hf_name: spa-tgl
- source_languages: spa
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'tl']
- src_constituents: {'spa'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-tgl/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: tgl
- short_pair: es-tl
- chrF2_score: 0.5379999999999999
- bleu: 24.7
- brevity_penalty: 1.0
- ref_len: 4422.0
- src_name: Spanish
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: tl
- prefer_old: False
- long_pair: spa-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-tvl | dc0caf7004cf3005068fd8e1a3069a74bd32d904 | 2021-09-09T21:45:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"tvl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-tvl | 15 | null | transformers | 9,458 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-tvl
* source languages: es
* target languages: tvl
* OPUS readme: [es-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tvl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tvl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tvl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tvl | 28.3 | 0.464 |
|
Helsinki-NLP/opus-mt-fi-nl | c73058f4a1d704b0cd150b8bb2daaf3bcec7cc62 | 2021-09-09T21:49:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-nl | 15 | null | transformers | 9,459 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-nl
* source languages: fi
* target languages: nl
* OPUS readme: [fi-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.nl | 30.5 | 0.557 |
|
Helsinki-NLP/opus-mt-fi-pon | a77e72b770a53dcfc98a7080aa71d49ae5f5291b | 2021-09-09T21:50:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"pon",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-pon | 15 | null | transformers | 9,460 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-pon
* source languages: fi
* target languages: pon
* OPUS readme: [fi-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-pon/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pon/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pon/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.pon | 23.7 | 0.475 |
|
Helsinki-NLP/opus-mt-fi-tw | 233d33e2877c2a05e745c0d16fb1eaa39e0b0b19 | 2021-09-09T21:51:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"tw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-tw | 15 | null | transformers | 9,461 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-tw
* source languages: fi
* target languages: tw
* OPUS readme: [fi-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tw/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tw/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tw/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.tw | 29.2 | 0.504 |
|
Helsinki-NLP/opus-mt-fr-hu | 9c28a82b5b2c2ad7ceb7082d4996a39d4ea18839 | 2021-09-09T21:54:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"hu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-hu | 15 | null | transformers | 9,462 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-hu
* source languages: fr
* target languages: hu
* OPUS readme: [fr-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.hu | 41.3 | 0.629 |
|
Helsinki-NLP/opus-mt-gaa-de | 9841f10ed27c405ea99e3fc17d9b04ea901cc16d | 2021-09-09T21:58:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gaa",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gaa-de | 15 | null | transformers | 9,463 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-gaa-de
* source languages: gaa
* target languages: de
* OPUS readme: [gaa-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.de | 23.3 | 0.438 |
|
Helsinki-NLP/opus-mt-gaa-fi | 26934ba91ba7db634fbcc603d78e94ddc3d302f1 | 2021-09-09T21:58:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gaa",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gaa-fi | 15 | null | transformers | 9,464 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-gaa-fi
* source languages: gaa
* target languages: fi
* OPUS readme: [gaa-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.fi | 26.4 | 0.498 |
|
Helsinki-NLP/opus-mt-he-fi | 4b31cd72be66646814d83bc37650d6c44f959a86 | 2021-09-09T22:00:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"he",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-he-fi | 15 | null | transformers | 9,465 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-he-fi
* source languages: he
* target languages: fi
* OPUS readme: [he-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.he.fi | 23.3 | 0.492 |
|
Helsinki-NLP/opus-mt-hr-es | f4ca116c634fa05e5bbe7bad0a65f6740a3b7d1c | 2021-09-09T22:10:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hr",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hr-es | 15 | null | transformers | 9,466 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hr-es
* source languages: hr
* target languages: es
* OPUS readme: [hr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.es | 27.9 | 0.498 |
|
Helsinki-NLP/opus-mt-id-fi | b14365126d6c908803119e0596368048b54e1cd0 | 2021-09-09T22:11:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"id",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-id-fi | 15 | null | transformers | 9,467 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-id-fi
* source languages: id
* target languages: fi
* OPUS readme: [id-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.id.fi | 27.4 | 0.522 |
|
Helsinki-NLP/opus-mt-iso-en | 9f2814bcff0ed0eac0900523d9ea7af8a9c291a7 | 2021-09-09T22:12:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"iso",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-iso-en | 15 | null | transformers | 9,468 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-iso-en
* source languages: iso
* target languages: en
* OPUS readme: [iso-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.iso.en | 35.5 | 0.506 |
|
Helsinki-NLP/opus-mt-lua-fi | 090f03999b05affad136c4419b14dd638cadf39d | 2021-09-10T13:56:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lua",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lua-fi | 15 | null | transformers | 9,469 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-fi
* source languages: lua
* target languages: fi
* OPUS readme: [lua-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.fi | 23.5 | 0.450 |
|
Helsinki-NLP/opus-mt-sal-en | 0d8861d4c529b7055c3127a4d832a5b4c13c8131 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sal",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sal-en | 15 | null | transformers | 9,470 | ---
language:
- sal
- en
tags:
- translation
license: apache-2.0
---
### sal-eng
* source group: Salishan languages
* target group: English
* OPUS readme: [sal-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sal-eng/README.md)
* model: transformer
* source language(s): shs_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.zip)
* test set translations: [opus-2020-07-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.test.txt)
* test set scores: [opus-2020-07-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.multi.eng | 38.7 | 0.572 |
| Tatoeba-test.shs.eng | 2.2 | 0.097 |
| Tatoeba-test.shs-eng.shs.eng | 2.2 | 0.097 |
### System Info:
- hf_name: sal-eng
- source_languages: sal
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sal-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sal', 'en']
- src_constituents: {'shs_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.test.txt
- src_alpha3: sal
- tgt_alpha3: eng
- short_pair: sal-en
- chrF2_score: 0.09699999999999999
- bleu: 2.2
- brevity_penalty: 0.8190000000000001
- ref_len: 222.0
- src_name: Salishan languages
- tgt_name: English
- train_date: 2020-07-14
- src_alpha2: sal
- tgt_alpha2: en
- prefer_old: False
- long_pair: sal-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sl-es | b954a6f496ba57beeddcefaca4585c1dd13d5d80 | 2021-09-10T14:03:39.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sl-es | 15 | null | transformers | 9,471 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sl-es
* source languages: sl
* target languages: es
* OPUS readme: [sl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sl.es | 26.3 | 0.483 |
|
Helsinki-NLP/opus-mt-zh-uk | 0b56f660a08577efe742068f467cc97d8c138bb0 | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zh",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zh-uk | 15 | null | transformers | 9,472 | ---
language:
- zh
- uk
tags:
- translation
license: apache-2.0
---
### zho-ukr
* source group: Chinese
* target group: Ukrainian
* OPUS readme: [zho-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ukr/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Kana cmn_Latn cmn_Yiii yue_Bopo yue_Hang yue_Hani yue_Hira yue_Kana
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.ukr | 10.4 | 0.259 |
### System Info:
- hf_name: zho-ukr
- source_languages: zho
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'uk']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.test.txt
- src_alpha3: zho
- tgt_alpha3: ukr
- short_pair: zh-uk
- chrF2_score: 0.259
- bleu: 10.4
- brevity_penalty: 0.9059999999999999
- ref_len: 9193.0
- src_name: Chinese
- tgt_name: Ukrainian
- train_date: 2020-06-16
- src_alpha2: zh
- tgt_alpha2: uk
- prefer_old: False
- long_pair: zho-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Intel/bert-base-uncased-mnli-sparse-70-unstructured | f43011e5ac09d8ab1aae35dd16d343b958e74dac | 2021-05-24T17:47:03.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers"
]
| text-classification | false | Intel | null | Intel/bert-base-uncased-mnli-sparse-70-unstructured | 15 | null | transformers | 9,473 | ---
language: en
---
# Sparse BERT base model fine-tuned on MNLI (uncased)
A sparse BERT base model fine-tuned on the MNLI task (GLUE Benchmark), starting from [bert-base-uncased-sparse-70-unstructured](https://huggingface.co/Intel/bert-base-uncased-sparse-70-unstructured).
<br><br>
Note: This model requires `transformers==2.10.0`
## Evaluation Results
Matched: 82.5%
Mismatched: 83.3%
This model can be further fine-tuned on other tasks, achieving the following evaluation results:
| Task | QQP (Acc/F1) | QNLI (Acc) | SST-2 (Acc) | STS-B (Pears/Spear) | SQuADv1.1 (Acc/F1) |
|------|--------------|------------|-------------|---------------------|--------------------|
| | 90.2/86.7 | 90.3 | 91.5 | 88.9/88.6 | 80.5/88.2 |
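A minimal premise/hypothesis inference sketch, written with `encode_plus` so it stays compatible with the `transformers==2.10.0` requirement noted above; the label order in the comment is the common MNLI convention and should be verified against `model.config.id2label`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Intel/bert-base-uncased-mnli-sparse-70-unstructured"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer.encode_plus(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]

# Assumed label order (contradiction, neutral, entailment); check model.config.id2label.
print(torch.softmax(logits, dim=-1)[0])
```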
|
Irina/trans_cyoa_GPT3Medium | 286d7f6c9495a8d3a9a78b4d05e08a04ac4fc59f | 2021-11-14T16:58:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Irina | null | Irina/trans_cyoa_GPT3Medium | 15 | null | transformers | 9,474 | Entry not found |
Itcast/bert-base-cnc | 775a1b4495ea1f36cc6abe9b0f5d819fc445aa22 | 2021-05-18T21:09:34.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Itcast | null | Itcast/bert-base-cnc | 15 | null | transformers | 9,475 | Entry not found |
JAlexis/PruebaBert | e4700db931182e3f15c035445754705fcf878437 | 2022-02-25T13:58:51.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad2",
"dataset:cord19",
"transformers",
"autotrain_compatible"
]
| question-answering | false | JAlexis | null | JAlexis/PruebaBert | 15 | null | transformers | 9,476 | ---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
- cord19
metrics:
- f1
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19)."
- text: "How can I protect myself against covid-19?"
context: " "
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/PruebaBert"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
    # one question/context pair (the duplicate dict keys in the original example silently overwrote the context)
    'question': 'How can I protect myself against covid-19?',
    'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19).',
}
nlp(inputs)
```
## Overview
```
Language model: deepset/bert-base-cased-squad2
Language: English
Downstream-task: Q&A
Datasets: CORD-19 from 31st January 2022
Code: Haystack and FARM
Infrastructure: Tesla T4
```
## Hyperparameters
```
batch_size = 8
n_epochs = 9
max_seq_len = max_length
learning_rate = AdamW: 1e-5
```
|
JonatanGk/roberta-base-ca-finetuned-cyberbullying-catalan | 7ad8e3da1280e25299b4f8137afb35a3a4b9e7cd | 2021-10-10T09:50:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"ca",
"transformers",
"catalan"
]
| text-classification | false | JonatanGk | null | JonatanGk/roberta-base-ca-finetuned-cyberbullying-catalan | 15 | 1 | transformers | 9,477 | ---
language: ca
tags:
- "catalan"
metrics:
- accuracy
widget:
- text: "Ets més petita que un barrufet!!"
- text: "Ets tan lletja que et donaven de menjar per sota la porta."
---
# roberta-base-ca-finetuned-cyberbullying-catalan
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on a dataset generated by scraping social networks (Twitter, YouTube, ...) to detect cyberbullying in Catalan.
It achieves the following results on the evaluation set:
- Loss: 0.1508
- Accuracy: 0.9665
## Training and evaluation data
I used the concatenation of multiple datasets generated by scraping social networks (Twitter, YouTube, Discord, ...) to fine-tune this model. The total number of sentence pairs is above 410k. A similar method was used to train [roberta-base-bne-finetuned-cyberbullying-spanish](https://huggingface.co/JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish).
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-cyberbullying-catalan"  # matches this card's model id
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Des que et vaig veure m'en vaig enamorar de tu."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9996786117553711}]
bullying_analysis(
"Ets tan lletja que et donaven de menjar per sota la porta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9927878975868225}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(CATALAN).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/)
|
Lysa/subheading_generator_nl | 63b2ded0e36c0c1c0bce8f92012d97c47bafbb66 | 2021-06-11T21:15:39.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Lysa | null | Lysa/subheading_generator_nl | 15 | null | transformers | 9,478 | Entry not found |
MoritzLaurer/xtremedistil-l6-h256-mnli-fever-anli-ling-binary | 9fa230d1a8490cce78522f02ea154ad67f49ef70 | 2022-02-08T21:37:01.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:multi_nli",
"dataset:anli",
"dataset:fever",
"dataset:lingnli",
"arxiv:2104.07179",
"transformers",
"zero-shot-classification"
]
| zero-shot-classification | false | MoritzLaurer | null | MoritzLaurer/xtremedistil-l6-h256-mnli-fever-anli-ling-binary | 15 | null | transformers | 9,479 | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
datasets:
- multi_nli
- anli
- fever
- lingnli
pipeline_tag: zero-shot-classification
---
# xtremedistil-l6-h256-mnli-fever-anli-ling-binary
## Model description
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". This is specifically designed for zero-shot classification, where the difference between "neutral" and "contradiction" is irrelevant.
The base model is [xtremedistil-l6-h256-uncased from Microsoft](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased).
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/xtremedistil-l6-h256-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
device = "cuda" if torch.cuda.is_available() else "cpu"  # pick a GPU when available
model.to(device)
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on 782 357 hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).
### Training procedure
xtremedistil-l6-h256-mnli-fever-anli-ling-binary was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI, ANLI, LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.
dataset | mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
--------|---------|----------|---------|----------|----------|------
accuracy | 0.897 | 0.898 | 0.861 | 0.607 | 0.62 | 0.827
speed (text/sec, GPU Tesla P100, 128 batch) | 1490 | 1485 | 760 | 1186 | 1062 | 1791
## Limitations and bias
Please consult the original paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that the model was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. |
NDugar/1epochv3 | 0d1323e84ae4a91f657975f83785f9afc66f2aa0 | 2021-11-30T20:05:36.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"deberta-v3",
"deberta-v2`",
"deberta-mnli",
"license:mit",
"zero-shot-classification"
]
| zero-shot-classification | false | NDugar | null | NDugar/1epochv3 | 15 | null | transformers | 9,480 | ---
language: en
tags:
- deberta-v3
- deberta-v2
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers, 1536 hidden size. The total parameters are 1.5B and it is trained with 160GB raw data.
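Since this checkpoint is tagged for zero-shot classification, a pipeline sketch is shown below; the model id comes from this card, while the input sentence and candidate labels are purely illustrative.
```python
from transformers import pipeline

# Zero-shot classification scores each candidate label as an NLI hypothesis.
classifier = pipeline("zero-shot-classification", model="NDugar/1epochv3")

result = classifier(
    "The battery drains within a couple of hours of light use.",
    candidate_labels=["battery", "display", "shipping"],  # illustrative labels
)
print(result["labels"][0], result["scores"][0])
```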
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune these tasks starting from [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it is faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 256 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 3e-6 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 10 \
--logging_dir $output_dir \
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
``` |
NYTK/summarization-nol-bart-hungarian | 7efa234c7d823f5fb1a2779cb2c642994717589c | 2022-02-14T13:27:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"hu",
"transformers",
"summarization",
"license:gpl",
"autotrain_compatible"
]
| summarization | false | NYTK | null | NYTK/summarization-nol-bart-hungarian | 15 | null | transformers | 9,481 | ---
language:
- hu
tags:
- summarization
license: gpl
metrics:
- rouge
widget:
- text: "A Tisza-parti város állatkertjében régóta tartanak szurikátákat ( Suricata suricatta ) , de tavaly tavaszig nem sikerült szaporítani őket , annak ellenére , hogy tágas ház és kifutó épült számukra - közölte Veprik Róbert igazgató . 2010-ben alakult ki az új - három Amszterdamból származó nőstényből és egy budapesti fiatal hímből álló - csapat , amely szaporodni kezdett . 2011-ben három , idén pedig egy utóddal örvendeztették meg a gondozókat és az állatbarátokat . A szurikáták utódai - tizenegy hetes vemhesség után - október és március között vakon és szőrtelenül jönnek a világra . A kicsinyek háromhetesen bújnak elő az üregből , és nevelésükben mindkét szülő részt vesz . A szurikátacsapatokban a család tagjai nagyon szoros kapcsolatban állnak egymással , viszont nagyon harciasan fellépnek az idegenekkel szemben , akár meg is ölhetik azt az állatot , amelyet betolakodónak tekintenek . Bár a Dél-Afrikában , a Kalahári sivatagban őshonos cibetmacskaféle ragadozókat a szegedi állatkertben természetes élőhelyükhöz képest kevesebb veszély fenyegeti , a vadasparki erdőben ragadozó madarak is élnek , amelyek akár zsákmányként is tekinthetnének a szurikátákra . A szegedi csapatnál azonban szigorú őrség van , mindig lesi valaki két lábra állva a veszélyforrásokat . Az őrszemek figyelmét még a sárkányrepülők is felkeltik , és felbukkanásakor valamennyi egyed biztos helyre menekül . A szurikáták a Kalahári sivatag bozótos , sziklás területein csapatokban élnek . A 700 gramm körüli testtömegű ragadozók rovarokkal , lárvákkal , skorpiókkal táplálkoznak , de néha elfogyasztják a kisebb gerinceseket , tojásokat és növényi gumókat is . A nappal aktív állatok földalatti üregrendszert ásnak , amelynek több bejárata is van . Ha a szurikáták idegen csapattal vagy ragadozóval kerülnek szembe , azonnal elkezdenek ásni , nagy porfelhőt kavarva . Az is gyakorta előfordul , hogy szorosan egymáshoz bújnak , felborzolják szőrüket , megnyújtják testüket , hogy minél nagyobbnak látszódjanak . Az előadásuk csúcspontján pedig az egész csapat a levegőbe ugrik , közben pedig morog . A hangadás egyébként is fontos a szurikáták kapcsolatában , az egyedek legalább tízféle jelzést használnak a kolónián belül ."
---
# Hungarian Abstractive Summarization BART model
For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- BART base model (see Results Table - bold):
- Pretrained on Webcorpus 2.0
    - Fine-tuned on the NOL corpus (nol.hu)
- Segments: 397,343
## Limitations
- tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy))
- max_source_length = 512
- max_target_length = 256
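For reference, a minimal generation sketch that respects the limits above; the model id is taken from this card, while the beam size and the placeholder input (which, per the limitations, should already be tokenized with HuSpaCy) are illustrative assumptions.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "NYTK/summarization-nol-bart-hungarian"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # HuSpaCy-tokenized Hungarian article text goes here
inputs = tokenizer(article, max_length=512, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=256, num_beams=4)  # beam size is illustrative
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```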
## Results
| Model | HI | NOL |
| ------------- | ------------- | ------------- |
| BART-base-512 | 30.18/13.86/22.92 | **46.48/32.40/39.45** |
| BART-base-1024| 31.86/14.59/23.79 | 47.01/32.91/39.97 |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {{Yang Zijian Győző}},
pages = {15--29}
}
``` |
NbAiLab/nb-roberta-base | 32d3881e0f3bc87f28330d66833a9e5915e8abce | 2021-12-01T07:42:23.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"no",
"transformers",
"norwegian",
"license:cc-by-4.0",
"autotrain_compatible"
]
| fill-mask | false | NbAiLab | null | NbAiLab/nb-roberta-base | 15 | null | transformers | 9,482 | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- roberta
pipeline_tag: fill-mask
widget:
- text: På biblioteket kan du <mask> en bok.
- text: Dette er et <mask> eksempel.
- text: Av og til kan en språkmodell gi et <mask> resultat.
- text: Som ansat får du <mask> for at bidrage til borgernes adgang til dansk kulturarv, til forskning og til samfundets demokratiske udvikling.
---
This is norwegian-roberta-base, but trained with a higher learning rate and a larger batch size. A minimal fill-mask sketch is shown below.
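The pipeline call below is only a sketch: the model id comes from this card and the example sentence is one of the widget texts above; nothing beyond a standard RoBERTa masked-LM head is assumed.
```python
from transformers import pipeline

# Load the masked-language-modelling head of this checkpoint.
fill = pipeline("fill-mask", model="NbAiLab/nb-roberta-base")

for candidate in fill("På biblioteket kan du <mask> en bok."):
    print(candidate["token_str"], round(candidate["score"], 3))
``` |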
OnsElleuch/logisgenerator | 097814d1541082ca59bb2fc6af093859597df233 | 2021-08-11T16:29:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WebNLG",
"dataset:Dart",
"transformers",
"keytotext",
"k2t",
"Keywords to Sentences",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | OnsElleuch | null | OnsElleuch/logisgenerator | 15 | null | transformers | 9,483 | ---
language: "en"
thumbnail: "Keywords to Sentences"
tags:
- keytotext
- k2t
- Keywords to Sentences
license: "MIT"
datasets:
- WebNLG
- Dart
metrics:
- NLG
model-index:
- name: logisgenerator
---
# keytotext
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/notebooks/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
[](https://github.com/gagan3012/keytotext#api)
[](https://hub.docker.com/r/gagan30/keytotext)
[](https://huggingface.co/models?filter=keytotext)
[](https://keytotext.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/psf/black)

The idea is to build a model that takes keywords as input and generates sentences as output. A minimal usage sketch is shown right below.
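The snippet is a sketch only: the model id comes from this card, but the keyword separator, the prompt format, and the generation settings are assumptions; the keytotext documentation linked above describes the exact input convention.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "OnsElleuch/logisgenerator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

keywords = ["delivery", "order", "delay"]
inputs = tokenizer(" | ".join(keywords), return_tensors="pt")  # separator is an assumption
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```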
Potential use cases include:
- Marketing
- Search Engine Optimization
- Topic generation etc.
- Fine-tuning of topic modeling models |
Rolv-Arild/xls-r-300m-npsc-4 | 00a6ac8768ca33d9a1956ebccbe0aabedf3a161a | 2022-02-04T16:36:33.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Rolv-Arild | null | Rolv-Arild/xls-r-300m-npsc-4 | 15 | null | transformers | 9,484 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-npsc-4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1957
- Wer: 0.1697
## Model description
More information needed
## Intended uses & limitations
More information needed
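For reference, a transcription sketch via the ASR pipeline; the model id comes from this card, the audio path is a placeholder, and the audio is assumed to be mono 16 kHz to match the training data.
```python
from transformers import pipeline

# Wav2Vec2 CTC checkpoints can be used directly through the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="Rolv-Arild/xls-r-300m-npsc-4")

print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder file
```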
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4527 | 0.28 | 250 | 4.0144 | 1.0 |
| 3.1828 | 0.56 | 500 | 3.1369 | 1.0 |
| 2.9927 | 0.85 | 750 | 3.0183 | 1.0 |
| 2.9591 | 1.13 | 1000 | 2.9991 | 1.0 |
| 2.8989 | 1.41 | 1250 | 2.9000 | 1.0000 |
| 2.4286 | 1.69 | 1500 | 1.7688 | 0.9550 |
| 1.6765 | 1.98 | 1750 | 0.6842 | 0.4855 |
| 1.4521 | 2.26 | 2000 | 0.5096 | 0.3736 |
| 1.3589 | 2.54 | 2250 | 0.4479 | 0.3335 |
| 1.3136 | 2.82 | 2500 | 0.4056 | 0.3123 |
| 1.2856 | 3.11 | 2750 | 0.3870 | 0.2987 |
| 1.2283 | 3.39 | 3000 | 0.3646 | 0.2828 |
| 1.2053 | 3.67 | 3250 | 0.3499 | 0.2748 |
| 1.2087 | 3.95 | 3500 | 0.3345 | 0.2603 |
| 1.2002 | 4.24 | 3750 | 0.3320 | 0.2523 |
| 1.1383 | 4.52 | 4000 | 0.3117 | 0.2439 |
| 1.1364 | 4.8 | 4250 | 0.3198 | 0.2383 |
| 1.158 | 5.08 | 4500 | 0.3071 | 0.2342 |
| 1.108 | 5.37 | 4750 | 0.3011 | 0.2314 |
| 1.1025 | 5.65 | 5000 | 0.2875 | 0.2289 |
| 1.0697 | 5.93 | 5250 | 0.2926 | 0.2256 |
| 1.0904 | 6.21 | 5500 | 0.2695 | 0.2245 |
| 1.0802 | 6.5 | 5750 | 0.2602 | 0.2189 |
| 1.0882 | 6.78 | 6000 | 0.2603 | 0.2168 |
| 1.0881 | 7.06 | 6250 | 0.2540 | 0.2293 |
| 1.0378 | 7.34 | 6500 | 0.2614 | 0.2193 |
| 1.0397 | 7.63 | 6750 | 0.2707 | 0.2104 |
| 1.0296 | 7.91 | 7000 | 0.2483 | 0.2119 |
| 1.0249 | 8.19 | 7250 | 0.2483 | 0.2047 |
| 1.013 | 8.47 | 7500 | 0.2487 | 0.2042 |
| 1.0064 | 8.76 | 7750 | 0.2456 | 0.2016 |
| 1.0668 | 9.04 | 8000 | 0.2397 | 0.1995 |
| 1.0129 | 9.32 | 8250 | 0.2374 | 0.1994 |
| 1.0164 | 9.6 | 8500 | 0.2206 | 0.1992 |
| 0.975 | 9.89 | 8750 | 0.2247 | 0.1973 |
| 0.9849 | 10.17 | 9000 | 0.2325 | 0.1953 |
| 0.9826 | 10.45 | 9250 | 0.2301 | 0.1934 |
| 0.9835 | 10.73 | 9500 | 0.2192 | 0.1942 |
| 0.9676 | 11.02 | 9750 | 0.2266 | 0.1913 |
| 0.9627 | 11.3 | 10000 | 0.2193 | 0.1921 |
| 0.976 | 11.58 | 10250 | 0.2309 | 0.1882 |
| 0.969 | 11.86 | 10500 | 0.2268 | 0.1886 |
| 0.9611 | 12.15 | 10750 | 0.2322 | 0.1863 |
| 0.9397 | 12.43 | 11000 | 0.2197 | 0.1844 |
| 0.9601 | 12.71 | 11250 | 0.2211 | 0.1871 |
| 0.9718 | 12.99 | 11500 | 0.2079 | 0.1898 |
| 0.9347 | 13.28 | 11750 | 0.2054 | 0.1843 |
| 0.9377 | 13.56 | 12000 | 0.2031 | 0.1842 |
| 0.934 | 13.84 | 12250 | 0.2059 | 0.1806 |
| 0.9295 | 14.12 | 12500 | 0.2122 | 0.1861 |
| 0.935 | 14.41 | 12750 | 0.2072 | 0.1787 |
| 0.9021 | 14.69 | 13000 | 0.2105 | 0.1781 |
| 0.9193 | 14.97 | 13250 | 0.2035 | 0.1786 |
| 0.9214 | 15.25 | 13500 | 0.2035 | 0.1766 |
| 0.9048 | 15.54 | 13750 | 0.1964 | 0.1758 |
| 0.9006 | 15.82 | 14000 | 0.1984 | 0.1757 |
| 0.9027 | 16.1 | 14250 | 0.2022 | 0.1743 |
| 0.9083 | 16.38 | 14500 | 0.1969 | 0.1744 |
| 0.9761 | 16.67 | 14750 | 0.1963 | 0.1728 |
| 0.9311 | 16.95 | 15000 | 0.1960 | 0.1737 |
| 0.886 | 17.23 | 15250 | 0.1929 | 0.1726 |
| 0.8969 | 17.51 | 15500 | 0.1928 | 0.1734 |
| 0.9084 | 17.8 | 15750 | 0.1937 | 0.1713 |
| 0.8795 | 18.08 | 16000 | 0.1978 | 0.1709 |
| 0.8883 | 18.36 | 16250 | 0.1956 | 0.1703 |
| 0.8901 | 18.64 | 16500 | 0.1933 | 0.1705 |
| 0.8922 | 18.93 | 16750 | 0.1962 | 0.1711 |
| 0.8765 | 19.21 | 17000 | 0.1962 | 0.1711 |
| 0.8992 | 19.49 | 17250 | 0.1965 | 0.1703 |
| 0.8778 | 19.77 | 17500 | 0.1957 | 0.1699 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1
- Tokenizers 0.11.0
|
SEBIS/code_trans_t5_small_source_code_summarization_python | 094368e02efa964e165d7d6391de61b2230acd7e | 2021-06-23T10:22:03.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_python | 15 | null | transformers | 9,485 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization python dataset.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/python/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
Sam2021/xlm_rober_base_finetuned_squd_v1 | c90b0ab25f74efdd59c583c055a6f9815ee56ff4 | 2021-08-15T16:06:19.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Sam2021 | null | Sam2021/xlm_rober_base_finetuned_squd_v1 | 15 | null | transformers | 9,486 | Entry not found |
SarahhhUwU/DialoGPT-small-ally | 04f52e21b9562bbdc68f1c43e10a4731c8a94f71 | 2021-08-25T08:36:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | SarahhhUwU | null | SarahhhUwU/DialoGPT-small-ally | 15 | null | transformers | 9,487 | ---
tags:
- conversational
---
#Ally DialoGPT Model |
UBC-NLP/IndT5 | c0db442ef5b0f8d0ef5faf3a6584f91d019ae413 | 2021-08-30T22:03:01.000Z | [
"pytorch",
"t5",
"transformers"
]
| null | false | UBC-NLP | null | UBC-NLP/IndT5 | 15 | null | transformers | 9,488 | # IndT5: A Text-to-Text Transformer for 10 Indigenous Languages
<img src="https://huggingface.co/UBC-NLP/IndT5/raw/main/IND_langs_large7.png" alt="drawing" width="45%" height="45%" align="right"/>
In this work, we introduce IndT5, the first Transformer language model for Indigenous languages. To train IndT5, we build IndCorpus, a new corpus for 10 Indigenous languages and Spanish.
# IndT5
We train an Indigenous language model adopting the unified and flexible
text-to-text transfer Transformer (T5) approach. T5 treats every
text-based language task as a "text-to-text" problem, taking text format
as input and producing new text format as output. T5 is essentially an
encoder-decoder Transformer, with the encoder and decoder similar in
configuration and size to a BERT<sub>Base</sub> but with some
architectural modifications. Modifications include applying a
normalization layer before a sub-block and adding a pre-norm (i.e.,
initial input to the sub-block output).
# IndCourpus
We build IndCorpus, a collection of 10 Indigenous languages and Spanish comprising 1.17GB of text, drawn from both Wikipedia and the Bible.
### Data size and number of sentences in monolingual dataset (collected from Wikipedia and Bible)
| **Target Language** | **Wiki Size (MB)** | **Wiki #Sentences** | **Bible Size (MB)** | **Bible #Sentences**|
|-------------------|------------------|-------------------|------------------------|-|
|Hñähñu | - | - | 1.4 | 7.5K |
|Wixarika | - | - | 1.3 | 7.5K|
|Nahuatl | 5.8 | 61.1K | 1.5 | 7.5K|
|Guarani | 3.7 | 28.2K | 1.3 | 7.5K |
|Bribri | - | - | 1.5 | 7.5K |
|Rarámuri | - | - | 1.9 | 7.5K |
|Quechua | 5.9 | 97.3K | 4.9 | 31.1K |
|Aymara | 1.7 | 32.9K | 5 | 30.7K|
|Shipibo-Konibo | - | - | 1 | 7.9K |
|Asháninka | - | - | 1.4 | 7.8K |
|Spanish | 1.13K | 5M | - | - |
|Total | 1.15K | 5.22M | 19.8 | 125.3K|
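The released checkpoint can be loaded with the standard T5 classes; the snippet below is only a loading sketch (IndT5 is a pretrained LM, so downstream use would normally require task-specific fine-tuning, the placeholder input is illustrative, and it is assumed the repo ships a SentencePiece tokenizer alongside the weights).
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_id = "UBC-NLP/IndT5"
tokenizer = T5Tokenizer.from_pretrained(model_id)  # assumes a SentencePiece model is included
model = T5ForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("...", return_tensors="pt")  # placeholder input text
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```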
# Github
More details about our model can be found here: https://github.com/UBC-NLP/IndT5
# BibTex
```bibtex
@inproceedings{nagoudi-etal-2021-indt5,
title = "{I}nd{T}5: A Text-to-Text Transformer for 10 Indigenous Languages",
author = "Nagoudi, El Moatez Billah and Chen, Wei-Rui and Abdul-Mageed, Muhammad and Cavusoglu, Hasan",
booktitle = "Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.americasnlp-1.30",
doi = "10.18653/v1/2021.americasnlp-1.30",
pages = "265--271"
}
```
|
Violeta/ArmBERTa_Model | 8db5dcefa79cd31a41626d399361c63606c7cb3a | 2021-05-20T12:31:26.000Z | [
"pytorch",
"jax",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | Violeta | null | Violeta/ArmBERTa_Model | 15 | null | transformers | 9,489 | Entry not found |
Yanjie/message-intent | 86855f912587f2aca42207022aba888bd207a222 | 2022-03-21T18:08:08.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | Yanjie | null | Yanjie/message-intent | 15 | 1 | transformers | 9,490 | This is the concierge intent model, fine-tuned from the DistilBERT uncased model. |
ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation | 908433206be097de9d1303b23679c5c3eff296cf | 2021-12-14T17:48:57.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:squad",
"transformers",
"Question Generation",
"autotrain_compatible"
]
| text2text-generation | false | ZhangCheng | null | ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation | 15 | 1 | transformers | 9,491 | ---
language: en
datasets:
- squad
tags:
- Question Generation
widget:
- text: "<answer> T5v1.1 <context> Cheng fine-tuned T5v1.1 on SQuAD for question generation."
example_title: "Example 1"
- text: "<answer> SQuAD <context> Cheng fine-tuned T5v1.1 on SQuAD dataset for question generation."
example_title: "Example 2"
- text: "<answer> thousands <context> Transformers provides thousands of pre-trained models to perform tasks on different modalities such as text, vision, and audio."
example_title: "Example 3"
---
# T5v1.1-Base Fine-Tuned on SQuAD for Question Generation
### Model in Action:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
trained_model_path = 'ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation'
trained_tokenizer_path = 'ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation'
class QuestionGeneration:
def __init__(self):
self.model = T5ForConditionalGeneration.from_pretrained(trained_model_path)
self.tokenizer = T5Tokenizer.from_pretrained(trained_tokenizer_path)
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.model = self.model.to(self.device)
self.model.eval()
def generate(self, answer:str, context:str):
input_text = '<answer> %s <context> %s ' % (answer, context)
encoding = self.tokenizer.encode_plus(
input_text,
return_tensors='pt'
)
input_ids = encoding['input_ids'].to(self.device)
attention_mask = encoding['attention_mask'].to(self.device)
outputs = self.model.generate(
input_ids = input_ids,
attention_mask = attention_mask
)
question = self.tokenizer.decode(
outputs[0],
skip_special_tokens = True,
clean_up_tokenization_spaces = True
)
return {'question': question, 'answer': answer}
if __name__ == "__main__":
context = 'ZhangCheng fine-tuned T5v1.1 on SQuAD dataset for question generation.'
answer = 'ZhangCheng'
QG = QuestionGeneration()
qa = QG.generate(answer, context)
print(qa['question'])
# Output:
# Who fine-tuned T5v1.1 on SQuAD?
```
|
aditi2222/automatic_title_generation | 07b9632d71a7d6d9dcb8a00e9eac478db96c6116 | 2022-01-23T18:01:45.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | aditi2222 | null | aditi2222/automatic_title_generation | 15 | null | transformers | 9,492 | Entry not found |
airKlizz/distilbart-12-3-multi-combine-wiki-news | fc2460258985430c8a754644b0a8b81ebcee7eb7 | 2020-08-26T10:25:17.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | airKlizz | null | airKlizz/distilbart-12-3-multi-combine-wiki-news | 15 | null | transformers | 9,493 | Entry not found |
airKlizz/gbert-base-germeval21-toxic | 8fb266deb431a8fedb9845f6f249bb4f5cfafcb9 | 2021-07-12T17:45:46.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | airKlizz | null | airKlizz/gbert-base-germeval21-toxic | 15 | null | transformers | 9,494 | Entry not found |
akdeniz27/mbert-base-albanian-cased-ner | 9ac7ea70630ba1190fd53bf34cf3572cb692364a | 2021-10-20T10:03:06.000Z | [
"pytorch",
"bert",
"token-classification",
"sq",
"transformers",
"autotrain_compatible"
]
| token-classification | false | akdeniz27 | null | akdeniz27/mbert-base-albanian-cased-ner | 15 | 1 | transformers | 9,495 | ---
language: sq
widget:
- text: "Varianti AY.4.2 është më i lehtë për t'u transmetuar, thotë Francois Balu, drejtor i Institutit të Gjenetikës në Londër."
---
# Albanian Named Entity Recognition (NER) Model
This model is a fine-tuned version of "bert-base-multilingual-cased" on the well-known WikiANN dataset presented in the "Cross-lingual Name Tagging and Linking for 282 Languages" [paper](https://aclanthology.org/P17-1178.pdf).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "bert-base-multilingual-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("akdeniz27/mbert-base-albanian-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/mbert-base-albanian-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
```
Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the `aggregation_strategy` parameter.
# Reference test results:
* accuracy: 0.9719268816143276
* f1: 0.9192366826444787
* precision: 0.9171629669734704
* recall: 0.9213197969543148
|
allenai/unifiedqa-v2-t5-small-1363200 | e8b5669a71c0f8fb7560e9a967c37470d204bb4c | 2022-02-21T23:12:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | allenai | null | allenai/unifiedqa-v2-t5-small-1363200 | 15 | null | transformers | 9,496 | # Further details: https://github.com/allenai/unifiedqa

A minimal usage sketch is given below.
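The sketch assumes this checkpoint follows the same input convention as the other UnifiedQA T5 releases documented in the repository above (question and context joined by `\n` in a single lowercased string); the example question is illustrative.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "allenai/unifiedqa-v2-t5-small-1363200"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

def run_model(input_string, **generator_args):
    # UnifiedQA expects "question \n context" as one lowercased string.
    input_ids = tokenizer.encode(input_string.lower(), return_tensors="pt")
    output_ids = model.generate(input_ids, **generator_args)
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)

print(run_model("which is heavier, a feather or an iron bar? \n (a) feather (b) iron bar"))
``` |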
andi611/distilbert-base-uncased-ner-agnews | 2832236d07511bab77ecc7ab3662bb7c046efbc3 | 2021-08-02T01:25:13.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:ag_news",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | false | andi611 | null | andi611/distilbert-base-uncased-ner-agnews | 15 | 1 | transformers | 9,497 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
model_index:
- name: distilbert-base-uncased-agnews
results:
- dataset:
name: ag_news
type: ag_news
args: default
metric:
name: Accuracy
type: accuracy
value: 0.9473684210526315
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-agnews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1652
- Accuracy: 0.9474
## Model description
More information needed
## Intended uses & limitations
More information needed
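For reference, a minimal inference sketch (the model id comes from this card; the example headline is illustrative and the returned label follows the checkpoint's own `id2label` mapping).
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="andi611/distilbert-base-uncased-ner-agnews")
print(classifier("Stocks rallied after the central bank left interest rates unchanged."))
```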
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1916 | 1.0 | 3375 | 0.1741 | 0.9412 |
| 0.123 | 2.0 | 6750 | 0.1631 | 0.9483 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
baykenney/bert-large-gpt2detector-topk40 | 90c370d72632ab59769908a16adbcf021ea00fe8 | 2021-05-19T12:19:13.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | baykenney | null | baykenney/bert-large-gpt2detector-topk40 | 15 | null | transformers | 9,498 | Entry not found |
bergum/xtremedistil-l6-h384-emotion | f7116c84b9db139008882ed1620eaf0ff520e6a8 | 2022-07-14T08:30:26.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | bergum | null | bergum/xtremedistil-l6-h384-emotion | 15 | null | transformers | 9,499 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: xtremedistil-l6-h384-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
---
# xtremedistil-l6-h384-emotion
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.928
This model can be quantized to int8 while retaining most of its accuracy:
- Accuracy: 0.912
<pre>
import transformers
import transformers.convert_graph_to_onnx as onnx_convert
from pathlib import Path
pipeline = transformers.pipeline("text-classification",model=model,tokenizer=tokenizer)
onnx_convert.convert_pytorch(pipeline, opset=11, output=Path("xtremedistil-l6-h384-emotion.onnx"), use_external_format=False)
from onnxruntime.quantization import quantize_dynamic, QuantType
quantize_dynamic("xtremedistil-l6-h384-emotion.onnx", "xtremedistil-l6-h384-emotion-int8.onnx",
weight_type=QuantType.QUInt8)
</pre>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- num_epochs: 14
### Training results
<pre>
Epoch Training Loss Validation Loss Accuracy
1 No log 0.960511 0.689000
2 No log 0.620671 0.824000
3 No log 0.435741 0.880000
4 0.797900 0.341771 0.896000
5 0.797900 0.294780 0.916000
6 0.797900 0.250572 0.918000
7 0.797900 0.232976 0.924000
8 0.277300 0.216347 0.924000
9 0.277300 0.202306 0.930500
10 0.277300 0.192530 0.930000
11 0.277300 0.192500 0.926500
12 0.181700 0.187347 0.928500
13 0.181700 0.185896 0.929500
14 0.181700 0.185154 0.928000
</pre> |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.