modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
l3cube-pune/hing-roberta | 2413811a1748c9b290a150b453056f2ac8aff498 | 2022-06-26T15:12:18.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"hi",
"en",
"dataset:L3Cube-HingCorpus",
"arxiv:2204.08398",
"transformers",
"codemix",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | l3cube-pune | null | l3cube-pune/hing-roberta | 4 | null | transformers | 19,100 | ---
license: cc-by-4.0
language:
- hi
- en
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---
## HingRoBERTa
HingRoBERTa is a Hindi-English code-mixed RoBERTa model trained on Roman-script text. It is an XLM-RoBERTa model fine-tuned on L3Cube-HingCorpus.
<br>
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).
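A minimal usage sketch (not part of the original card; the example sentence is illustrative and `<mask>` is assumed to be the underlying XLM-RoBERTa mask token):
```python
from transformers import pipeline

# Fill-mask inference with the Hindi-English code-mixed model
unmasker = pipeline("fill-mask", model="l3cube-pune/hing-roberta")
print(unmasker("yeh film bahut <mask> hai"))
```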
```
@InProceedings{nayak-joshi:2022:WILDRE6,
author = {Nayak, Ravindra and Joshi, Raviraj},
title = {L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {7--12}
}
``` |
azaninello/distilgpt2-finetuned-shroomstoy | 72cd4392f88fa875763138be9faf9c4ab6fd53bd | 2022-03-04T19:13:30.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | azaninello | null | azaninello/distilgpt2-finetuned-shroomstoy | 4 | null | transformers | 19,101 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-shroomstoy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-shroomstoy
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0958
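A minimal generation sketch (not part of the original card; the prompt and generation settings are illustrative):
```python
from transformers import pipeline

# Text generation with the fine-tuned DistilGPT-2 checkpoint
generator = pipeline("text-generation", model="azaninello/distilgpt2-finetuned-shroomstoy")
print(generator("Mushrooms are", max_length=40, num_return_sequences=1))
```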
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 10 | 4.1207 |
| No log | 2.0 | 20 | 4.1009 |
| No log | 3.0 | 30 | 4.0958 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
nielsr/segformer-b0-finetuned-segments-sidewalk | 3805bb23b5a8283ad0c21863b7dea9ebb2969ab5 | 2022-03-05T09:39:11.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-segmentation | false | nielsr | null | nielsr/segformer-b0-finetuned-segments-sidewalk | 4 | null | transformers | 19,102 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5679
- Miou: 0.2769
- Macc: 0.3331
- Overall Accuracy: 0.8424
- Per Category Iou: [nan, 0.7174911859423314, 0.8790751054409742, 0.6065232798410057, 0.6975274018055722, 0.3486407385349508, nan, 0.40093167116703843, 0.28779837903852556, 0.0, 0.7870339041746186, 0.0, 0.0, 0.0, 0.0, 0.1464360606454247, 0.0, 0.0, 0.6770283275082656, 0.0, 0.338555175257431, 0.14697310016578427, 0.0, nan, 0.0, 0.27163002251763635, 0.0, 0.0, 0.8257437911843676, 0.7169333376341568, 0.9108105550493353, 0.0, 0.0, 0.1016801552778885, 0.0]
- Per Category Accuracy: [nan, 0.9199960254104915, 0.9327745517652714, 0.7304629327758765, 0.7378309547498484, 0.45295941407150275, nan, 0.5188608021128075, 0.5327441812670195, 0.0, 0.9353764765979435, 0.0, 0.0, 0.0, 0.0, 0.1588525415198792, 0.0, 0.0, 0.9238854794385364, 0.0, 0.4400394213522207, 0.15130051149615126, 0.0, nan, 0.0, 0.3570096986572905, 0.0, 0.0, 0.9359897980968498, 0.8570458108260572, 0.9549583230619891, 0.0, 0.0, 0.11786971668879294, 0.0]
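A minimal inference sketch (not from the original card; it assumes the checkpoint bundles a SegFormer feature extractor and follows the standard `SegformerForSemanticSegmentation` API; the image URL is only a placeholder):
```python
import torch
import requests
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

model_id = "nielsr/segformer-b0-finetuned-segments-sidewalk"
feature_extractor = SegformerFeatureExtractor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

# Any RGB street-scene image works; this URL is only a placeholder
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # shape: (batch, num_labels, height/4, width/4)
segmentation = logits.argmax(dim=1)[0]  # per-pixel class indices
```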
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Miou | Macc | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.357 | 1.0 | 400 | 1.0006 | 0.1632 | 0.2069 | 0.7524 | [nan, 0.5642795884663824, 0.7491853309192827, 0.0, 0.40589649630192104, 0.02723606910696284, nan, 0.0002207740938439576, 0.0, 0.0, 0.6632462867093903, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5671699281129761, 0.0, 0.0009207911027492868, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.7507253434892517, 0.6157793573905029, 0.8774768871968204, 0.0, 0.0, 0.0, 0.0] | [nan, 0.6839993330882016, 0.9786792586618772, 0.0, 0.4818162160949784, 0.02785198456498826, nan, 0.00022133459131411787, 0.0, 0.0, 0.9043689536433023, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8606078323791991, 0.0, 0.0009210330367246509, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.895198618615298, 0.8549807032886052, 0.9328734839751688, 0.0, 0.0, 0.0, 0.0] |
| 1.6346 | 2.0 | 800 | 0.7856 | 0.1903 | 0.2334 | 0.7917 | [nan, 0.6276046255936906, 0.8379492348238635, 0.0, 0.5220035981992285, 0.19441920935217594, nan, 0.16135703555333, 0.0, 0.0, 0.7357165628674137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.567598980063164, 0.0, 0.07867871139133086, 0.0, 0.0, nan, 0.0, 0.02123705398363847, 0.0, 0.0, 0.7917172051343153, 0.6589515948064048, 0.8916684207946344, 0.0, 0.0, 0.00013685918191589503, 0.0] | [nan, 0.8610263337355926, 0.9499345560017969, 0.0, 0.5908796687797819, 0.2144081438468206, nan, 0.1813236746419022, 0.0, 0.0, 0.8825551027577866, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9239907140298015, 0.0, 0.08495225520298297, 0.0, 0.0, nan, 0.0, 0.021302829364985724, 0.0, 0.0, 0.9258397010509258, 0.8834861376443207, 0.9489131468773239, 0.0, 0.0, 0.0001372777815910495, 0.0] |
| 0.659 | 3.0 | 1200 | 0.6798 | 0.2215 | 0.2687 | 0.8107 | [nan, 0.6728474586764454, 0.8404607924530816, 0.21147709475332813, 0.5407350347311378, 0.23535489130104167, nan, 0.3087159264982809, 0.0060319580742948155, 0.0, 0.7331305064022374, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6378031991744924, 0.0, 0.35289337122777764, 6.24997656258789e-05, 0.0, nan, 0.0, 0.14698390926256938, 0.0, 0.0, 0.8019042204623998, 0.669283249725758, 0.8928145424856038, 0.0, 0.0, 0.03847722460691187, 0.0] | [nan, 0.866012011452706, 0.9627112260298595, 0.21236715482371135, 0.5645869262075475, 0.2750610095322395, nan, 0.3857655597748765, 0.0060319580742948155, 0.0, 0.939196440844118, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8380282443529743, 0.0, 0.5749902063170915, 6.256068386334744e-05, 0.0, nan, 0.0, 0.1605725590139305, 0.0, 0.0, 0.9212803460870584, 0.8870298583701837, 0.959700359744241, 0.0, 0.0, 0.04453994364914478, 0.0] |
| 0.5481 | 4.0 | 1600 | 0.5999 | 0.2522 | 0.2998 | 0.8312 | [nan, 0.7078353465279917, 0.8661728761172196, 0.3857324719136883, 0.6338278880825696, 0.3440050078187208, nan, 0.35980405625532347, 0.23875867241702606, 0.0, 0.773703347865372, 0.0, 0.0, 0.0, 0.0, 0.0004931363471679884, 0.0, 0.0, 0.6554146448850521, 0.0, 0.367673493717809, 0.03089804641909161, 0.0, nan, 0.0, 0.21529017459808872, 0.0, 0.0, 0.818951849158376, 0.7007504838794707, 0.9053929635423027, 0.0, 0.0, 0.06626212301200333, 0.0] | [nan, 0.8955207784307155, 0.9536263694097721, 0.39712577675621036, 0.6989299616008556, 0.4248959179453637, nan, 0.42984959564233455, 0.26168627652468784, 0.0, 0.9055166364779607, 0.0, 0.0, 0.0, 0.0, 0.0004932058379466533, 0.0, 0.0, 0.8632164276000204, 0.0, 0.6365580872107307, 0.031401709658368616, 0.0, nan, 0.0, 0.2497286263775161, 0.0, 0.0, 0.9296676429517725, 0.8858954297713482, 0.9555756265860916, 0.0, 0.0, 0.0750792276952902, 0.0] |
| 0.7855 | 5.0 | 2000 | 0.5679 | 0.2769 | 0.3331 | 0.8424 | [nan, 0.7174911859423314, 0.8790751054409742, 0.6065232798410057, 0.6975274018055722, 0.3486407385349508, nan, 0.40093167116703843, 0.28779837903852556, 0.0, 0.7870339041746186, 0.0, 0.0, 0.0, 0.0, 0.1464360606454247, 0.0, 0.0, 0.6770283275082656, 0.0, 0.338555175257431, 0.14697310016578427, 0.0, nan, 0.0, 0.27163002251763635, 0.0, 0.0, 0.8257437911843676, 0.7169333376341568, 0.9108105550493353, 0.0, 0.0, 0.1016801552778885, 0.0] | [nan, 0.9199960254104915, 0.9327745517652714, 0.7304629327758765, 0.7378309547498484, 0.45295941407150275, nan, 0.5188608021128075, 0.5327441812670195, 0.0, 0.9353764765979435, 0.0, 0.0, 0.0, 0.0, 0.1588525415198792, 0.0, 0.0, 0.9238854794385364, 0.0, 0.4400394213522207, 0.15130051149615126, 0.0, nan, 0.0, 0.3570096986572905, 0.0, 0.0, 0.9359897980968498, 0.8570458108260572, 0.9549583230619891, 0.0, 0.0, 0.11786971668879294, 0.0] |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
ietz/token-paraphrase-MiniLM-L6-v2 | 248c8e845621ffd5b8c92d3207da3f2083237b8a | 2022-05-01T19:28:23.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | ietz | null | ietz/token-paraphrase-MiniLM-L6-v2 | 4 | null | transformers | 19,103 | ---
license: apache-2.0
---
|
mp6kv/feedback_intent_test | 43fb1d401b619276accff0080a61434b9e8aebbf | 2022-03-24T18:42:11.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mp6kv | null | mp6kv/feedback_intent_test | 4 | null | transformers | 19,104 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: feedback_intent_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# feedback_intent_test
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
Custom data was generated by labeling text according to these three categories:
- Positive : Encouraging the student that they are correct and on the right track
- Neutral : Mixed feedback or feedback that asks for more information
- Negative : Informing the student they need to change direction or that they are not correct
It takes a user input string and classifies it into one of the three categories.
## Intended uses & limitations
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mp6kv/feedback_intent_test")
output = classifier("great job, you're getting it!")
score = output[0]['score']
label = output[0]['label']
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Britain/DialoGPT-small-DanyBotTwo | bd02fb5aa9c36d721e181801f41a1ccf79b22551 | 2022-03-06T07:05:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Britain | null | Britain/DialoGPT-small-DanyBotTwo | 4 | null | transformers | 19,105 | ---
tags:
- conversational
---
# DanyBot |
nepp1d0/smiles-target-interaction | 39985d394cb0f47eb45038f16d82b7473c9a69f4 | 2022-03-10T21:31:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | nepp1d0 | null | nepp1d0/smiles-target-interaction | 4 | null | transformers | 19,106 | Entry not found |
sanchit-gandhi/wav2vec2-2-rnd | 6d4ae6edb3d69e5c8a1b73e72c0af8d804e90977 | 2022-03-08T22:30:32.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-rnd | 4 | null | transformers | 19,107 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9599
- Wer: 0.1442
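A minimal transcription sketch (not from the original card; it assumes the checkpoint ships the processor and tokenizer needed by the `automatic-speech-recognition` pipeline, and the audio path is a placeholder):
```python
from transformers import pipeline

# Seq2seq ASR: wav2vec2 encoder with a randomly initialised transformer decoder
asr = pipeline("automatic-speech-recognition", model="sanchit-gandhi/wav2vec2-2-rnd")
print(asr("sample.flac"))  # expects 16 kHz speech; path is a placeholder
```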
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.1431 | 1.68 | 1500 | 6.0870 | 1.4277 |
| 5.498 | 3.36 | 3000 | 5.5505 | 1.6318 |
| 3.575 | 5.04 | 4500 | 3.7856 | 0.6683 |
| 1.7532 | 6.73 | 6000 | 2.4603 | 0.3576 |
| 1.6379 | 8.41 | 7500 | 1.8847 | 0.2932 |
| 1.3145 | 10.09 | 9000 | 1.5027 | 0.2222 |
| 0.8389 | 11.77 | 10500 | 1.2637 | 0.1855 |
| 0.9239 | 13.45 | 12000 | 1.1424 | 0.1683 |
| 0.6666 | 15.13 | 13500 | 1.0562 | 0.1593 |
| 0.5258 | 16.82 | 15000 | 0.9911 | 0.1489 |
| 0.4733 | 18.5 | 16500 | 0.9599 | 0.1442 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
aytugkaya/distilbert-base-uncased-finetuned-emotion | 23b71949d773850c7857d16c803ec5b1913309ff | 2022-03-13T16:33:50.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | aytugkaya | null | aytugkaya/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 19,108 | Entry not found |
armageddon/roberta-large-squad2-covid-qa-deepset | c019efe9e88b8426fb6263a711a3c286b9fa60e7 | 2022-03-01T01:48:21.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:covid_qa_deepset",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | armageddon | null | armageddon/roberta-large-squad2-covid-qa-deepset | 4 | null | transformers | 19,109 | ---
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: covid_qa_analysis_roberta-large-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_roberta-large-squad2
This model is a fine-tuned version of [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) on the covid_qa_deepset dataset.
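A minimal usage sketch (not from the original card; the question and context are illustrative):
```python
from transformers import pipeline

# Extractive QA with the RoBERTa-large model fine-tuned on covid_qa_deepset
qa = pipeline("question-answering", model="armageddon/roberta-large-squad2-covid-qa-deepset")
result = qa(
    question="Which virus causes COVID-19?",
    context="COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).",
)
print(result["answer"], result["score"])
```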
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
malteos/scincl-wol | afba171adcb7b89c84026a8b9e97d786e1662a07 | 2022-03-07T10:43:21.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | malteos | null | malteos/scincl-wol | 4 | null | transformers | 19,110 | ---
license: mit
---
# SciNCL based on training data w/o SciDocs leakage.
See [malteos/scincl](https://huggingface.co/malteos/scincl) for more details. |
Manauu17/roberta_sentiments_es | a90e83edff9eaedbe2b12fc8541d777db6e3f7ce | 2022-03-07T20:10:33.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Manauu17 | null | Manauu17/roberta_sentiments_es | 4 | 1 | transformers | 19,111 | # roberta_sentiments_es, a Sentiment Analysis model for Spanish sentences
This is a RoBERTa-base model trained on ~58M tweets and fine-tuned for sentiment analysis. This model currently supports Spanish sentences.
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
import pandas as pd
from scipy.special import softmax
MODEL = 'Manauu17/roberta_sentiments_es_en'
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PyTorch
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
text = ['@usuario siempre es bueno la opinión de un playo',
'Bendito año el que me espera']
encoded_input = tokenizer(text, return_tensors='pt', padding=True, truncation=True)
output = model(**encoded_input)
scores = output[0].detach().numpy()
# TensorFlow
model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
text = ['La guerra no es buena para nadie.','Espero que mi jefe me de mañana libre']
encoded_input = tokenizer(text, return_tensors='tf', padding=True, truncation=True)
output = model(encoded_input)
scores = output[0].numpy()
# Results
labels_dict = {0: 'Negative', 1: 'Neutral', 2: 'Positive'}  # label mapping assumed from the example outputs below

def get_scores(model_output, labels_dict):
    scores = softmax(model_output, axis=1)  # row-wise softmax over the three classes
    frame = pd.DataFrame(scores, columns=labels_dict.values())
    return frame
```
Output:
```
# PyTorch
get_scores(scores, labels_dict).style.highlight_max(axis=1, color="green")
Negative Neutral Positive
0 0.000607 0.004851 0.906596
1 0.079812 0.006650 0.001484
# TensorFlow
get_scores(scores, labels_dict).style.highlight_max(axis=1, color="green")
Negative Neutral Positive
0 0.017030 0.008920 0.000667
1 0.000260 0.001695 0.971429
```
|
ctoraman/RoBERTa-TR-medium-bpe-16k | 0fd5b4ea5dbf3c768bfd2412c116285767ca43a8 | 2022-04-20T06:48:03.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-bpe-16k | 4 | null | transformers | 19,112 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium BPE 16k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is BPE. Vocabulary size is 16.7k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```python
from transformers import AutoModel, AutoModelForSequenceClassification, PreTrainedTokenizerFast

model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
daisyxie21/bert-base-uncased-8-50-0.01 | 345573f941ca44c61e9b6f48c987654910173e8f | 2022-03-09T02:13:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | daisyxie21 | null | daisyxie21/bert-base-uncased-8-50-0.01 | 4 | null | transformers | 19,113 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-8-50-0.01
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-8-50-0.01
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9219
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| No log | 1.0 | 400 | 0.9219 | 0.0 |
| 1.2047 | 2.0 | 800 | 1.8168 | 0.0 |
| 1.0707 | 3.0 | 1200 | 1.4474 | 0.0 |
| 1.0538 | 4.0 | 1600 | 1.5223 | 0.0 |
| 1.316 | 5.0 | 2000 | 0.8467 | 0.0 |
| 1.316 | 6.0 | 2400 | 1.0906 | 0.0 |
| 1.2739 | 7.0 | 2800 | 0.6851 | 0.0 |
| 1.1342 | 8.0 | 3200 | 1.3170 | 0.0 |
| 1.2572 | 9.0 | 3600 | 0.8870 | 0.0 |
| 1.0237 | 10.0 | 4000 | 1.3236 | 0.0 |
| 1.0237 | 11.0 | 4400 | 0.9025 | 0.0 |
| 0.9597 | 12.0 | 4800 | 0.7757 | 0.0 |
| 1.0946 | 13.0 | 5200 | 1.2551 | 0.0 |
| 1.0011 | 14.0 | 5600 | 1.1606 | 0.0 |
| 1.1111 | 15.0 | 6000 | 0.6040 | 0.0 |
| 1.1111 | 16.0 | 6400 | 1.4347 | 0.0 |
| 1.0098 | 17.0 | 6800 | 0.6218 | 0.0 |
| 1.0829 | 18.0 | 7200 | 0.4979 | 0.0 |
| 0.9131 | 19.0 | 7600 | 1.3040 | 0.0 |
| 0.879 | 20.0 | 8000 | 2.0309 | 0.0 |
| 0.879 | 21.0 | 8400 | 0.5150 | 0.0 |
| 0.9646 | 22.0 | 8800 | 0.4850 | 0.0 |
| 0.9625 | 23.0 | 9200 | 0.5076 | 0.0 |
| 0.9129 | 24.0 | 9600 | 1.1277 | 0.0 |
| 0.8839 | 25.0 | 10000 | 0.9403 | 0.0 |
| 0.8839 | 26.0 | 10400 | 1.6226 | 0.0 |
| 0.9264 | 27.0 | 10800 | 0.6049 | 0.0 |
| 0.7999 | 28.0 | 11200 | 0.9549 | 0.0 |
| 0.752 | 29.0 | 11600 | 0.6757 | 0.0 |
| 0.7675 | 30.0 | 12000 | 0.7320 | 0.0 |
| 0.7675 | 31.0 | 12400 | 0.8393 | 0.0 |
| 0.6887 | 32.0 | 12800 | 0.5977 | 0.0 |
| 0.7563 | 33.0 | 13200 | 0.4815 | 0.0 |
| 0.7671 | 34.0 | 13600 | 0.5457 | 0.0 |
| 0.7227 | 35.0 | 14000 | 0.7384 | 0.0 |
| 0.7227 | 36.0 | 14400 | 0.7749 | 0.0 |
| 0.7308 | 37.0 | 14800 | 0.4726 | 0.0 |
| 0.7191 | 38.0 | 15200 | 0.5069 | 0.0 |
| 0.6846 | 39.0 | 15600 | 0.4762 | 0.0 |
| 0.6151 | 40.0 | 16000 | 0.4738 | 0.0 |
| 0.6151 | 41.0 | 16400 | 0.5114 | 0.0 |
| 0.5982 | 42.0 | 16800 | 0.4866 | 0.0 |
| 0.6199 | 43.0 | 17200 | 0.4717 | 0.0 |
| 0.5737 | 44.0 | 17600 | 0.7651 | 0.0 |
| 0.5703 | 45.0 | 18000 | 0.8008 | 0.0 |
| 0.5703 | 46.0 | 18400 | 0.5391 | 0.0 |
| 0.5748 | 47.0 | 18800 | 0.5097 | 0.0 |
| 0.5297 | 48.0 | 19200 | 0.4731 | 0.0 |
| 0.4902 | 49.0 | 19600 | 0.4720 | 0.0 |
| 0.4955 | 50.0 | 20000 | 0.4748 | 0.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ScandinavianMrT/distilbert-SARC | 2d4aa446077f6026a51c09e0d74d979dad5b686e | 2022-03-09T10:29:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert-SARC | 4 | null | transformers | 19,114 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-SARC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-SARC
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4976
- eval_accuracy: 0.7590
- eval_runtime: 268.1875
- eval_samples_per_second: 753.782
- eval_steps_per_second: 47.113
- epoch: 1.0
- step: 50539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ctoraman/RoBERTa-TR-medium-word-7k | b8e36ac17b46309f19247663f0b09e1aa5eda0c3 | 2022-04-20T07:00:26.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-word-7k | 4 | null | transformers | 19,115 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium Word-level 7k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is Word-level, which means text is split by white space. Vocabulary size is 7.5k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```python
from transformers import AutoModel, AutoModelForSequenceClassification, PreTrainedTokenizerFast

model = AutoModel.from_pretrained([model_path])
#for sequence classification:
#model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])
tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
datarpit/toy-qa | cd848d6a2a8f631a0c4fd94be96baaee45406a4a | 2022-03-10T05:18:21.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | datarpit | null | datarpit/toy-qa | 4 | null | transformers | 19,116 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: toy-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toy-qa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2410
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9537 | 1.0 | 20415 | 0.2410 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
kyleinincubated/autonlp-cat333-624217911 | b20e6ae9fcb39f462634a5d9a8c12f8c1aa31cae | 2022-03-10T03:47:17.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:kyleinincubated/autonlp-data-cat333",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | kyleinincubated | null | kyleinincubated/autonlp-cat333-624217911 | 4 | null | transformers | 19,117 | ---
tags: autonlp
language: zh
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kyleinincubated/autonlp-data-cat333
co2_eq_emissions: 2.267288583123193
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 624217911
- CO2 Emissions (in grams): 2.267288583123193
## Validation Metrics
- Loss: 0.39670249819755554
- Accuracy: 0.9098901098901099
- Macro F1: 0.7398394202169645
- Micro F1: 0.9098901098901099
- Weighted F1: 0.9073329464119164
- Macro Precision: 0.7653753530396269
- Micro Precision: 0.9098901098901099
- Weighted Precision: 0.9096917983040914
- Macro Recall: 0.7382843728794468
- Micro Recall: 0.9098901098901099
- Weighted Recall: 0.9098901098901099
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kyleinincubated/autonlp-cat333-624217911
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kyleinincubated/autonlp-cat333-624217911", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kyleinincubated/autonlp-cat333-624217911", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
kyleinincubated/autonlp-cat33-624317932 | 1e8d0d3a9973b97caa31f3bf9a9cbf39d03dd53c | 2022-03-10T06:10:56.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"dataset:kyleinincubated/autonlp-data-cat33",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | kyleinincubated | null | kyleinincubated/autonlp-cat33-624317932 | 4 | null | transformers | 19,118 | ---
tags: autonlp
language: zh
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kyleinincubated/autonlp-data-cat33
co2_eq_emissions: 1.2490471218570545
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 624317932
- CO2 Emissions (in grams): 1.2490471218570545
## Validation Metrics
- Loss: 0.5579860806465149
- Accuracy: 0.8717391304347826
- Macro F1: 0.6625543939916455
- Micro F1: 0.8717391304347827
- Weighted F1: 0.8593303742671491
- Macro Precision: 0.7214757380849891
- Micro Precision: 0.8717391304347826
- Weighted Precision: 0.8629042654788023
- Macro Recall: 0.6540187758140144
- Micro Recall: 0.8717391304347826
- Weighted Recall: 0.8717391304347826
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kyleinincubated/autonlp-cat33-624317932
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kyleinincubated/autonlp-cat33-624317932", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kyleinincubated/autonlp-cat33-624317932", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
amanm27/bert-base-uncased-wiki-sports | 30860ff23fe24617176912912c07556f8d6988f2 | 2022-03-10T06:53:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | amanm27 | null | amanm27/bert-base-uncased-wiki-sports | 4 | null | transformers | 19,119 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-wiki-sports
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-wiki-sports
This model is a fine-tuned version of [amanm27/bert-base-uncased-wiki](https://huggingface.co/amanm27/bert-base-uncased-wiki) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3589 | 1.0 | 912 | 2.0686 |
| 2.176 | 2.0 | 1824 | 2.0025 |
| 2.1022 | 3.0 | 2736 | 1.9774 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
amanm27/bert-base-uncased-wiki-sports-scouting | ea8fdd788f3c03b3d5383443a975dbd3212820bb | 2022-03-10T07:18:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | amanm27 | null | amanm27/bert-base-uncased-wiki-sports-scouting | 4 | null | transformers | 19,120 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-wiki-sports-scouting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-wiki-sports-scouting
This model is a fine-tuned version of [amanm27/bert-base-uncased-wiki-sports](https://huggingface.co/amanm27/bert-base-uncased-wiki-sports) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 378 | 1.6816 |
| 1.9594 | 2.0 | 756 | 1.5421 |
| 1.66 | 3.0 | 1134 | 1.5022 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
nabin19677/Cartman | 63536aea3648cfe61488080f665e9548b70b4d48 | 2022-03-10T10:11:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | nabin19677 | null | nabin19677/Cartman | 4 | null | transformers | 19,121 | Entry not found |
muneson/xls-r-300m-sv | f2531eb112291ae83304cfdddc0d586c40bc50cc | 2022-03-11T05:48:55.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | muneson | null | muneson/xls-r-300m-sv | 4 | null | transformers | 19,122 | Entry not found |
P0intMaN/PyAutoCode | 1e2062415e4c89c06aca076af8f72d8443e28249 | 2022-04-06T15:26:28.000Z | [
"pytorch",
"tf",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | P0intMaN | null | P0intMaN/PyAutoCode | 4 | null | transformers | 19,123 | ---
license: mit
---
# PyAutoCode: GPT-2 based Python auto-code.
PyAutoCode is a cut-down Python auto-suggestion model built on **GPT-2** *(motivation: GPyT)*. This baby model *(trained only up to 3 epochs)* is not **"fine-tuned"** yet; therefore, I highly recommend not using it in a production environment or incorporating PyAutoCode into any of your projects. It has been trained on **112GB** of Python data sourced from the best crowdsource platform ever -- **GitHub**.
*NOTE: Increased training and fine-tuning would be highly appreciated, and I firmly believe it would improve the ability of PyAutoCode significantly.*
## Some Model Features
- Built on *GPT-2*
- Tokenized with *ByteLevelBPETokenizer*
- Data Sourced from *GitHub (almost 5 consecutive days of latest Python repositories)*
- Makes use of *GPTLMHeadModel* and *DataCollatorForLanguageModelling* for training
- Newline characters are custom coded as `<N>`
## Get a Glimpse of the Model
You can make use of the **Inference API** of huggingface *(present on the right sidebar)* to load the model and check the result. Just enter any code snippet as input. Something like:
```sh
for i in range(
```
## Usage
You can use my model too! Here's a quick tour of how you can achieve this:
Install transformers
```sh
$ pip install transformers
```
Call the API and get it to work!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("P0intMaN/PyAutoCode")
model = AutoModelForCausalLM.from_pretrained("P0intMaN/PyAutoCode")
# input: single line or multi-line. Highly recommended to use doc-strings.
inp = """import pandas"""
format_inp = inp.replace('\n', "<N>")
tokenize_inp = tokenizer.encode(format_inp, return_tensors='pt')
result = model.generate(tokenize_inp)
decode_result = tokenizer.decode(result[0])
format_result = decode_result.replace('<N>', "\n")
# printing the result
print(format_result)
```
Upon successful execution, the above should probably produce *(your results may vary when this model is fine-tuned)*
```sh
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```
## Credits
##### *Developed as a part of a university project by [Pratheek U](https://www.github.com/P0intMaN) and [Sourav Singh](https://github.com/Sourav11902312lpu)* |
ScandinavianMrT/distilbert-SARC_withcontext | 74b2eefdc69240b48e4f0938e06d88b0be29a8ea | 2022-03-11T11:30:02.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ScandinavianMrT | null | ScandinavianMrT/distilbert-SARC_withcontext | 4 | null | transformers | 19,124 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-SARC_withcontext
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-SARC_withcontext
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4736
- Accuracy: 0.7732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4749 | 1.0 | 50539 | 0.4736 | 0.7732 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
anjandash/JavaBERT-mini | e99875b4568117a9ee2b2e915e91c0679e8113dd | 2022-03-11T12:10:07.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"java",
"dataset:anjandash/java-8m-methods-v1",
"transformers",
"license:mit"
] | text-classification | false | anjandash | null | anjandash/JavaBERT-mini | 4 | null | transformers | 19,125 | ---
language:
- java
license: mit
datasets:
- anjandash/java-8m-methods-v1
---
|
Azu/trocr-handwritten-math | fc8dc9829360d42b1d4bc2f2668c831a72c80379 | 2022-03-11T13:00:38.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | Azu | null | Azu/trocr-handwritten-math | 4 | null | transformers | 19,126 | This model generate the math expression LATEX sequence according to the handwritten math expression image.
in CROHME 2014 test dataset CER=0.507772718700326 |
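A minimal inference sketch (not from the original description; it assumes the checkpoint bundles a TrOCR-style processor for this vision encoder-decoder model, and the image path is a placeholder):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "Azu/trocr-handwritten-math"
processor = TrOCRProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

# Load a handwritten math expression image (placeholder path)
image = Image.open("handwritten_expression.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate the LaTeX token sequence and decode it to text
generated_ids = model.generate(pixel_values)
latex = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(latex)
```
|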
amir36/tutorial | e1f15d30e18934e9e372fd8e8f738562a6d6ea76 | 2022-03-12T02:33:07.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | amir36 | null | amir36/tutorial | 4 | null | transformers | 19,127 | Entry not found |
Splend1dchan/bert-large-uncased-slue-goldtrascription-e3-lr5e-5 | 7459fcee1fb3b1774c93d4f1b5684c99a37c173a | 2022-03-12T07:38:21.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Splend1dchan | null | Splend1dchan/bert-large-uncased-slue-goldtrascription-e3-lr5e-5 | 4 | null | transformers | 19,128 | Entry not found |
Mnauel/wav2vec2-base-finetuned-ks | 27b072f80eab4d58f648590d3652ff42221c8e18 | 2022-03-26T20:53:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | Mnauel | null | Mnauel/wav2vec2-base-finetuned-ks | 4 | null | transformers | 19,129 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5766
- Accuracy: 0.8308
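A minimal usage sketch (not from the original card; it assumes the checkpoint works with the `audio-classification` pipeline, and the audio path is a placeholder):
```python
from transformers import pipeline

# Keyword/audio classification with the fine-tuned wav2vec2-base checkpoint
classifier = pipeline("audio-classification", model="Mnauel/wav2vec2-base-finetuned-ks")
print(classifier("sample.wav"))  # 16 kHz mono clip; path is a placeholder
```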
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.7247 | 0.7462 |
| No log | 2.0 | 14 | 0.6844 | 0.7615 |
| 0.4279 | 3.0 | 21 | 0.7254 | 0.7462 |
| 0.4279 | 4.0 | 28 | 0.5891 | 0.8 |
| 0.4279 | 5.0 | 35 | 0.6991 | 0.7462 |
| 0.4478 | 6.0 | 42 | 0.6579 | 0.7615 |
| 0.4478 | 7.0 | 49 | 0.6164 | 0.8 |
| 0.4478 | 8.0 | 56 | 0.6191 | 0.8077 |
| 0.4194 | 9.0 | 63 | 0.5766 | 0.8308 |
| 0.4194 | 10.0 | 70 | 0.5704 | 0.8154 |
| 0.4194 | 11.0 | 77 | 0.6518 | 0.8 |
| 0.3833 | 12.0 | 84 | 0.6190 | 0.8077 |
| 0.3833 | 13.0 | 91 | 0.5693 | 0.8231 |
| 0.3833 | 14.0 | 98 | 0.5628 | 0.8231 |
| 0.3607 | 15.0 | 105 | 0.5741 | 0.8154 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.10.3
|
khavitidala/xlmroberta-large-fine-tuned-indo-hoax-classification | 029ed4ea6dddfbdec6230aa617597c4b982485af | 2022-03-13T02:01:19.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"multilingual",
"arxiv:1911.02116",
"transformers",
"exbert",
"license:mit"
] | text-classification | false | khavitidala | null | khavitidala/xlmroberta-large-fine-tuned-indo-hoax-classification | 4 | null | transformers | 19,130 | ---
tags:
- exbert
language: multilingual
inference: true
license: mit
---
# Fine-tuned version of XLM-RoBERTa (large-sized model)
Fine-tuned by Ryan Abdurohman.
# XLM-RoBERTa (large-sized model)
XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='xlm-roberta-large')
>>> unmasker("Hello I'm a <mask> model.")
[{'score': 0.10563907772302628,
'sequence': "Hello I'm a fashion model.",
'token': 54543,
'token_str': 'fashion'},
{'score': 0.08015287667512894,
'sequence': "Hello I'm a new model.",
'token': 3525,
'token_str': 'new'},
{'score': 0.033413201570510864,
'sequence': "Hello I'm a model model.",
'token': 3299,
'token_str': 'model'},
{'score': 0.030217764899134636,
'sequence': "Hello I'm a French model.",
'token': 92265,
'token_str': 'French'},
{'score': 0.026436051353812218,
'sequence': "Hello I'm a sexy model.",
'token': 17473,
'token_str': 'sexy'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
author = {Alexis Conneau and
Kartikay Khandelwal and
Naman Goyal and
Vishrav Chaudhary and
Guillaume Wenzek and
Francisco Guzm{\'{a}}n and
Edouard Grave and
Myle Ott and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {Unsupervised Cross-lingual Representation Learning at Scale},
journal = {CoRR},
volume = {abs/1911.02116},
year = {2019},
url = {http://arxiv.org/abs/1911.02116},
eprinttype = {arXiv},
eprint = {1911.02116},
timestamp = {Mon, 11 Nov 2019 18:38:09 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=xlm-roberta-base">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
anton-l/xtreme_s_xlsr_minds14_group_length | 887f84122af888d0c53e2989fb77b7923102b6eb | 2022-03-12T16:02:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"transformers"
] | audio-classification | false | anton-l | null | anton-l/xtreme_s_xlsr_minds14_group_length | 4 | null | transformers | 19,131 | Entry not found |
clapika2010/soccer_finetuned | d162cdfc6753f93dbbb9e81c8d9e5d252b071056 | 2022-03-14T07:27:16.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | clapika2010 | null | clapika2010/soccer_finetuned | 4 | null | transformers | 19,132 | Entry not found |
MarioCarmona/gpt-j-pretrained | f4fed6067b242814c6c7394c960652d6c455d49e | 2022-03-13T11:08:27.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers",
"license:gpl-3.0"
] | text-generation | false | MarioCarmona | null | MarioCarmona/gpt-j-pretrained | 4 | null | transformers | 19,133 | ---
license: gpl-3.0
---
|
anwesham/indicbert_ar_ur | f7c1070645dbdab31170e5a09a3acdeb8622fe09 | 2022-03-13T10:50:12.000Z | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | anwesham | null | anwesham/indicbert_ar_ur | 4 | null | transformers | 19,134 | Entry not found |
MrAnderson/bert-base-2048-full-trivia-copied-embeddings | 671c973b063a3fd5331034c9152eb636ea722cf2 | 2022-03-13T19:57:05.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | MrAnderson | null | MrAnderson/bert-base-2048-full-trivia-copied-embeddings | 4 | null | transformers | 19,135 | Entry not found |
BAHIJA/bert-base-uncased-finetuned-sst2 | e7e0dafa4fac8bf6f72a04f61b769531b23a5385 | 2022-03-14T05:48:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | BAHIJA | null | BAHIJA/bert-base-uncased-finetuned-sst2 | 4 | null | transformers | 19,136 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9346330275229358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2745
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1778 | 1.0 | 4210 | 0.3553 | 0.9060 |
| 0.1257 | 2.0 | 8420 | 0.2745 | 0.9346 |
| 0.0779 | 3.0 | 12630 | 0.3272 | 0.9300 |
| 0.0655 | 4.0 | 16840 | 0.3412 | 0.9323 |
| 0.0338 | 5.0 | 21050 | 0.3994 | 0.9300 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
holtin/distilbert-base-uncased-holtin-finetuned-full-squad | c77ef9e64f8078a8941a7ed41f2117044dd3171f | 2022-04-05T15:38:26.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | holtin | null | holtin/distilbert-base-uncased-holtin-finetuned-full-squad | 4 | null | transformers | 19,137 | Entry not found |
GPL/fiqa-distilbert-tas-b-gpl-self_miner | 80546f30a73118b1b9cdfabf27b2fe444198eeaf | 2022-03-14T14:17:13.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | GPL | null | GPL/fiqa-distilbert-tas-b-gpl-self_miner | 4 | null | sentence-transformers | 19,138 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
alynneoya/bert-base-cased-pt-lenerbr | 676d28cb6476f7ab17f60c84246e03327e6702cf | 2022-03-15T01:22:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | alynneoya | null | alynneoya/bert-base-cased-pt-lenerbr | 4 | null | transformers | 19,139 | Entry not found |
joniponi/multilabel_inpatient_comments | b1e7b98469e1604d1b0693eb40a1e005649b386a | 2022-03-24T15:58:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | joniponi | null | joniponi/multilabel_inpatient_comments | 4 | null | transformers | 19,140 | Entry not found |
sanchit-gandhi/wav2vec2-2-rnd-2-layer-no-adapter | 2c6850f50bc6e1243cf9db0f26ca7f416ee49db2 | 2022-03-17T02:23:57.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-rnd-2-layer-no-adapter | 4 | null | transformers | 19,141 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8365
- Wer: 0.2812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.8017 | 1.68 | 1500 | 5.7161 | 1.3220 |
| 4.5907 | 3.36 | 3000 | 4.7936 | 0.9799 |
| 3.151 | 5.04 | 4500 | 4.1610 | 0.7752 |
| 1.5166 | 6.73 | 6000 | 3.5939 | 0.5343 |
| 2.4523 | 8.41 | 7500 | 4.0013 | 0.6954 |
| 1.423 | 10.09 | 9000 | 2.6917 | 0.4476 |
| 0.7882 | 11.77 | 10500 | 2.4493 | 0.3967 |
| 1.1643 | 13.45 | 12000 | 2.0629 | 0.3234 |
| 0.5352 | 15.13 | 13500 | 2.0625 | 0.3363 |
| 0.407 | 16.82 | 15000 | 1.8378 | 0.2812 |
| 0.1162 | 18.5 | 16500 | 1.8365 | 0.2812 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kSaluja/bert-finetuned-ner | 1ae8e15c5dd082c3fd7b265a2f5ebd1928e11211 | 2022-03-15T23:18:41.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | kSaluja | null | kSaluja/bert-finetuned-ner | 4 | null | transformers | 19,142 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1555
- Precision: 0.9681
- Recall: 0.9670
- F1: 0.9675
- Accuracy: 0.9687
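The card does not document the label set of the training data; the sketch below only shows how such a token-classification checkpoint is typically loaded, with the example sentence and aggregation strategy as assumptions.

```python
from transformers import pipeline

# Minimal sketch: the entity labels are whatever is stored in the checkpoint's config.
ner = pipeline("token-classification", model="kSaluja/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face was founded in New York City."))
```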
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 253 | 0.1972 | 0.9467 | 0.9408 | 0.9437 | 0.9511 |
| 0.3572 | 2.0 | 506 | 0.1626 | 0.9677 | 0.9614 | 0.9645 | 0.9661 |
| 0.3572 | 3.0 | 759 | 0.1555 | 0.9681 | 0.9670 | 0.9675 | 0.9687 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dmahata/dlkp_test | f89bc10dc7ab6471c192b14224c9c220ee168aaa | 2022-03-16T02:49:18.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"license:mit",
"autotrain_compatible"
] | token-classification | false | dmahata | null | dmahata/dlkp_test | 4 | null | transformers | 19,143 | ---
license: mit
---
|
aws-ai/vascl-roberta-large | 3b9e9bf934c8dc7b42352121ec04b82bfbbfb3e0 | 2022-03-16T04:40:14.000Z | [
"pytorch",
"roberta",
"transformers",
"license:apache-2.0"
] | null | false | aws-ai | null | aws-ai/vascl-roberta-large | 4 | null | transformers | 19,144 | ---
license: apache-2.0
---
|
Neulvo/marian-finetuned-kde4-en-to-fr | 7bd814eebbb791f95748f174958451f895eb6dfc | 2022-03-16T15:04:48.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | Neulvo | null | Neulvo/marian-finetuned-kde4-en-to-fr | 4 | null | transformers | 19,145 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.893830905210194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8564
- Bleu: 52.8938
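A minimal translation sketch (not part of the original card); the example sentence is illustrative only.

```python
from transformers import pipeline

# Minimal sketch: English-to-French translation with the fine-tuned Marian checkpoint.
translator = pipeline("translation_en_to_fr", model="Neulvo/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```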
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ai4bharat/MultiIndicWikiBioUnified | 0db0f137e28079d43a2ad0ede2095d033b70a61c | 2022-03-29T09:25:58.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"as",
"bn",
"hi",
"kn",
"ml",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicWikiBio",
"arxiv:2203.05437",
"transformers",
"wikibio",
"multilingual",
"nlp",
"indicnlp",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/MultiIndicWikiBioUnified | 4 | null | transformers | 19,146 | ---
tags:
- wikibio
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicWikiBio
language:
- as
- bn
- hi
- kn
- ml
- or
- pa
- ta
- te
licenses:
- cc-by-nc-4.0
widget:
- <TAG> name </TAG> नवतेज भारती <TAG> image </TAG> NavtejBharati . jpg <TAG> birth name </TAG> नवतेज <TAG> birth date </TAG> 1938 <TAG> birth place </TAG> रोडे , भारतीय पंजाब , भारत । पंजाब <TAG> occupation </TAG> लेखक , कवि <TAG> nationality </TAG> कैनेडा । कैनेडियन <TAG> ethnicity </TAG> पंजाबी लोक । पंजाबी </s> <2hi>
---
# MultiIndicWikiBioUnified
MultiIndicWikiBioUnified is a multilingual, sequence-to-sequence pre-trained model: an [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint fine-tuned on the 9 languages of the [IndicWikiBio](https://huggingface.co/datasets/ai4bharat/IndicWikiBio) dataset. For fine-tuning details,
see the [paper](https://arxiv.org/abs/2203.05437). You can use MultiIndicWikiBio to build biography generation applications for Indian languages by fine-tuning the model with supervised training data. Some salient features of MultiIndicWikiBio are:
<ul>
<li >Supported languages: Assamese, Bengali, Hindi, Oriya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for fine-tuning and decoding. </li>
<li> Fine-tuned on an Indic language corpora (34,653 examples). </li>
<li> All languages have been represented in Devanagari script to encourage transfer learning among the related languages. </li>
</ul>
You can read more about MultiIndicWikiBioUnified in this <a href="https://arxiv.org/abs/2203.05437">paper</a>.
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicWikiBioUnified", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicWikiBioUnified")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2hi>', '<2kn>', '<2ml>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("<TAG> name </TAG> भीखा लाल <TAG> office </TAG> विधायक - 318 - हसनगंज विधान सभा निर्वाचन क्षेत्र , उत्तर प्रदेश <TAG> term </TAG> 1957 से 1962 <TAG> nationality </TAG> भारतीय</s><2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
out = tokenizer("<2hi> भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे। </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # भीखा लाल ,भारत के उत्तर प्रदेश की दूसरी विधानसभा सभा में विधायक रहे।
```

# Disclaimer

Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the [Indic NLP Library](https://github.com/AI4Bharat/indic-bart/blob/main/indic_scriptmap.py).
# Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.
## Benchmarks
Scores on the `IndicWikiBio` test sets are as follows:
Language | RougeL
---------|----------------------------
as | 56.28
bn | 57.42
hi | 67.48
kn | 40.01
ml | 38.84
or | 67.13
pa | 52.88
ta | 51.82
te | 51.43
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
# License
The model is available under the MIT License. |
KBLab/megatron-bert-large-swedish-cased-110k | 1599fbd13fd64c43b92efac2c03b1e5a0ae8cdb7 | 2022-05-03T08:57:56.000Z | [
"pytorch",
"megatron-bert",
"sv",
"transformers"
] | null | false | KBLab | null | KBLab/megatron-bert-large-swedish-cased-110k | 4 | null | transformers | 19,147 | ---
language:
- sv
---
# Megatron-BERT-large Swedish 110k
This BERT model was trained using the Megatron-LM library.
The size of the model is a regular BERT-large with 340M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 110k steps with a batch size of 8k; the full training schedule is 500k steps, so this version is an intermediate checkpoint.
The hyperparameters for training followed the setting for RoBERTa.
The model has three sister models trained on the same dataset:
- [🤗 BERT Swedish](https://huggingface.co/KBLab/bert-base-swedish-cased-new)
- [Megatron-BERT-base-600k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k)
- [Megatron-BERT-base-125k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k)
and a later checkpoint
- [Megatron-BERT-large-165k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-165k)
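A minimal masked-language-modelling sketch is shown below; it assumes the checkpoint ships a tokenizer and an MLM head usable through the `fill-mask` pipeline, and the Swedish example sentence is illustrative.

```python
from transformers import pipeline

# Minimal sketch: fill-mask with the Megatron-BERT checkpoint (assumes an MLM head is included).
unmasker = pipeline("fill-mask", model="KBLab/megatron-bert-large-swedish-cased-110k")
print(unmasker("Huvudstaden i Sverige är [MASK]."))
```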
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si). |
internetoftim/BERT-Finetuning-Demo | 56c37e537f0b347ed215b9bf5492953219c72122 | 2022-04-02T17:32:16.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | internetoftim | null | internetoftim/BERT-Finetuning-Demo | 4 | null | transformers | 19,148 | Entry not found |
moshew/paraphrase-mpnet-base-v2_SetFit_emotions | a9816630452e7bbb461c36ef0fdb2861c8091109 | 2022-03-18T07:16:29.000Z | [
"pytorch",
"mpnet",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | moshew | null | moshew/paraphrase-mpnet-base-v2_SetFit_emotions | 4 | null | sentence-transformers | 19,149 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# moshew/paraphrase-mpnet-base-v2_SetFit_emotions
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('moshew/paraphrase-mpnet-base-v2_SetFit_emotions')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_emotions')
model = AutoModel.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_emotions')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=moshew/paraphrase-mpnet-base-v2_SetFit_emotions)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2500 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
cammy/PRIMERA-100-MDS-own1 | b04d9d62794f7db54511c7ef5e6a48f6b3e04338 | 2022-03-18T09:52:35.000Z | [
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cammy | null | cammy/PRIMERA-100-MDS-own1 | 4 | null | transformers | 19,150 | Entry not found |
nherve/flaubert-oral-asr | b0d93989ade9bd0f4fbfda7f68a37c804a33ab05 | 2022-04-04T10:26:23.000Z | [
"pytorch",
"flaubert",
"fr",
"transformers",
"bert",
"language-model",
"french",
"flaubert-base",
"uncased",
"asr",
"speech",
"oral",
"natural language understanding",
"NLU",
"spoken language understanding",
"SLU",
"understanding",
"license:mit"
] | null | false | nherve | null | nherve/flaubert-oral-asr | 4 | null | transformers | 19,151 | ---
language: fr
license: mit
tags:
- bert
- language-model
- flaubert
- french
- flaubert-base
- uncased
- asr
- speech
- oral
- natural language understanding
- NLU
- spoken language understanding
- SLU
- understanding
---
# FlauBERT-Oral models: Using ASR-Generated Text for Spoken Language Modeling
**FlauBERT-Oral** are French BERT models trained on a very large amount of automatically transcribed speech from 350,000 hours of diverse French TV shows. They were trained with the [**FlauBERT software**](https://github.com/getalp/Flaubert) using the same parameters as the [flaubert-base-uncased](https://huggingface.co/flaubert/flaubert_base_uncased) model (12 layers, 12 attention heads, 768 dims, 137M parameters, uncased).
## Available FlauBERT-Oral models
- `flaubert-oral-asr` : trained from scratch on ASR data, keeping the BPE tokenizer and vocabulary of flaubert-base-uncased
- `flaubert-oral-asr_nb` : trained from scratch on ASR data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-mixed` : trained from scratch on a mixed corpus of ASR and text data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-ft` : fine-tuning of flaubert-base-uncased for a few epochs on ASR data
## Usage for sequence classification
```python
from transformers import FlaubertTokenizer, FlaubertForSequenceClassification

flaubert_tokenizer = FlaubertTokenizer.from_pretrained("nherve/flaubert-oral-asr")
flaubert_classif = FlaubertForSequenceClassification.from_pretrained("nherve/flaubert-oral-asr", num_labels=14)
flaubert_classif.sequence_summary.summary_type = 'mean'
# Then, train your model
```
## References
If you use FlauBERT-Oral models for your scientific publication, or if you find the resources in this repository useful, please cite the following papers:
```
@InProceedings{herve2022flaubertoral,
author = {Herv\'{e}, Nicolas and Pelloin, Valentin and Favre, Benoit and Dary, Franck and Laurent, Antoine and Meignier, Sylvain and Besacier, Laurent},
title = {Using ASR-Generated Text for Spoken Language Modeling},
booktitle = {Proceedings of "Challenges & Perspectives in Creating Large Language Models" ACL 2022 Workshop},
month = {May},
year = {2022}
}
```
|
nherve/flaubert-oral-mixed | 1184f1d9c9deec917f002fe5e1d2d77c421ee7e8 | 2022-04-04T10:26:49.000Z | [
"pytorch",
"flaubert",
"fr",
"transformers",
"bert",
"language-model",
"french",
"flaubert-base",
"uncased",
"asr",
"speech",
"oral",
"natural language understanding",
"NLU",
"spoken language understanding",
"SLU",
"understanding",
"license:mit"
] | null | false | nherve | null | nherve/flaubert-oral-mixed | 4 | null | transformers | 19,152 | ---
language: fr
license: mit
tags:
- bert
- language-model
- flaubert
- french
- flaubert-base
- uncased
- asr
- speech
- oral
- natural language understanding
- NLU
- spoken language understanding
- SLU
- understanding
---
# FlauBERT-Oral models: Using ASR-Generated Text for Spoken Language Modeling
**FlauBERT-Oral** are French BERT models trained on a very large amount of automatically transcribed speech from 350,000 hours of diverse French TV shows. They were trained with the [**FlauBERT software**](https://github.com/getalp/Flaubert) using the same parameters as the [flaubert-base-uncased](https://huggingface.co/flaubert/flaubert_base_uncased) model (12 layers, 12 attention heads, 768 dims, 137M parameters, uncased).
## Available FlauBERT-Oral models
- `flaubert-oral-asr` : trained from scratch on ASR data, keeping the BPE tokenizer and vocabulary of flaubert-base-uncased
- `flaubert-oral-asr_nb` : trained from scratch on ASR data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-mixed` : trained from scratch on a mixed corpus of ASR and text data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-ft` : fine-tuning of flaubert-base-uncased for a few epochs on ASR data
## Usage for sequence classification
```python
from transformers import FlaubertTokenizer, FlaubertForSequenceClassification

flaubert_tokenizer = FlaubertTokenizer.from_pretrained("nherve/flaubert-oral-mixed")
flaubert_classif = FlaubertForSequenceClassification.from_pretrained("nherve/flaubert-oral-mixed", num_labels=14)
flaubert_classif.sequence_summary.summary_type = 'mean'
# Then, train your model
```
## References
If you use FlauBERT-Oral models for your scientific publication, or if you find the resources in this repository useful, please cite the following papers:
```
@InProceedings{herve2022flaubertoral,
author = {Herv\'{e}, Nicolas and Pelloin, Valentin and Favre, Benoit and Dary, Franck and Laurent, Antoine and Meignier, Sylvain and Besacier, Laurent},
title = {Using ASR-Generated Text for Spoken Language Modeling},
booktitle = {Proceedings of "Challenges & Perspectives in Creating Large Language Models" ACL 2022 Workshop},
month = {May},
year = {2022}
}
```
|
acsxz/distilbert-base-uncased-finetuned-emotion | 36ef88d60422ffc274216d7dc6f01dd023d56d4c | 2022-03-22T17:00:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | acsxz | null | acsxz/distilbert-base-uncased-finetuned-emotion | 4 | null | transformers | 19,153 | Entry not found |
eliasws/openApiT5-distilled-description-v1 | 2e107db74736be807625b080086fc55624fe5565 | 2022-03-18T18:47:47.000Z | [
"pytorch",
"t5",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | eliasws | null | eliasws/openApiT5-distilled-description-v1 | 4 | null | sentence-transformers | 19,154 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3681 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3681,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
msamogh/autonlp-cai-out-of-scope-649919118 | 1b5795926f27f71ad8bf1e13faed05ffafaa067b | 2022-03-19T21:40:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:msamogh/autonlp-data-cai-out-of-scope",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | msamogh | null | msamogh/autonlp-cai-out-of-scope-649919118 | 4 | null | transformers | 19,155 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 0.3996916853309825
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919118
- CO2 Emissions (in grams): 0.3996916853309825
## Validation Metrics
- Loss: 0.48289698362350464
- Accuracy: 0.8064516129032258
- Precision: 0.828125
- Recall: 0.8833333333333333
- AUC: 0.8353535353535354
- F1: 0.8548387096774193
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919118
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919118", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919118", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Gunulhona/tb_pretrained | e7d059e30afcaf9c119aedc1a205eaf02f1501e9 | 2022-06-15T02:51:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | Gunulhona | null | Gunulhona/tb_pretrained | 4 | null | transformers | 19,156 | ---
license: cc-by-nc-sa-4.0
---
|
mnavas/roberta-finetuned-CPV_Spanish | a328259a982343132765efe082919491361a474d | 2022-05-17T09:06:28.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | mnavas | null | mnavas/roberta-finetuned-CPV_Spanish | 4 | null | transformers | 19,157 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: roberta-finetuned-CPV_Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-CPV_Spanish
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on a dataset derived from Spanish Public Procurement documents from 2019. The whole fine-tuning process is available in the following [Kaggle notebook](https://www.kaggle.com/code/marianavasloro/fine-tuned-roberta-for-spanish-cpv-codes).
It achieves the following results on the evaluation set:
- Loss: 0.0465
- F1: 0.7918
- Roc Auc: 0.8860
- Accuracy: 0.7376
- Coverage Error: 10.2744
- Label Ranking Average Precision Score: 0.7973
## Intended uses & limitations
This model only predicts the first two digits (the division) of the CPV codes. The list of CPV divisions is the following; a usage sketch is shown after the table:
| Division | English | Spanish |
|----------|---------|---------|
| 03 | Agricultural, farming, fishing, forestry and related products | Productos de la agricultura, ganadería, pesca, silvicultura y productos afines |
| 09 | Petroleum products, fuel, electricity and other sources of energy | Derivados del petróleo, combustibles, electricidad y otras fuentes de energía |
| 14 | Mining, basic metals and related products | Productos de la minería, de metales de base y productos afines |
| 15 | Food, beverages, tobacco and related products | Alimentos, bebidas, tabaco y productos afines |
| 16 | Agricultural machinery | Maquinaria agrícola |
| 18 | Clothing, footwear, luggage articles and accessories | Prendas de vestir, calzado, artículos de viaje y accesorios |
| 19 | Leather and textile fabrics, plastic and rubber materials | Piel y textiles, materiales de plástico y caucho |
| 22 | Printed matter and related products | Impresos y productos relacionados |
| 24 | Chemical products | Productos químicos |
| 30 | Office and computing machinery, equipment and supplies except furniture and software packages | Máquinas, equipo y artículos de oficina y de informática, excepto mobiliario y paquetes de software |
| 31 | Electrical machinery, apparatus, equipment and consumables; lighting | Máquinas, aparatos, equipo y productos consumibles eléctricos; iluminación |
| 32 | Radio, television, communication, telecommunication and related equipment | Equipos de radio, televisión, comunicaciones y telecomunicaciones y equipos conexos |
| 33 | Medical equipments, pharmaceuticals and personal care products | Equipamiento y artículos médicos, farmacéuticos y de higiene personal |
| 34 | Transport equipment and auxiliary products to transportation | Equipos de transporte y productos auxiliares |
| 35 | Security, fire-fighting, police and defence equipment | Equipo de seguridad, extinción de incendios, policía y defensa |
| 37 | Musical instruments, sport goods, games, toys, handicraft, art materials and accessories | Instrumentos musicales, artículos deportivos, juegos, juguetes, artículos de artesanía, materiales artísticos y accesorios |
| 38 | Laboratory, optical and precision equipments (excl. glasses) | Equipo de laboratorio, óptico y de precisión (excepto gafas) |
| 39 | Furniture (incl. office furniture), furnishings, domestic appliances (excl. lighting) and cleaning products | Mobiliario (incluido el de oficina), complementos de mobiliario, aparatos electrodomésticos (excluida la iluminación) y productos de limpieza |
| 41 | Collected and purified water | Agua recogida y depurada |
| 42 | Industrial machinery | Maquinaria industrial |
| 43 | Machinery for mining, quarrying, construction equipment | Maquinaria para la minería y la explotación de canteras y equipo de construcción |
| 44 | Construction structures and materials; auxiliary products to construction (except electric apparatus) | Estructuras y materiales de construcción; productos auxiliares para la construcción (excepto aparatos eléctricos) |
| 45 | Construction work | Trabajos de construcción |
| 48 | Software package and information systems | Paquetes de software y sistemas de información |
| 50 | Repair and maintenance services | Servicios de reparación y mantenimiento |
| 51 | Installation services (except software) | Servicios de instalación (excepto software) |
| 55 | Hotel, restaurant and retail trade services | Servicios comerciales al por menor de hostelería y restauración |
| 60 | Transport services (excl. Waste transport) | Servicios de transporte (excluido el transporte de residuos) |
| 63 | Supporting and auxiliary transport services; travel agencies services | Servicios de transporte complementarios y auxiliares; servicios de agencias de viajes |
| 64 | Postal and telecommunications services | Servicios de correos y telecomunicaciones |
| 65 | Public utilities | Servicios públicos |
| 66 | Financial and insurance services | Servicios financieros y de seguros |
| 70 | Real estate services | Servicios inmobiliarios |
| 71 | Architectural, construction, engineering and inspection services | Servicios de arquitectura, construcción, ingeniería e inspección |
| 72 | IT services: consulting, software development, Internet and support | Servicios TI: consultoría, desarrollo de software, Internet y apoyo |
| 73 | Research and development services and related consultancy services | Servicios de investigación y desarrollo y servicios de consultoría conexos |
| 75 | Administration, defence and social security services | Servicios de administración pública, defensa y servicios de seguridad social |
| 76 | Services related to the oil and gas industry | Servicios relacionados con la industria del gas y del petróleo |
| 77 | Agricultural, forestry, horticultural, aquacultural and apicultural services | Servicios agrícolas, forestales, hortícolas, acuícolas y apícolas |
| 79 | Business services: law, marketing, consulting, recruitment, printing and security | Servicios a empresas: legislación, mercadotecnia, asesoría, selección de personal, imprenta y seguridad |
| 80 | Education and training services | Servicios de enseñanza y formación |
| 85 | Health and social work services | Servicios de salud y asistencia social |
| 90 | Sewage, refuse, cleaning and environmental services | Servicios de alcantarillado, basura, limpieza y medio ambiente |
| 92 | Recreational, cultural and sporting services | Servicios de esparcimiento, culturales y deportivos |
| 98 | Other community, social and personal services | Otros servicios comunitarios, sociales o personales |
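A minimal multi-label inference sketch follows; the 0.5 decision threshold, the example tender description and the reliance on `id2label` from the exported config are assumptions, not part of the original experiments.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mnavas/roberta-finetuned-CPV_Spanish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Servicios de mantenimiento de ascensores en edificios municipales"  # illustrative tender description
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: sigmoid per division, keep everything above an (assumed) 0.5 threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```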
## Training and evaluation data
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Coverage Error | Label Ranking Average Precision Score |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|:--------------:|:-------------------------------------:|
| 0.0354 | 1.0 | 9054 | 0.0362 | 0.7560 | 0.8375 | 0.6963 | 14.0835 | 0.7357 |
| 0.0311 | 2.0 | 18108 | 0.0331 | 0.7756 | 0.8535 | 0.7207 | 12.7880 | 0.7633 |
| 0.0235 | 3.0 | 27162 | 0.0333 | 0.7823 | 0.8705 | 0.7283 | 11.5179 | 0.7811 |
| 0.0157 | 4.0 | 36216 | 0.0348 | 0.7821 | 0.8699 | 0.7274 | 11.5836 | 0.7798 |
| 0.011 | 5.0 | 45270 | 0.0377 | 0.7799 | 0.8787 | 0.7239 | 10.9173 | 0.7841 |
| 0.008 | 6.0 | 54324 | 0.0395 | 0.7854 | 0.8787 | 0.7309 | 10.9042 | 0.7879 |
| 0.0042 | 7.0 | 63378 | 0.0421 | 0.7872 | 0.8823 | 0.7300 | 10.5687 | 0.7903 |
| 0.0025 | 8.0 | 72432 | 0.0439 | 0.7884 | 0.8867 | 0.7305 | 10.2220 | 0.7934 |
| 0.0015 | 9.0 | 81486 | 0.0456 | 0.7889 | 0.8872 | 0.7316 | 10.1781 | 0.7945 |
| 0.001 | 10.0 | 90540 | 0.0465 | 0.7918 | 0.8860 | 0.7376 | 10.2744 | 0.7973 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
### Acknowledgments
This work has been supported by NextProcurement European Action (grant agreement INEA/CEF/ICT/A2020/2373713-Action 2020-ES-IA-0255) and the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with Universidad Politécnica de Madrid in the line Support for R&D projects for Beatriz Galindo researchers, in the context of the V PRICIT (Regional Programme of Research and Technological Innovation). We also acknowledge the participation of Jennifer Tabita for the preparation of the initial set of notebooks, and the AI4Gov master students from the first cohort for their validation of the approach. Source of the data: Ministerio de Hacienda. |
beston91/gpt2-xl_ft_logits_1k_2 | 8227d2953badd2a71ac843ae28efd8818d9c5a88 | 2022-03-21T11:27:12.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | beston91 | null | beston91/gpt2-xl_ft_logits_1k_2 | 4 | null | transformers | 19,158 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_1k_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_1k_2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4793
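A minimal generation sketch (not part of the original card); the prompt and sampling settings are assumptions.

```python
from transformers import pipeline

# Minimal sketch: open-ended generation with the fine-tuned GPT-2 XL checkpoint.
generator = pipeline("text-generation", model="beston91/gpt2-xl_ft_logits_1k_2")
print(generator("Once upon a time", max_length=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```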
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.91 | 5 | 6.0743 |
| No log | 1.91 | 10 | 6.1649 |
| No log | 2.91 | 15 | 6.3068 |
| No log | 3.91 | 20 | 6.4793 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59307861328125 |
feiyangDu/bert-base-cased-0210-celential | 3eebf08630a69b3fb04f6abf0d0dbf9e4dc1a689 | 2022-03-21T07:54:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | feiyangDu | null | feiyangDu/bert-base-cased-0210-celential | 4 | null | transformers | 19,159 | |
doctorlan/autonlp-ctrip-653519223 | 43a0bfa03cace789842bf62287e23ced5936a1c9 | 2022-03-21T09:01:53.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:doctorlan/autonlp-data-ctrip",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | doctorlan | null | doctorlan/autonlp-ctrip-653519223 | 4 | null | transformers | 19,160 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- doctorlan/autonlp-data-ctrip
co2_eq_emissions: 24.879856894708393
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 653519223
- CO2 Emissions (in grams): 24.879856894708393
## Validation Metrics
- Loss: 0.14671853184700012
- Accuracy: 0.9676666666666667
- Precision: 0.9794159885112494
- Recall: 0.9742857142857143
- AUC: 0.9901396825396825
- F1: 0.9768441155407017
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/doctorlan/autonlp-ctrip-653519223
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("doctorlan/autonlp-ctrip-653519223", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("doctorlan/autonlp-ctrip-653519223", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
Yaxin/electra-small-discriminator-yelp-mlm | 36e38f5a61fd68b81f019a70af056ee37cc0d071 | 2022-03-21T09:21:02.000Z | [
"pytorch",
"electra",
"fill-mask",
"dataset:yelp_review_full",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Yaxin | null | Yaxin/electra-small-discriminator-yelp-mlm | 4 | null | transformers | 19,161 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: test-electra-small-yelp
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: yelp_review_full yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.5677007577622891
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-electra-small-yelp
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the yelp_review_full yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2601
- Accuracy: 0.5677
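A minimal fill-mask sketch (not part of the original card); the example review sentence is illustrative.

```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the Yelp-adapted ELECTRA checkpoint.
unmasker = pipeline("fill-mask", model="Yaxin/electra-small-discriminator-yelp-mlm")
print(unmasker("The food was absolutely [MASK]."))
```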
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
gagan3012/TrOCR-Ar-Small | 0fcc2872a3a7afe992818cce2ef6e15ebff27522 | 2022-03-21T20:49:02.000Z | [
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"ar",
"transformers",
"generated_from_trainer",
"trocr",
"model-index"
] | null | false | gagan3012 | null | gagan3012/TrOCR-Ar-Small | 4 | null | transformers | 19,162 | ---
tags:
- generated_from_trainer
- trocr
language: ar
model-index:
- name: TrOCR-Ar-Small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TrOCR-Ar-Small
This model is a fine-tuned version of [microsoft/trocr-small-stage1](https://huggingface.co/microsoft/trocr-small-stage1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2771
- Cer: 0.8211
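A minimal inference sketch (not part of the original card); it assumes the repository includes the TrOCR processor files, and the image path is a placeholder.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Minimal sketch: recognise a single line of Arabic text from a cropped image.
processor = TrOCRProcessor.from_pretrained("gagan3012/TrOCR-Ar-Small")
model = VisionEncoderDecoderModel.from_pretrained("gagan3012/TrOCR-Ar-Small")

image = Image.open("line_image.png").convert("RGB")  # placeholder path to a text-line image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```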
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6363 | 0.14 | 1000 | 2.7594 | 0.9370 |
| 2.7508 | 0.29 | 2000 | 2.6589 | 0.8901 |
| 2.6519 | 0.43 | 3000 | 2.6059 | 0.8647 |
| 2.5936 | 0.57 | 4000 | 2.5360 | 0.7941 |
| 2.5069 | 0.72 | 5000 | 2.4701 | 0.8262 |
| 2.4606 | 0.86 | 6000 | 2.4427 | 0.7552 |
| 2.4046 | 1.0 | 7000 | 2.4262 | 0.7822 |
| 2.3628 | 1.15 | 8000 | 2.3880 | 0.8186 |
| 2.3458 | 1.29 | 9000 | 2.3589 | 0.8262 |
| 2.3062 | 1.43 | 10000 | 2.3704 | 0.8693 |
| 2.2884 | 1.58 | 11000 | 2.3065 | 0.8034 |
| 2.263 | 1.72 | 12000 | 2.3413 | 0.8545 |
| 2.2473 | 1.86 | 13000 | 2.3314 | 0.7996 |
| 2.2318 | 2.01 | 14000 | 2.3034 | 0.8254 |
| 2.2004 | 2.15 | 15000 | 2.3068 | 0.8461 |
| 2.1774 | 2.29 | 16000 | 2.2799 | 0.8207 |
| 2.1684 | 2.44 | 17000 | 2.2746 | 0.8249 |
| 2.1637 | 2.58 | 18000 | 2.2540 | 0.7797 |
| 2.1418 | 2.72 | 19000 | 2.2595 | 0.7937 |
| 2.1309 | 2.87 | 20000 | 2.2771 | 0.8211 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
azwierzc/vilt-b32-finetuned-vqa-pl | bfa882cd2e550ba01a8c7e59e0bd8c756435ccb2 | 2022-03-21T12:01:07.000Z | [
"pytorch",
"vilt",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | azwierzc | null | azwierzc/vilt-b32-finetuned-vqa-pl | 4 | null | transformers | 19,163 | Entry not found |
PSW/ut_del_two_per_each_ver4 | 33744b3ad329038967dc4de79f181362150d3950 | 2022-03-21T15:26:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_two_per_each_ver4 | 4 | null | transformers | 19,164 | Entry not found |
Messerschmitt/is712 | b49330c1689fd31cba06b59c600670a7e7eab2c2 | 2022-03-21T15:50:48.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:afl-3.0"
] | text-classification | false | Messerschmitt | null | Messerschmitt/is712 | 4 | null | transformers | 19,165 | ---
license: afl-3.0
---
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_EN | 3cc38a4f05593e103587811d453f9d46f77ec398 | 2022-03-21T22:10:39.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_EN | 4 | null | transformers | 19,166 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_EN
This model is a fine-tuned version of [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2308
- Precision: 0.8366
- Recall: 0.8513
- F1: 0.8439
- Accuracy: 0.9681
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in Spanish and English. Entity tags have been normalized and replaced from the original three-letter code to a full name, e.g. B-Protein, I-Chemical. This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class, and both datasets (original, augmented) were concatenated. To improve the F1 score, transfer learning was completed in two steps: using [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN) as a base model, I fine-tuned once more on the original CRAFT dataset in English.
Biobert --> Augmented CRAFT --> CRAFT
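A minimal inference sketch for the model described above (the example sentence is illustrative):

```python
from transformers import pipeline

# Minimal sketch: biomedical NER with the entity tags listed in the model description.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_EN",
    aggregation_strategy="simple",
)
print(ner("The p53 protein is encoded by the TP53 gene in Homo sapiens."))
```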
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0129 | 1.0 | 1360 | 0.2119 | 0.8404 | 0.8364 | 0.8384 | 0.9666 |
| 0.0072 | 2.0 | 2720 | 0.2132 | 0.8173 | 0.8583 | 0.8373 | 0.9662 |
| 0.0042 | 3.0 | 4080 | 0.2180 | 0.8410 | 0.8515 | 0.8462 | 0.9686 |
| 0.0019 | 4.0 | 5440 | 0.2308 | 0.8366 | 0.8513 | 0.8439 | 0.9681 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
danyaljj/gpt-j-6B-step-318500 | f7c3f029e6222f5742f2883245f836aa1292d564 | 2022-03-22T23:10:51.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-318500 | 4 | null | transformers | 19,167 | Entry not found |
danyaljj/gpt-j-6B-step-328500 | cb96aeb8facf15e9b249034cd324a0d2b2415555 | 2022-03-22T23:09:11.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-328500 | 4 | null | transformers | 19,168 | Entry not found |
danyaljj/gpt-j-6B-step-382500 | 132e906af4b00475b0f8b13f165c8cb3a08b32d5 | 2022-03-22T23:11:56.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-382500 | 4 | null | transformers | 19,169 | Entry not found |
danyaljj/gpt-j-6B-step-384500 | 0e77c3aee79f2e9ea93044bd6bf9bf58c4117c84 | 2022-03-22T23:12:34.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt-j-6B-step-384500 | 4 | null | transformers | 19,170 | Entry not found |
PSW/ut_del_two_per_each_ver5 | 0e0c1fe30290770a775bfef93ba0f5ba0a9cb800 | 2022-03-22T05:54:08.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | PSW | null | PSW/ut_del_two_per_each_ver5 | 4 | null | transformers | 19,171 | Entry not found |
mukayese/mbart-large-turkish-sum | 84e561c757926a852674edaf0f3a127672af8b73 | 2022-03-22T14:32:33.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"dataset:mlsum",
"arxiv:2203.01215",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mukayese | null | mukayese/mbart-large-turkish-sum | 4 | 1 | transformers | 19,172 | ---
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mbart-large-turkish-sum
results:
- task:
name: Summarization
type: summarization
dataset:
name: mlsum tu
type: mlsum
args: tu
metrics:
- name: Rouge1
type: rouge
value: 46.7011
---
# [Mukayese: Turkish NLP Strikes Back](https://arxiv.org/abs/2203.01215)
## Summarization: mukayese/mbart-large-turkish-sum
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the mlsum/tu dataset.
It achieves the following results on the evaluation set:
- Rouge1: 46.7011
- Rouge2: 34.0087
- Rougel: 41.5475
- Rougelsum: 43.2108
Check [this](https://arxiv.org/abs/2203.01215) paper for more details on the model and the dataset.
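Below is a minimal inference sketch with the `transformers` summarization pipeline; the Turkish example text and the generation settings are illustrative assumptions.
```python
from transformers import pipeline

# Summarize a Turkish news snippet with this checkpoint.
summarizer = pipeline("summarization", model="mukayese/mbart-large-turkish-sum")

# Illustrative Turkish input text.
article = (
    "Türkiye'nin dört bir yanından gelen gönüllüler, hafta sonu düzenlenen "
    "etkinlikte kıyı temizliği yaparak tonlarca atık topladı."
)
print(summarizer(article, max_length=64, min_length=10, do_sample=False))
```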
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.2+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
### Citation
```
@misc{safaya-etal-2022-mukayese,
title={Mukayese: Turkish NLP Strikes Back},
author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
year={2022},
eprint={2203.01215},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
aaraki/wav2vec2-base-finetuned-ks | 6d817bf7a9faf669bce87c71cd27cf56d5376c3e | 2022-03-23T05:55:15.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:superb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | aaraki | null | aaraki/wav2vec2-base-finetuned-ks | 4 | null | transformers | 19,173 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9949
- Accuracy: 0.6958
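Below is a minimal inference sketch with the `transformers` audio-classification pipeline; the audio file path is a placeholder assumption (any 16 kHz mono recording of a spoken command should work).
```python
from transformers import pipeline

# Classify a short speech clip into the Speech Commands keyword classes.
classifier = pipeline("audio-classification", model="aaraki/wav2vec2-base-finetuned-ks")

# "keyword.wav" is a placeholder path; replace it with your own recording.
print(classifier("keyword.wav", top_k=3))
```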
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0231 | 1.0 | 399 | 0.9949 | 0.6958 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
FuriouslyAsleep/markingMultiClass | 5cb73a696e37ce06aa1ed2d15d3c8b840c281e9a | 2022-03-23T09:21:51.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:FuriouslyAsleep/autotrain-data-markingClassifier",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | FuriouslyAsleep | null | FuriouslyAsleep/markingMultiClass | 4 | null | transformers | 19,174 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- FuriouslyAsleep/autotrain-data-markingClassifier
co2_eq_emissions: 0.5712537632313806
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 661319476
- CO2 Emissions (in grams): 0.5712537632313806
## Validation Metrics
- Loss: 0.859619140625
- Accuracy: 0.8
- Macro F1: 0.6
- Micro F1: 0.8000000000000002
- Weighted F1: 0.72
- Macro Precision: 0.5555555555555555
- Micro Precision: 0.8
- Weighted Precision: 0.6666666666666666
- Macro Recall: 0.6666666666666666
- Micro Recall: 0.8
- Weighted Recall: 0.8
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/FuriouslyAsleep/autonlp-markingClassifier-661319476
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("FuriouslyAsleep/autonlp-markingClassifier-661319476", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("FuriouslyAsleep/autonlp-markingClassifier-661319476", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
nherve/flaubert-oral-ft | c383fafc0d6e70c1e1f5a9a2b8eef2f005bbfa27 | 2022-04-04T10:27:14.000Z | [
"pytorch",
"fr",
"transformers",
"bert",
"language-model",
"flaubert",
"french",
"flaubert-base",
"uncased",
"asr",
"speech",
"oral",
"natural language understanding",
"NLU",
"spoken language understanding",
"SLU",
"understanding",
"license:mit"
] | null | false | nherve | null | nherve/flaubert-oral-ft | 4 | null | transformers | 19,175 | ---
language: fr
license: mit
tags:
- bert
- language-model
- flaubert
- french
- flaubert-base
- uncased
- asr
- speech
- oral
- natural language understanding
- NLU
- spoken language understanding
- SLU
- understanding
---
# FlauBERT-Oral models: Using ASR-Generated Text for Spoken Language Modeling
**FlauBERT-Oral** are French BERT models trained on a very large amount of automatically transcribed speech from 350,000 hours of diverse French TV shows. They were trained with the [**FlauBERT software**](https://github.com/getalp/Flaubert) using the same parameters as the [flaubert-base-uncased](https://huggingface.co/flaubert/flaubert_base_uncased) model (12 layers, 12 attention heads, 768 dims, 137M parameters, uncased).
## Available FlauBERT-Oral models
- `flaubert-oral-asr` : trained from scratch on ASR data, keeping the BPE tokenizer and vocabulary of flaubert-base-uncased
- `flaubert-oral-asr_nb` : trained from scratch on ASR data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-mixed` : trained from scratch on a mixed corpus of ASR and text data, BPE tokenizer is also trained on the same corpus
- `flaubert-oral-ft` : fine-tuning of flaubert-base-uncased for a few epochs on ASR data
## Usage for sequence classification
```python
from transformers import FlaubertForSequenceClassification, FlaubertTokenizer

flaubert_tokenizer = FlaubertTokenizer.from_pretrained("nherve/flaubert-oral-asr")
flaubert_classif = FlaubertForSequenceClassification.from_pretrained("nherve/flaubert-oral-asr", num_labels=14)
flaubert_classif.sequence_summary.summary_type = 'mean'
# Then, train your model
```
## References
If you use FlauBERT-Oral models for your scientific publication, or if you find the resources in this repository useful, please cite the following paper:
```
@InProceedings{herve2022flaubertoral,
author = {Herv\'{e}, Nicolas and Pelloin, Valentin and Favre, Benoit and Dary, Franck and Laurent, Antoine and Meignier, Sylvain and Besacier, Laurent},
title = {Using ASR-Generated Text for Spoken Language Modeling},
booktitle = {Proceedings of "Challenges & Perspectives in Creating Large Language Models" ACL 2022 Workshop},
month = {May},
year = {2022}
}
```
|
lysandre/sharded-repo | 143809b2b1a9372372ed0f34657ec9d067c2716e | 2022-03-23T14:23:25.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | lysandre | null | lysandre/sharded-repo | 4 | null | transformers | 19,176 | Entry not found |
gpucce/ProSolAdv_full_train | 2bc19ea2962318837cc6d30ad68939cd46cc0217 | 2022-03-23T21:17:05.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | gpucce | null | gpucce/ProSolAdv_full_train | 4 | null | transformers | 19,177 | Entry not found |
clisi2000/distilbert-base-uncased-distilled-clinc | 6a0e3b0733aa82c4753fa91edc26dcd0da55e545 | 2022-03-24T03:50:04.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | clisi2000 | null | clisi2000/distilbert-base-uncased-distilled-clinc | 4 | null | transformers | 19,178 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
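Below is a minimal inference sketch with the `transformers` text-classification pipeline for intent detection; the example utterance is an illustrative assumption.
```python
from transformers import pipeline

# Predict the CLINC150 intent label for a user utterance.
classifier = pipeline(
    "text-classification",
    model="clisi2000/distilbert-base-uncased-distilled-clinc",
)

# Illustrative utterance; the label set comes from the clinc_oos dataset.
print(classifier("Can you transfer 50 dollars to my savings account?"))
```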
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.2+cpu
- Datasets 1.18.4
- Tokenizers 0.10.3
|
yy642/bert-base-uncased-finetuned-rte-max-length-256-epoch-5 | 044d4f4b605df2fe0595acaa8eaccdfc001a820e | 2022-03-24T05:01:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-rte-max-length-256-epoch-5 | 4 | null | transformers | 19,179 | Entry not found |
Helsinki-NLP/opus-mt-tc-base-uk-ro | 09a793f34ff065f0dc321db6c8de7b9467a2968d | 2022-06-01T13:10:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ro",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-uk-ro | 4 | null | transformers | 19,180 | ---
language:
- ro
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-uk-ro
results:
- task:
name: Translation ukr-ron
type: translation
args: ukr-ron
dataset:
name: flores101-devtest
type: flores_101
args: ukr ron devtest
metrics:
- name: BLEU
type: bleu
value: 27.7
---
# opus-mt-tc-base-uk-ro
Neural machine translation model for translating from Ukrainian (uk) to Romanian (ro).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-08
* source language(s): ukr
* target language(s): ron
* valid target language labels: >>ron<<
* model: transformer-align
* data: opusTCv20210807+pft ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pft_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ron/opusTCv20210807+pft_transformer-align_2022-03-08.zip)
* more information released models: [OPUS-MT ukr-ron README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ron/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>ron<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ron<< Стаття висловлює особисту думку автора.",
">>ron<< Качкодзьоби живуть на сході Австрії."
]
model_name = "pytorch-models/opus-mt-tc-base-uk-ro"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Articolul exprimă opinia personală a autorului.
# Kachkojiobi trăiesc în estul Austriei.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-uk-ro")
print(pipe(">>ron<< Стаття висловлює особисту думку автора."))
# expected output: Articolul exprimă opinia personală a autorului.
```
## Benchmarks
* test set translations: [opusTCv20210807+pft_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ron/opusTCv20210807+pft_transformer-align_2022-03-08.test.txt)
* test set scores: [opusTCv20210807+pft_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ron/opusTCv20210807+pft_transformer-align_2022-03-08.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ukr-ron | flores101-devtest | 0.55343 | 27.7 | 1012 | 26799 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: f084bad
* port time: Wed Mar 23 21:48:27 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zle-fi | 8be545c3213f82fe96e2b406f8f924875056c0d2 | 2022-06-01T13:10:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zle-fi | 4 | null | transformers | 19,181 | ---
language:
- fi
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zle-fi
results:
- task:
name: Translation rus-fin
type: translation
args: rus-fin
dataset:
name: flores101-devtest
type: flores_101
args: rus fin devtest
metrics:
- name: BLEU
type: bleu
value: 17.4
- task:
name: Translation ukr-fin
type: translation
args: ukr-fin
dataset:
name: flores101-devtest
type: flores_101
args: ukr fin devtest
metrics:
- name: BLEU
type: bleu
value: 18.0
- task:
name: Translation rus-fin
type: translation
args: rus-fin
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: rus-fin
metrics:
- name: BLEU
type: bleu
value: 42.2
---
# opus-mt-tc-big-zle-fi
Neural machine translation model for translating from East Slavic languages (zle) to Finnish (fi).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-07
* source language(s): rus ukr
* target language(s): fin
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.zip)
* more information released models: [OPUS-MT zle-fin README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-fin/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Мы уже проголосовали.",
"Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять."
]
model_name = "pytorch-models/opus-mt-tc-big-zle-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Olemme jo äänestäneet.
# Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-fi")
print(pipe("Мы уже проголосовали."))
# expected output: Olemme jo äänestäneet.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| rus-fin | tatoeba-test-v2021-08-07 | 0.66334 | 42.2 | 3643 | 19319 |
| rus-fin | flores101-devtest | 0.52577 | 17.4 | 1012 | 18781 |
| ukr-fin | flores101-devtest | 0.53440 | 18.0 | 1012 | 18781 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 42126b6
* port time: Thu Mar 24 09:28:52 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-de-zle | a7cb10ebdc267e47b4026c545713a6ca7925a429 | 2022-06-01T13:08:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"de",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-de-zle | 4 | null | transformers | 19,182 | ---
language:
- be
- de
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-de-zle
results:
- task:
name: Translation deu-rus
type: translation
args: deu-rus
dataset:
name: flores101-devtest
type: flores_101
args: deu rus devtest
metrics:
- name: BLEU
type: bleu
value: 26.3
- task:
name: Translation deu-ukr
type: translation
args: deu-ukr
dataset:
name: flores101-devtest
type: flores_101
args: deu ukr devtest
metrics:
- name: BLEU
type: bleu
value: 24.2
- task:
name: Translation deu-bel
type: translation
args: deu-bel
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: deu-bel
metrics:
- name: BLEU
type: bleu
value: 29.5
- task:
name: Translation deu-rus
type: translation
args: deu-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: deu-rus
metrics:
- name: BLEU
type: bleu
value: 46.1
- task:
name: Translation deu-ukr
type: translation
args: deu-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: deu-ukr
metrics:
- name: BLEU
type: bleu
value: 40.7
- task:
name: Translation deu-rus
type: translation
args: deu-rus
dataset:
name: newstest2012
type: wmt-2012-news
args: deu-rus
metrics:
- name: BLEU
type: bleu
value: 20.8
- task:
name: Translation deu-rus
type: translation
args: deu-rus
dataset:
name: newstest2013
type: wmt-2013-news
args: deu-rus
metrics:
- name: BLEU
type: bleu
value: 24.9
---
# opus-mt-tc-big-de-zle
Neural machine translation model for translating from German (de) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): deu
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-zle/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information released models: [OPUS-MT deu-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>ukr<< Der Soldat hat mir Wasser gegeben.",
">>ukr<< Ich will hier nicht essen."
]
model_name = "pytorch-models/opus-mt-tc-big-de-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Солдат дав мені воду.
# Я не хочу тут їсти.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-de-zle")
print(pipe(">>ukr<< Der Soldat hat mir Wasser gegeben."))
# expected output: Солдат дав мені воду.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| deu-bel | tatoeba-test-v2021-08-07 | 0.53128 | 29.5 | 551 | 3601 |
| deu-rus | tatoeba-test-v2021-08-07 | 0.67143 | 46.1 | 12800 | 87296 |
| deu-ukr | tatoeba-test-v2021-08-07 | 0.62737 | 40.7 | 10319 | 56287 |
| deu-rus | flores101-devtest | 0.54152 | 26.3 | 1012 | 23295 |
| deu-ukr | flores101-devtest | 0.53286 | 24.2 | 1012 | 22810 |
| deu-rus | newstest2012 | 0.49409 | 20.8 | 3003 | 64790 |
| deu-rus | newstest2013 | 0.52631 | 24.9 | 3000 | 58560 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 01:29:09 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-base-fi-uk | 97b18786a291e3e92104c025ca9afc3f6b9959f9 | 2022-06-01T13:08:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-fi-uk | 4 | null | transformers | 19,183 | ---
language:
- fi
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
---
# opus-mt-tc-base-fi-uk
Neural machine translation model for translating from Finnish (fi) to Ukrainian (uk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-07
* source language(s): fin
* target language(s): ukr
* model: transformer-align
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-align_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.zip)
* more information released models: [OPUS-MT fin-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-ukr/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Afrikka on ihmiskunnan kehto.",
"Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen."
]
model_name = "pytorch-models/opus-mt-tc-base-fi-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Африка є колискою людства.
# Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-fi-uk")
print(pipe("Afrikka on ihmiskunnan kehto."))
# expected output: Африка є колискою людства.
```
## Benchmarks
* test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-ukr/opusTCv20210807+pbt_transformer-align_2022-03-07.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| fin-ukr | flores101-devtest | 0.49562 | 19.7 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 02:00:05 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-fr-zle | 206f37c32fd23dbf78b90c2e298682f02ffcb18d | 2022-06-01T13:08:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"fr",
"ru",
"uk",
"zle",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-fr-zle | 4 | null | transformers | 19,184 | ---
language:
- be
- fr
- ru
- uk
- zle
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-fr-zle
results:
- task:
name: Translation fra-rus
type: translation
args: fra-rus
dataset:
name: flores101-devtest
type: flores_101
args: fra rus devtest
metrics:
- name: BLEU
type: bleu
value: 25.8
- task:
name: Translation fra-ukr
type: translation
args: fra-ukr
dataset:
name: flores101-devtest
type: flores_101
args: fra ukr devtest
metrics:
- name: BLEU
type: bleu
value: 23.1
- task:
name: Translation fra-bel
type: translation
args: fra-bel
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fra-bel
metrics:
- name: BLEU
type: bleu
value: 31.1
- task:
name: Translation fra-rus
type: translation
args: fra-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fra-rus
metrics:
- name: BLEU
type: bleu
value: 46.1
- task:
name: Translation fra-ukr
type: translation
args: fra-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: fra-ukr
metrics:
- name: BLEU
type: bleu
value: 39.9
- task:
name: Translation fra-rus
type: translation
args: fra-rus
dataset:
name: newstest2012
type: wmt-2012-news
args: fra-rus
metrics:
- name: BLEU
type: bleu
value: 23.1
- task:
name: Translation fra-rus
type: translation
args: fra-rus
dataset:
name: newstest2013
type: wmt-2013-news
args: fra-rus
metrics:
- name: BLEU
type: bleu
value: 24.8
---
# opus-mt-tc-big-fr-zle
Neural machine translation model for translating from French (fr) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): fra
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information released models: [OPUS-MT fra-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>rus<< Ils ont acheté un très bon appareil photo.",
">>ukr<< Il s'est soudain mis à pleuvoir."
]
model_name = "pytorch-models/opus-mt-tc-big-fr-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Они купили очень хорошую камеру.
# Раптом почався дощ.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fr-zle")
print(pipe(">>rus<< Ils ont acheté un très bon appareil photo."))
# expected output: Они купили очень хорошую камеру.
```
## Benchmarks
* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| fra-bel | tatoeba-test-v2021-08-07 | 0.52711 | 31.1 | 283 | 1703 |
| fra-rus | tatoeba-test-v2021-08-07 | 0.66502 | 46.1 | 11490 | 70123 |
| fra-ukr | tatoeba-test-v2021-08-07 | 0.61860 | 39.9 | 10035 | 54372 |
| fra-rus | flores101-devtest | 0.54106 | 25.8 | 1012 | 23295 |
| fra-ukr | flores101-devtest | 0.52733 | 23.1 | 1012 | 22810 |
| fra-rus | newstest2012 | 0.51254 | 23.1 | 3003 | 64790 |
| fra-rus | newstest2013 | 0.52342 | 24.8 | 3000 | 58560 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 02:05:04 EET 2022
* port machine: LM0-400-22516.local
|
Ryukijano/DialoGPT_med_model | c61938a84f438f07b7c4fd7eba86a1f4aeb7ac1a | 2022-03-24T13:14:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Ryukijano | null | Ryukijano/DialoGPT_med_model | 4 | null | transformers | 19,185 | Hello there! This bot is fine-tuned from DialoGPT for 45 epochs.
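A minimal single-turn chat sketch is shown below; the prompt and generation settings are illustrative assumptions, not documented by the author.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned DialoGPT-style checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Ryukijano/DialoGPT_med_model")
model = AutoModelForCausalLM.from_pretrained("Ryukijano/DialoGPT_med_model")

# Encode one user turn, terminated by the EOS token as DialoGPT expects.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly produced tokens.
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```
 |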
Helsinki-NLP/opus-mt-tc-base-ro-uk | da425340b48eaed8b02cc0937057b71ea83a41d1 | 2022-06-01T13:02:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ro",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-ro-uk | 4 | null | transformers | 19,186 | ---
language:
- ro
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-ro-uk
results:
- task:
name: Translation ron-ukr
type: translation
args: ron-ukr
dataset:
name: flores101-devtest
type: flores_101
args: ron ukr devtest
metrics:
- name: BLEU
type: bleu
value: 22.3
---
# opus-mt-tc-base-ro-uk
Neural machine translation model for translating from Romanian (ro) to Ukrainian (uk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-08
* source language(s): ron
* target language(s): ukr
* valid target language labels: >>ukr<<
* model: transformer-align
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.zip)
* more information released models: [OPUS-MT ron-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ron-ukr/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>ukr<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Articolul exprimă opinia personală a autorului.",
"Ornitorincii trăiesc în estul Austriei."
]
model_name = "pytorch-models/opus-mt-tc-base-ro-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Стаття висловлює особисту думку автора.
# Орніторінці живуть на сході Австрії.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-ro-uk")
print(pipe("Articolul exprimă opinia personală a autorului."))
# expected output: Стаття висловлює особисту думку автора.
```
## Benchmarks
* test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ron-ukr | flores101-devtest | 0.52391 | 22.3 | 1012 | 22810 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 03:30:40 EET 2022
* port machine: LM0-400-22516.local
|
Helsinki-NLP/opus-mt-tc-big-zls-zle | 6b6ea2e4cbec6c2a5fcf68860af05aeea38a3954 | 2022-06-01T13:04:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"bg",
"hr",
"ru",
"sh",
"sl",
"sr_Cyrl",
"sr_Latn",
"uk",
"zle",
"zls",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-zls-zle | 4 | null | transformers | 19,187 | ---
language:
- be
- bg
- hr
- ru
- sh
- sl
- sr_Cyrl
- sr_Latn
- uk
- zle
- zls
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-zls-zle
results:
- task:
name: Translation bul-rus
type: translation
args: bul-rus
dataset:
name: flores101-devtest
type: flores_101
args: bul rus devtest
metrics:
- name: BLEU
type: bleu
value: 24.6
- task:
name: Translation bul-ukr
type: translation
args: bul-ukr
dataset:
name: flores101-devtest
type: flores_101
args: bul ukr devtest
metrics:
- name: BLEU
type: bleu
value: 22.9
- task:
name: Translation hrv-rus
type: translation
args: hrv-rus
dataset:
name: flores101-devtest
type: flores_101
args: hrv rus devtest
metrics:
- name: BLEU
type: bleu
value: 23.5
- task:
name: Translation hrv-ukr
type: translation
args: hrv-ukr
dataset:
name: flores101-devtest
type: flores_101
args: hrv ukr devtest
metrics:
- name: BLEU
type: bleu
value: 21.9
- task:
name: Translation mkd-rus
type: translation
args: mkd-rus
dataset:
name: flores101-devtest
type: flores_101
args: mkd rus devtest
metrics:
- name: BLEU
type: bleu
value: 24.3
- task:
name: Translation mkd-ukr
type: translation
args: mkd-ukr
dataset:
name: flores101-devtest
type: flores_101
args: mkd ukr devtest
metrics:
- name: BLEU
type: bleu
value: 22.5
- task:
name: Translation slv-rus
type: translation
args: slv-rus
dataset:
name: flores101-devtest
type: flores_101
args: slv rus devtest
metrics:
- name: BLEU
type: bleu
value: 22.0
- task:
name: Translation slv-ukr
type: translation
args: slv-ukr
dataset:
name: flores101-devtest
type: flores_101
args: slv ukr devtest
metrics:
- name: BLEU
type: bleu
value: 20.2
- task:
name: Translation srp_Cyrl-rus
type: translation
args: srp_Cyrl-rus
dataset:
name: flores101-devtest
type: flores_101
args: srp_Cyrl rus devtest
metrics:
- name: BLEU
type: bleu
value: 25.7
- task:
name: Translation srp_Cyrl-ukr
type: translation
args: srp_Cyrl-ukr
dataset:
name: flores101-devtest
type: flores_101
args: srp_Cyrl ukr devtest
metrics:
- name: BLEU
type: bleu
value: 24.4
- task:
name: Translation bul-rus
type: translation
args: bul-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bul-rus
metrics:
- name: BLEU
type: bleu
value: 52.6
- task:
name: Translation bul-ukr
type: translation
args: bul-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bul-ukr
metrics:
- name: BLEU
type: bleu
value: 53.3
- task:
name: Translation hbs-rus
type: translation
args: hbs-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hbs-rus
metrics:
- name: BLEU
type: bleu
value: 58.5
- task:
name: Translation hbs-ukr
type: translation
args: hbs-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hbs-ukr
metrics:
- name: BLEU
type: bleu
value: 52.3
- task:
name: Translation hrv-ukr
type: translation
args: hrv-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: hrv-ukr
metrics:
- name: BLEU
type: bleu
value: 50.0
- task:
name: Translation slv-rus
type: translation
args: slv-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: slv-rus
metrics:
- name: BLEU
type: bleu
value: 27.3
- task:
name: Translation srp_Cyrl-rus
type: translation
args: srp_Cyrl-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Cyrl-rus
metrics:
- name: BLEU
type: bleu
value: 56.2
- task:
name: Translation srp_Cyrl-ukr
type: translation
args: srp_Cyrl-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Cyrl-ukr
metrics:
- name: BLEU
type: bleu
value: 51.8
- task:
name: Translation srp_Latn-rus
type: translation
args: srp_Latn-rus
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Latn-rus
metrics:
- name: BLEU
type: bleu
value: 60.1
- task:
name: Translation srp_Latn-ukr
type: translation
args: srp_Latn-ukr
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: srp_Latn-ukr
metrics:
- name: BLEU
type: bleu
value: 55.8
---
# opus-mt-tc-big-zls-zle
Neural machine translation model for translating from South Slavic languages (zls) to East Slavic languages (zle).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite these if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-23
* source language(s): bul hbs hrv slv srp_Cyrl srp_Latn
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zle/opusTCv20210807+bt_transformer-big_2022-03-23.zip)
* more information on released models: [OPUS-MT zls-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>bel<<`.
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>rus<< Gdje je brigadir?",
">>ukr<< Zovem se Seli."
]
model_name = "Helsinki-NLP/opus-mt-tc-big-zls-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Где бригадир?
# Мене звати Саллі.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zls-zle")
print(pipe(">>rus<< Gdje je brigadir?"))
# expected output: Где бригадир?
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zle/opusTCv20210807+bt_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zle/opusTCv20210807+bt_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bul-rus | tatoeba-test-v2021-08-07 | 0.71467 | 52.6 | 1247 | 7870 |
| bul-ukr | tatoeba-test-v2021-08-07 | 0.71757 | 53.3 | 1020 | 4932 |
| hbs-rus | tatoeba-test-v2021-08-07 | 0.74593 | 58.5 | 2500 | 14213 |
| hbs-ukr | tatoeba-test-v2021-08-07 | 0.70244 | 52.3 | 942 | 4961 |
| hrv-ukr | tatoeba-test-v2021-08-07 | 0.68931 | 50.0 | 389 | 2232 |
| slv-rus | tatoeba-test-v2021-08-07 | 0.42255 | 27.3 | 657 | 4056 |
| srp_Cyrl-rus | tatoeba-test-v2021-08-07 | 0.74112 | 56.2 | 881 | 5117 |
| srp_Cyrl-ukr | tatoeba-test-v2021-08-07 | 0.68915 | 51.8 | 205 | 1061 |
| srp_Latn-rus | tatoeba-test-v2021-08-07 | 0.75340 | 60.1 | 1483 | 8311 |
| srp_Latn-ukr | tatoeba-test-v2021-08-07 | 0.73106 | 55.8 | 348 | 1668 |
| bul-rus | flores101-devtest | 0.54226 | 24.6 | 1012 | 23295 |
| bul-ukr | flores101-devtest | 0.53382 | 22.9 | 1012 | 22810 |
| hrv-rus | flores101-devtest | 0.51726 | 23.5 | 1012 | 23295 |
| hrv-ukr | flores101-devtest | 0.51011 | 21.9 | 1012 | 22810 |
| mkd-bel | flores101-devtest | 0.40885 | 10.7 | 1012 | 24829 |
| mkd-rus | flores101-devtest | 0.52509 | 24.3 | 1012 | 23295 |
| mkd-ukr | flores101-devtest | 0.52021 | 22.5 | 1012 | 22810 |
| slv-rus | flores101-devtest | 0.50349 | 22.0 | 1012 | 23295 |
| slv-ukr | flores101-devtest | 0.49156 | 20.2 | 1012 | 22810 |
| srp_Cyrl-rus | flores101-devtest | 0.53656 | 25.7 | 1012 | 23295 |
| srp_Cyrl-ukr | flores101-devtest | 0.53623 | 24.4 | 1012 | 22810 |
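The chr-F and BLEU scores above can be recomputed with [sacrebleu](https://github.com/mjpost/sacrebleu) from the linked test-set translations. The sketch below is only an illustration with two toy sentence pairs; the exact preprocessing used by the OPUS-MT evaluation pipeline may differ:
```python
import sacrebleu

# Toy placeholder data; in practice, read the system output and references
# from the test-set translation files linked above.
hypotheses = ["Где бригадир?", "Мене звати Саллі."]
references = [["Где бригадир?", "Мене звати Саллі."]]  # one reference stream

print(sacrebleu.corpus_bleu(hypotheses, references).score)   # BLEU
print(sacrebleu.corpus_chrf(hypotheses, references).score)   # chrF
```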
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 04:08:51 EET 2022
* port machine: LM0-400-22516.local
|
BogdanKuloren/vi_classification_eqhub_roberta | 7fcd94b3915657b0035179f31984652f407bd074 | 2022-03-24T16:57:32.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | BogdanKuloren | null | BogdanKuloren/vi_classification_eqhub_roberta | 4 | null | transformers | 19,188 | Entry not found |
yy642/bert-base-uncased-finetuned-mnli-max-length-256-epoch-6 | 0461e7faf387585ebdfd18d208c0fcc005fac536 | 2022-03-25T04:40:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yy642 | null | yy642/bert-base-uncased-finetuned-mnli-max-length-256-epoch-6 | 4 | null | transformers | 19,189 | Entry not found |
agdsga/chinese-electra-large-discriminator-finetuned-ner | f76babcd5c96cd1c06a98be0fb3b7e2b75a83595 | 2022-03-25T08:31:30.000Z | [
"pytorch",
"tensorboard",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | agdsga | null | agdsga/chinese-electra-large-discriminator-finetuned-ner | 4 | null | transformers | 19,190 | Entry not found |
Graphcore/lxmert-vqa-uncased | 4287c49b3c917a174e0f973ba2db5a72ab1fb0e9 | 2022-05-25T18:29:27.000Z | [
"pytorch",
"lxmert",
"question-answering",
"dataset:Graphcore/vqa-lxmert",
"arxiv:1908.07490",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Graphcore | null | Graphcore/lxmert-vqa-uncased | 4 | null | transformers | 19,191 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- Graphcore/vqa-lxmert
metrics:
- accuracy
model-index:
- name: vqa
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: Graphcore/vqa-lxmert
type: Graphcore/vqa-lxmert
args: vqa
metrics:
- name: Accuracy
type: accuracy
value: 0.7242196202278137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Graphcore/lxmert-vqa-uncased
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency for training and running models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through Hugging Face Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug in any public dataset, and it integrates seamlessly with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
LXMERT is a transformer model for learning vision-and-language cross-modality representations. It consists of three encoders: an object-relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modelling, visual-language text alignment, ROI-feature regression, masked visual-attribute modelling, masked visual-object modelling, and visual-question answering objectives. It achieves state-of-the-art results on VQA and GQA.
Paper link : [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf)
## Intended uses & limitations
This model is a fine-tuned version of [unc-nlp/lxmert-base-uncased](https://huggingface.co/unc-nlp/lxmert-base-uncased) on the [Graphcore/vqa-lxmert](https://huggingface.co/datasets/Graphcore/vqa-lxmert) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Accuracy: 0.7242
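The card does not include an inference example. The following is a minimal sketch, assuming the tokenizer of the base `unc-nlp/lxmert-base-uncased` model and using random placeholder tensors where LXMERT expects region features and normalized boxes from an external object detector (the question and tensor shapes below are illustrative only):
```python
import torch
from transformers import LxmertTokenizer, LxmertForQuestionAnswering

# Tokenizer assumed to be the base model's; the QA checkpoint is this repository.
tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertForQuestionAnswering.from_pretrained("Graphcore/lxmert-vqa-uncased")

question = "What color is the cat?"
inputs = tokenizer(question, return_tensors="pt")

# Placeholder visual inputs: LXMERT expects ROI features (e.g. from a Faster
# R-CNN detector) and normalized bounding boxes; random tensors stand in here.
visual_feats = torch.rand(1, 36, 2048)
visual_pos = torch.rand(1, 36, 4)

outputs = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    visual_feats=visual_feats,
    visual_pos=visual_pos,
)
# question_answering_score has shape (batch_size, num_qa_labels); the argmax
# indexes into the VQA answer vocabulary used during fine-tuning.
predicted_answer_id = outputs.question_answering_score.argmax(-1).item()
print(predicted_answer_id)
```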
## Training and evaluation data
- [Graphcore/vqa-lxmert](https://huggingface.co/datasets/Graphcore/vqa-lxmert) dataset
## Training procedure
Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore).
Command line:
```
python examples/question-answering/run_vqa.py \
--model_name_or_path unc-nlp/lxmert-base-uncased \
--ipu_config_name Graphcore/lxmert-base-ipu \
--dataset_name Graphcore/vqa-lxmert \
--do_train \
--do_eval \
--max_seq_length 512 \
--per_device_train_batch_size 1 \
--num_train_epochs 4 \
--dataloader_num_workers 64 \
--logging_steps 5 \
--learning_rate 5e-5 \
--lr_scheduler_type linear \
--loss_scaling 16384 \
--weight_decay 0.01 \
--warmup_ratio 0.1 \
--output_dir /tmp/vqa/ \
--dataloader_drop_last \
--replace_qa_head \
--pod_type pod16
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4.0
- training precision: Mixed Precision
### Training results
```
***** train metrics *****
"epoch": 4.0,
"train_loss": 0.0060005393999575125,
"train_runtime": 13854.802,
"train_samples": 443757,
"train_samples_per_second": 128.116,
"train_steps_per_second": 2.002
***** eval metrics *****
"eval_accuracy": 0.7242196202278137,
"eval_loss": 0.0008745193481445312,
"eval_samples": 214354,
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
world-wide/is-legit-kwd-march-27 | a48b7485b15d147474a41a693c42ede59637cc9d | 2022-03-26T18:44:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:bozelosp/autotrain-data-legit-keyword",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | world-wide | null | world-wide/is-legit-kwd-march-27 | 4 | 1 | transformers | 19,192 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bozelosp/autotrain-data-legit-keyword
co2_eq_emissions: 0.5745216001459987
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 668419758
- CO2 Emissions (in grams): 0.5745216001459987
## Validation Metrics
- Loss: 0.5012844800949097
- Accuracy: 0.8057228915662651
- Precision: 0.7627627627627628
- Recall: 0.8355263157894737
- AUC: 0.868530701754386
- F1: 0.7974882260596545
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bozelosp/autotrain-legit-keyword-668419758
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bozelosp/autotrain-legit-keyword-668419758", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bozelosp/autotrain-legit-keyword-668419758", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
YXHugging/autotrain-xlm-roberta-base-reviews-672119797 | cf5eec7f478f0ee3c57b3c5e90fbfcac269efd8b | 2022-03-27T12:55:19.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | YXHugging | null | YXHugging/autotrain-xlm-roberta-base-reviews-672119797 | 4 | null | transformers | 19,193 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1019.0229633198007
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119797
- CO2 Emissions (in grams): 1019.0229633198007
## Validation Metrics
- Loss: 0.9898674488067627
- Accuracy: 0.5688083333333334
- Macro F1: 0.5640966271895913
- Micro F1: 0.5688083333333334
- Weighted F1: 0.5640966271895913
- Macro Precision: 0.5673737438011194
- Micro Precision: 0.5688083333333334
- Weighted Precision: 0.5673737438011194
- Macro Recall: 0.5688083333333334
- Micro Recall: 0.5688083333333334
- Weighted Recall: 0.5688083333333334
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119797
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119797", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119797", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
YXHugging/autotrain-xlm-roberta-base-reviews-672119798 | 1e68b1f385471c49b167c8b8dbf462e53f14c65d | 2022-03-27T12:58:03.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | YXHugging | null | YXHugging/autotrain-xlm-roberta-base-reviews-672119798 | 4 | null | transformers | 19,194 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1013.8825767332373
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119798
- CO2 Emissions (in grams): 1013.8825767332373
## Validation Metrics
- Loss: 0.9646632075309753
- Accuracy: 0.5789333333333333
- Macro F1: 0.5775792001871465
- Micro F1: 0.5789333333333333
- Weighted F1: 0.5775792001871465
- Macro Precision: 0.5829444191847423
- Micro Precision: 0.5789333333333333
- Weighted Precision: 0.5829444191847424
- Macro Recall: 0.5789333333333333
- Micro Recall: 0.5789333333333333
- Weighted Recall: 0.5789333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119798
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
YXHugging/autotrain-xlm-roberta-base-reviews-672119799 | ef3497dbeffd7afa3d5e5eae08851466fc1972d0 | 2022-03-28T01:30:54.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | YXHugging | null | YXHugging/autotrain-xlm-roberta-base-reviews-672119799 | 4 | null | transformers | 19,195 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1583.7188188958198
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119799
- CO2 Emissions (in grams): 1583.7188188958198
## Validation Metrics
- Loss: 0.9590993523597717
- Accuracy: 0.5827541666666667
- Macro F1: 0.5806748283026683
- Micro F1: 0.5827541666666667
- Weighted F1: 0.5806748283026683
- Macro Precision: 0.5834325027348383
- Micro Precision: 0.5827541666666667
- Weighted Precision: 0.5834325027348383
- Macro Recall: 0.5827541666666667
- Micro Recall: 0.5827541666666667
- Weighted Recall: 0.5827541666666667
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119799
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119799", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119799", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
YXHugging/autotrain-xlm-roberta-base-reviews-672119800 | bdd53ccbbc402f3780d3c9e9e0b087c7d02e0590 | 2022-03-28T08:18:33.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | YXHugging | null | YXHugging/autotrain-xlm-roberta-base-reviews-672119800 | 4 | null | transformers | 19,196 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 2011.6528745969179
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119800
- CO2 Emissions (in grams): 2011.6528745969179
## Validation Metrics
- Loss: 0.9570887088775635
- Accuracy: 0.5830708333333333
- Macro F1: 0.5789149828346194
- Micro F1: 0.5830708333333333
- Weighted F1: 0.5789149828346193
- Macro Precision: 0.5808338093704437
- Micro Precision: 0.5830708333333333
- Weighted Precision: 0.5808338093704437
- Macro Recall: 0.5830708333333334
- Micro Recall: 0.5830708333333333
- Weighted Recall: 0.5830708333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119800
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119800", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119800", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
YXHugging/autotrain-xlm-roberta-base-reviews-672119801 | c644154fbedc7fab284a5f066e9ae052e7dd20b6 | 2022-03-27T16:53:50.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | YXHugging | null | YXHugging/autotrain-xlm-roberta-base-reviews-672119801 | 4 | null | transformers | 19,197 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 999.5670927087938
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119801
- CO2 Emissions (in grams): 999.5670927087938
## Validation Metrics
- Loss: 0.9767692685127258
- Accuracy: 0.5738333333333333
- Macro F1: 0.5698748846905103
- Micro F1: 0.5738333333333333
- Weighted F1: 0.5698748846905102
- Macro Precision: 0.5734242161804903
- Micro Precision: 0.5738333333333333
- Weighted Precision: 0.5734242161804902
- Macro Recall: 0.5738333333333333
- Micro Recall: 0.5738333333333333
- Weighted Recall: 0.5738333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119801
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119801", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119801", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Wiam/distilbert-base-uncased-finetuned-squad | cd1a795ae68821882d80bc0955ad92ec38a381d6 | 2022-03-27T03:05:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Wiam | null | Wiam/distilbert-base-uncased-finetuned-squad | 4 | null | transformers | 19,198 | Entry not found |
Splend1dchan/t5small4-squad1024 | 0bb48bcc7fc1a6904754d788a88d0d71036c8571 | 2022-03-27T22:26:42.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Splend1dchan | null | Splend1dchan/t5small4-squad1024 | 4 | null | transformers | 19,199 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5small4-squad1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5small4-squad1024
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.9.0+cu102
- Tokenizers 0.11.6
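## Usage
The card does not document how to query the model. The sketch below is an assumption-laden example: it loads the checkpoint with the standard seq2seq classes and uses the common T5 SQuAD prompt format `question: ... context: ...`, which may not match the format used during this fine-tune.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Splend1dchan/t5small4-squad1024"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical prompt; the "question: ... context: ..." format is an assumption.
prompt = (
    "question: Where is the Eiffel Tower? "
    "context: The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```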
|