modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Akashpb13/Galician_xlsr | 6d7c65bc6ee00db4b0dceab044affedd4ea486b5 | 2022-03-24T11:56:24.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"gl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Akashpb13 | null | Akashpb13/Galician_xlsr | 18 | null | transformers | 8,700 | ---
language:
- gl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- gl
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Akashpb13/Galician_xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: gl
metrics:
- name: Test WER
type: wer
value: 0.11308483789555426
- name: Test CER
type: cer
value: 0.023982371794871796
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: gl
metrics:
- name: Test WER
type: wer
value: 0.11308483789555426
- name: Test CER
type: cer
value: 0.023982371794871796
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: gl
metrics:
- name: Test WER
type: wer
value: 11.31
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: gl
metrics:
- name: Test WER
type: wer
value: 39.05
---
# Akashpb13/Galician_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GL dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data, which merges the train, invalidated, reported, other, and dev datasets):
- Loss: 0.137096
- Wer: 0.196230
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Galician train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only data points with more upvotes than downvotes were kept, and duplicates were removed after concatenating all of the splits provided in Common Voice 8.0.
## Training procedure
To create the training dataset, all of the splits listed above were concatenated and a 90-10 train-evaluation split was applied.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.038100 | 3.035432 | 1.000000 |
| 1000 | 2.180000 | 0.406300 | 0.557964 |
| 1500 | 0.331700 | 0.153797 | 0.262394 |
| 2000 | 0.171600 | 0.145268 | 0.235627 |
| 2500 | 0.125900 | 0.136622 | 0.228087 |
| 3000 | 0.105400 | 0.131650 | 0.224128 |
| 3500 | 0.087600 | 0.141032 | 0.217531 |
| 4000 | 0.078300 | 0.143675 | 0.214515 |
| 4500 | 0.070000 | 0.144607 | 0.208106 |
| 5000 | 0.061500 | 0.135259 | 0.202828 |
| 5500 | 0.055600 | 0.130638 | 0.203959 |
| 6000 | 0.050500 | 0.137416 | 0.202451 |
| 6500 | 0.046600 | 0.140379 | 0.200000 |
| 7000 | 0.040800 | 0.140179 | 0.200377 |
| 7500 | 0.041000 | 0.138089 | 0.196795 |
| 8000 | 0.038400 | 0.136927 | 0.197172 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Galician_xlsr --dataset mozilla-foundation/common_voice_8_0 --config gl --split test
```
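For quick inference, the fine-tuned checkpoint can also be loaded through the `transformers` ASR pipeline. The snippet below is a minimal sketch: the audio path is a placeholder and the input is assumed to be 16 kHz mono audio.
```python
from transformers import pipeline

# Load the fine-tuned Galician checkpoint with the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="Akashpb13/Galician_xlsr")

# "sample_gl.wav" is a placeholder path; the audio should be 16 kHz mono.
print(asr("sample_gl.wav")["text"])
```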
|
AlexMaclean/sentence-compression-roberta | 79c877c8f5a67df3bfe4990da73e290c733134cf | 2021-12-06T04:22:17.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | AlexMaclean | null | AlexMaclean/sentence-compression-roberta | 18 | 1 | transformers | 8,701 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentence-compression-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence-compression-roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3465
- Accuracy: 0.8473
- F1: 0.6835
- Precision: 0.6835
- Recall: 0.6835
## Model description
More information needed
## Intended uses & limitations
More information needed
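As an illustrative sketch only (the label scheme of this checkpoint is not documented here), the model can be loaded with the token-classification pipeline and its raw predictions inspected:
```python
from transformers import pipeline

# Illustrative sketch: the label names depend on the checkpoint's config,
# so inspect the returned entities rather than assuming a fixed scheme.
tagger = pipeline("token-classification", model="AlexMaclean/sentence-compression-roberta")

sentence = "The quick brown fox, which was very hungry, jumped over the lazy dog."
for token in tagger(sentence):
    print(token["word"], token["entity"], round(token["score"], 3))
```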
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5312 | 1.0 | 50 | 0.5251 | 0.7591 | 0.0040 | 0.75 | 0.0020 |
| 0.4 | 2.0 | 100 | 0.4003 | 0.8200 | 0.5341 | 0.7113 | 0.4275 |
| 0.3355 | 3.0 | 150 | 0.3465 | 0.8473 | 0.6835 | 0.6835 | 0.6835 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | d1072d4b81dccf65cebc2280f26b518fb474c460 | 2021-10-17T12:09:56.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | 18 | null | transformers | 8,702 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم'
---
# CAMeLBERT-DA Poetry Classification Model
## Model description
**CAMeLBERT-DA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-poetry')
>>> # A list of verses where each verse consists of two parts.
>>> verses = [
    ['الخيل والليل والبيداء تعرفني', 'والسيف والرمح والقرطاس والقلم'],
    ['قم للمعلم وفه التبجيلا', 'كاد المعلم ان يكون رسولا']
]
>>> # A function that concatenates the halves of each verse by using the [SEP] token.
>>> join_verse = lambda half: ' [SEP] '.join(half)
>>> # Apply this to all the verses in the list.
>>> verses = [join_verse(verse) for verse in verses]
>>> poetry(verses)
[{'label': 'البسيط', 'score': 0.9874765276908875},
{'label': 'السلسلة', 'score': 0.6877778172492981}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Cinnamon/electra-small-japanese-generator | f74cb40569de2648344639c51b4969b230523ea1 | 2020-12-11T21:26:17.000Z | [
"pytorch",
"electra",
"fill-mask",
"ja",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Cinnamon | null | Cinnamon/electra-small-japanese-generator | 18 | 1 | transformers | 8,703 | ---
language: ja
---
## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.
```python
# ELECTRA-small generator usage
from transformers import BertJapaneseTokenizer, ElectraForMaskedLM
tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-generator', mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForMaskedLM.from_pretrained('Cinnamon/electra-small-japanese-generator')
```
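As a usage sketch for masked-token prediction (the example sentence is arbitrary, and the MeCab dictionary path must match your system), the generator can be driven through the fill-mask pipeline:
```python
from transformers import BertJapaneseTokenizer, ElectraForMaskedLM, pipeline

# Same loading call as above; adjust the MeCab dictionary path for your system.
tokenizer = BertJapaneseTokenizer.from_pretrained(
    'Cinnamon/electra-small-japanese-generator',
    mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"}
)
model = ElectraForMaskedLM.from_pretrained('Cinnamon/electra-small-japanese-generator')

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("東京は日本の[MASK]です。"):  # arbitrary example sentence
    print(prediction["token_str"], prediction["score"])
```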
|
DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 | 076b731294cfae58998b40daf54d4595a2667fb0 | 2022-03-23T18:30:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"bg",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 | 18 | 1 | transformers | 8,704 | ---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- bg
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-bg-d2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: bg
metrics:
- name: Test WER
type: wer
value: 0.28775471338792613
- name: Test CER
type: cer
value: 0.06861971204625049
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 0.49783147459727384
- name: Test CER
type: cer
value: 0.1591062599627158
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 51.25
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bg-d2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3421
- Wer: 0.2860
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.8791 | 1.74 | 200 | 3.1902 | 1.0 |
| 3.0441 | 3.48 | 400 | 2.8098 | 0.9864 |
| 1.1499 | 5.22 | 600 | 0.4668 | 0.5014 |
| 0.4968 | 6.96 | 800 | 0.4162 | 0.4472 |
| 0.3553 | 8.7 | 1000 | 0.3580 | 0.3777 |
| 0.3027 | 10.43 | 1200 | 0.3422 | 0.3506 |
| 0.2562 | 12.17 | 1400 | 0.3556 | 0.3639 |
| 0.2272 | 13.91 | 1600 | 0.3621 | 0.3583 |
| 0.2125 | 15.65 | 1800 | 0.3436 | 0.3358 |
| 0.1904 | 17.39 | 2000 | 0.3650 | 0.3545 |
| 0.1695 | 19.13 | 2200 | 0.3366 | 0.3241 |
| 0.1532 | 20.87 | 2400 | 0.3550 | 0.3311 |
| 0.1453 | 22.61 | 2600 | 0.3582 | 0.3131 |
| 0.1359 | 24.35 | 2800 | 0.3524 | 0.3084 |
| 0.1233 | 26.09 | 3000 | 0.3503 | 0.2973 |
| 0.1114 | 27.83 | 3200 | 0.3434 | 0.2946 |
| 0.1051 | 29.57 | 3400 | 0.3474 | 0.2956 |
| 0.0965 | 31.3 | 3600 | 0.3426 | 0.2907 |
| 0.0923 | 33.04 | 3800 | 0.3478 | 0.2894 |
| 0.0894 | 34.78 | 4000 | 0.3421 | 0.2860 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
EasthShin/Android_Ios_Classification | 4058e1b08c146f9f2fc5ed0e64120d728cae1466 | 2021-08-22T16:18:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | EasthShin | null | EasthShin/Android_Ios_Classification | 18 | null | transformers | 8,705 | ## Bert-base-uncased for Android-Ios Question Classification
**Code**: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/EastHShin/Android-Ios-Classification-Workspace)
<br>
**Android-Ios-Classification DEMO**: [Ainize Endpoint](https://main-android-ios-classification-east-h-shin.endpoint.ainize.ai/)
<br>
**Demo web Code**: [Github](https://github.com/EastHShin/Android-Ios-Classification)
<br>
**Android-Ios-Classification API**: [Ainize API](https://ainize.ai/EastHShin/Android-Ios-Classification)
<br>
<br>
## Overview
**Language model**: bert-base-cased
<br>
**Language**: English
<br>
**Training data**: Question classification Android-Ios dataset from [Kaggle](https://www.kaggle.com/xhlulu/question-classification-android-or-ios)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_path = "EasthShin/Android_Ios_Classification"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
classifier = pipeline('text-classification', model=model_path, tokenizer=tokenizer)
question = "I bought goodnote in Appstore"
result = dict()
result[0] = classifier(question)[0]
``` |
EhsanAghazadeh/xlnet-large-cased-CoLA_C | b710afe83a3a8203502d65a5efef6741b6b8021b | 2021-04-18T18:42:36.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/xlnet-large-cased-CoLA_C | 18 | null | transformers | 8,706 | Entry not found |
Harveenchadha/wav2vec2-pretrained-clsril-23-10k | 026bd5f2c194197e032c91940b88fdc71455aad8 | 2021-08-06T13:40:49.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"arxiv:2107.07402",
"transformers"
] | feature-extraction | false | Harveenchadha | null | Harveenchadha/wav2vec2-pretrained-clsril-23-10k | 18 | 2 | transformers | 8,707 | ## Overview
We present CLSRIL-23 (Cross-Lingual Speech Representations on Indic Languages), a self-supervised audio pre-trained model that learns cross-lingual speech representations from raw audio across **23 Indic languages**. It is built on top of wav2vec 2.0, which is trained with a contrastive task over masked latent speech representations and jointly learns the quantization of latents shared across all languages.
[Arxiv Link](https://arxiv.org/pdf/2107.07402.pdf)
[Original Repo](https://github.com/Open-Speech-EkStep/vakyansh-models) contains models in fairseq format.
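As a rough usage sketch for feature extraction with the Hugging Face checkpoint (assuming a preprocessor config is bundled with the repo; otherwise construct `Wav2Vec2FeatureExtractor()` with its defaults), 16 kHz audio can be turned into frame-level representations as follows:
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "Harveenchadha/wav2vec2-pretrained-clsril-23-10k"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# One second of silence stands in for real 16 kHz mono audio.
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_dim)
print(hidden_states.shape)
```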
## Languages in the pretraining dataset
| Language | Data (In Hrs) |
|-----------|---------------|
| Assamese | 254.9 |
| Bengali | 331.3 |
| Bodo | 26.9 |
| Dogri | 17.1 |
| English | 819.7 |
| Gujarati | 336.7 |
| Hindi | 4563.7 |
| Kannada | 451.8 |
| Kashmiri | 67.8 |
| Konkani | 36.8 |
| Maithili | 113.8 |
| Malayalam | 297.7 |
| Manipuri | 171.9 |
| Marathi | 458.2 |
| Nepali | 31.6 |
| Odia | 131.4 |
| Punjabi | 486.05 |
| Sanskrit | 58.8 |
| Santali | 6.56 |
| Sindhi | 16 |
| Tamil | 542.6 |
| Telugu | 302.8 |
| Urdu | 259.68 |
## Repo for training:
[Experimentation](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation) platform built on top of fairseq.
|
Hate-speech-CNERG/dehatebert-mono-italian | aeb70b454d5fc3046aa2a062c525d1ac60f2f01b | 2021-09-25T13:56:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"it",
"arxiv:2004.06465",
"transformers",
"license:apache-2.0"
] | text-classification | false | Hate-speech-CNERG | null | Hate-speech-CNERG/dehatebert-mono-italian | 18 | null | transformers | 8,708 | ---
language: it
license: apache-2.0
---
This model is used for detecting **hate speech** in **Italian**. The "mono" in the name refers to the monolingual setting, in which the model is trained using only Italian-language data. It is fine-tuned from the multilingual BERT model.
The model was trained with different learning rates; the best validation score achieved is 0.837288, at a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT)
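A minimal way to try the model is through the text-classification pipeline; the Italian example sentence below is arbitrary, and the label names come from the checkpoint's config, so inspect the output rather than assuming a fixed label set.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-italian")

# Arbitrary, benign Italian example sentence ("Good morning, how are you today?").
print(classifier("Buongiorno, come stai oggi?"))
```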
### For more details about our paper
Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{aluru2020deep,
title={Deep Learning Models for Multilingual Hate Speech Detection},
author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2004.06465},
year={2020}
}
~~~
|
Helsinki-NLP/opus-mt-be-es | fb10b7c6cd82bc7662ce52f2986f927579483fad | 2021-01-18T07:49:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-be-es | 18 | null | transformers | 8,709 | ---
language:
- be
- es
tags:
- translation
license: apache-2.0
---
### bel-spa
* source group: Belarusian
* target group: Spanish
* OPUS readme: [bel-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md)
* model: transformer-align
* source language(s): bel bel_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bel.spa | 11.8 | 0.272 |
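As a usage sketch (the Belarusian example sentence is arbitrary), the checkpoint can be driven through the translation pipeline:
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-be-es")

# Arbitrary Belarusian example ("Good morning, how are things?").
print(translator("Добрай раніцы, як справы?")[0]["translation_text"])
```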
### System Info:
- hf_name: bel-spa
- source_languages: bel
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'es']
- src_constituents: {'bel', 'bel_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt
- src_alpha3: bel
- tgt_alpha3: spa
- short_pair: be-es
- chrF2_score: 0.272
- bleu: 11.8
- brevity_penalty: 0.892
- ref_len: 1412.0
- src_name: Belarusian
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: be
- tgt_alpha2: es
- prefer_old: False
- long_pair: bel-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ca-de | 0ba8e70435e98ce0a17866dbf8c3906b1c13a8d7 | 2021-01-18T07:52:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ca",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ca-de | 18 | null | transformers | 8,710 | ---
language:
- ca
- de
tags:
- translation
license: apache-2.0
---
### cat-deu
* source group: Catalan
* target group: German
* OPUS readme: [cat-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.deu | 39.5 | 0.593 |
### System Info:
- hf_name: cat-deu
- source_languages: cat
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'de']
- src_constituents: {'cat'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: deu
- short_pair: ca-de
- chrF2_score: 0.593
- bleu: 39.5
- brevity_penalty: 1.0
- ref_len: 5643.0
- src_name: Catalan
- tgt_name: German
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: de
- prefer_old: False
- long_pair: cat-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-da-de | 2e4d10f7054f579178b167e5082b0e57726eee44 | 2021-09-09T21:29:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-da-de | 18 | null | transformers | 8,711 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-da-de
* source languages: da
* target languages: de
* OPUS readme: [da-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-de/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-de/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-de/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.de | 57.4 | 0.740 |
|
Helsinki-NLP/opus-mt-en-chk | a57e025c3f8a7a9b20968190b6a6db234ef1541a | 2021-09-09T21:34:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"chk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-chk | 18 | null | transformers | 8,712 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-chk
* source languages: en
* target languages: chk
* OPUS readme: [en-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-chk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.chk | 26.1 | 0.468 |
|
Helsinki-NLP/opus-mt-en-crs | 1f25af1f9d1c0680005a9f0d16ed8bb412784c32 | 2021-09-09T21:34:38.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"crs",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-crs | 18 | null | transformers | 8,713 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-crs
* source languages: en
* target languages: crs
* OPUS readme: [en-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.crs | 45.2 | 0.617 |
|
Helsinki-NLP/opus-mt-en-luo | fafd6071295dbf194acd2bf04cf51f4e46b9f10b | 2021-09-09T21:37:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"luo",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-luo | 18 | 1 | transformers | 8,714 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-luo
* source languages: en
* target languages: luo
* OPUS readme: [en-luo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-luo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.luo | 27.6 | 0.495 |
|
Helsinki-NLP/opus-mt-en-pon | 78431adc00a85251bad917dd0d99f57b7dff5519 | 2021-09-09T21:38:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"pon",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-pon | 18 | null | transformers | 8,715 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-pon
* source languages: en
* target languages: pon
* OPUS readme: [en-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pon | 32.4 | 0.542 |
|
Helsinki-NLP/opus-mt-en-run | 71f1ba7d823772630debcf2664556316b29c4bc7 | 2021-09-09T21:38:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"run",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-run | 18 | null | transformers | 8,716 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-run
* source languages: en
* target languages: run
* OPUS readme: [en-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.run | 34.2 | 0.591 |
|
Helsinki-NLP/opus-mt-en-sn | 8270891c929d30483217b2dd31cd3784b4863da9 | 2021-09-09T21:39:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sn",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sn | 18 | null | transformers | 8,717 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sn
* source languages: en
* target languages: sn
* OPUS readme: [en-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sn | 38.0 | 0.646 |
|
Helsinki-NLP/opus-mt-en-ss | bec263f6023f89a296c8ac5b345772709a8587ad | 2021-09-09T21:39:20.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ss",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ss | 18 | null | transformers | 8,718 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ss
* source languages: en
* target languages: ss
* OPUS readme: [en-ss](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ss/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ss/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ss/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ss/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ss | 25.7 | 0.541 |
|
Helsinki-NLP/opus-mt-en-swc | d94d46a6b644d279595a3002622e491682a8658d | 2021-09-09T21:39:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"swc",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-swc | 18 | null | transformers | 8,719 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-swc
* source languages: en
* target languages: swc
* OPUS readme: [en-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.swc | 40.1 | 0.613 |
|
Helsinki-NLP/opus-mt-en-tvl | 515d37d27b5e9781bad9c809e501f68773824d0f | 2021-09-09T21:40:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"tvl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-tvl | 18 | null | transformers | 8,720 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-tvl
* source languages: en
* target languages: tvl
* OPUS readme: [en-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tvl | 46.9 | 0.625 |
|
Helsinki-NLP/opus-mt-es-NORWAY | 3c5425e7514f9f47f9822d5947ac5f56d68b572c | 2021-09-09T21:41:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"no",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-NORWAY | 18 | null | transformers | 8,721 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-NORWAY
* source languages: es
* target languages: nb_NO,nb,nn_NO,nn,nog,no_nb,no
* OPUS readme: [es-nb_NO+nb+nn_NO+nn+nog+no_nb+no](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.no | 31.6 | 0.523 |
|
Helsinki-NLP/opus-mt-es-lus | 6a8ac408bcb84e553747298b5ae96986398f6e85 | 2021-09-09T21:43:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"lus",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-lus | 18 | null | transformers | 8,722 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-lus
* source languages: es
* target languages: lus
* OPUS readme: [es-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-lus/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.lus | 20.9 | 0.414 |
|
Helsinki-NLP/opus-mt-fr-he | fd07a640906ea642940eefaf7f5b07fae013ba63 | 2021-01-18T08:43:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-he | 18 | null | transformers | 8,723 | ---
language:
- fr
- he
tags:
- translation
license: apache-2.0
---
### fr-he
* source group: French
* target group: Hebrew
* OPUS readme: [fra-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-heb/README.md)
* model: transformer
* source language(s): fra
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.heb | 39.2 | 0.598 |
### System Info:
- hf_name: fr-he
- source_languages: fra
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'he']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.test.txt
- src_alpha3: fra
- tgt_alpha3: heb
- chrF2_score: 0.598
- bleu: 39.2
- brevity_penalty: 1.0
- ref_len: 20655.0
- src_name: French
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: fr
- tgt_alpha2: he
- prefer_old: False
- short_pair: fr-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:02 |
Helsinki-NLP/opus-mt-fr-to | 3ad564b525751784417060ca0c2e1b3d170cd52c | 2021-09-09T21:57:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"to",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-to | 18 | null | transformers | 8,724 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-to
* source languages: fr
* target languages: to
* OPUS readme: [fr-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.to | 37.0 | 0.518 |
|
Helsinki-NLP/opus-mt-it-ms | c954aae9852f40ee4d8ede1d14fa06b36dd95c36 | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"ms",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-it-ms | 18 | null | transformers | 8,725 | ---
language:
- it
- ms
tags:
- translation
license: apache-2.0
---
### ita-msa
* source group: Italian
* target group: Malay (macrolanguage)
* OPUS readme: [ita-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-msa/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.msa | 26.0 | 0.536 |
### System Info:
- hf_name: ita-msa
- source_languages: ita
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ms']
- src_constituents: {'ita'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: msa
- short_pair: it-ms
- chrF2_score: 0.536
- bleu: 26.0
- brevity_penalty: 0.9209999999999999
- ref_len: 2765.0
- src_name: Italian
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: ms
- prefer_old: False
- long_pair: ita-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-lue-en | 1780ade95cbfcd12acec3e3f67218312e3c35ab9 | 2021-09-10T13:56:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lue",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lue-en | 18 | null | transformers | 8,726 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lue-en
* source languages: lue
* target languages: en
* OPUS readme: [lue-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.en | 31.7 | 0.469 |
|
Helsinki-NLP/opus-mt-sla-sla | ca62c5189ed8e6593f101da91fe2aadb9bd57f51 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"be",
"hr",
"mk",
"cs",
"ru",
"pl",
"bg",
"uk",
"sl",
"sla",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sla-sla | 18 | null | transformers | 8,727 | ---
language:
- be
- hr
- mk
- cs
- ru
- pl
- bg
- uk
- sl
- sla
tags:
- translation
license: apache-2.0
---
### sla-sla
* source group: Slavic languages
* target group: Slavic languages
* OPUS readme: [sla-sla](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-sla/README.md)
* model: transformer
* source language(s): bel bel_Latn bos_Latn bul bul_Latn ces dsb hrv hsb mkd orv_Cyrl pol rus slv srp_Cyrl srp_Latn ukr
* target language(s): bel bel_Latn bos_Latn bul bul_Latn ces dsb hrv hsb mkd orv_Cyrl pol rus slv srp_Cyrl srp_Latn ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-cesrus.ces.rus | 15.9 | 0.437 |
| newstest2012-rusces.rus.ces | 13.6 | 0.403 |
| newstest2013-cesrus.ces.rus | 19.8 | 0.473 |
| newstest2013-rusces.rus.ces | 17.9 | 0.449 |
| Tatoeba-test.bel-bul.bel.bul | 100.0 | 1.000 |
| Tatoeba-test.bel-ces.bel.ces | 33.5 | 0.630 |
| Tatoeba-test.bel-hbs.bel.hbs | 45.4 | 0.644 |
| Tatoeba-test.bel-mkd.bel.mkd | 19.3 | 0.531 |
| Tatoeba-test.bel-pol.bel.pol | 46.9 | 0.681 |
| Tatoeba-test.bel-rus.bel.rus | 58.5 | 0.767 |
| Tatoeba-test.bel-ukr.bel.ukr | 55.1 | 0.743 |
| Tatoeba-test.bul-bel.bul.bel | 10.7 | 0.423 |
| Tatoeba-test.bul-ces.bul.ces | 36.9 | 0.585 |
| Tatoeba-test.bul-hbs.bul.hbs | 53.7 | 0.807 |
| Tatoeba-test.bul-mkd.bul.mkd | 31.9 | 0.715 |
| Tatoeba-test.bul-pol.bul.pol | 38.6 | 0.607 |
| Tatoeba-test.bul-rus.bul.rus | 44.8 | 0.655 |
| Tatoeba-test.bul-ukr.bul.ukr | 49.9 | 0.691 |
| Tatoeba-test.ces-bel.ces.bel | 30.9 | 0.585 |
| Tatoeba-test.ces-bul.ces.bul | 75.8 | 0.859 |
| Tatoeba-test.ces-hbs.ces.hbs | 50.0 | 0.661 |
| Tatoeba-test.ces-hsb.ces.hsb | 7.9 | 0.246 |
| Tatoeba-test.ces-mkd.ces.mkd | 24.6 | 0.569 |
| Tatoeba-test.ces-pol.ces.pol | 44.3 | 0.652 |
| Tatoeba-test.ces-rus.ces.rus | 50.8 | 0.690 |
| Tatoeba-test.ces-slv.ces.slv | 4.9 | 0.240 |
| Tatoeba-test.ces-ukr.ces.ukr | 52.9 | 0.687 |
| Tatoeba-test.dsb-pol.dsb.pol | 16.3 | 0.367 |
| Tatoeba-test.dsb-rus.dsb.rus | 12.7 | 0.245 |
| Tatoeba-test.hbs-bel.hbs.bel | 32.9 | 0.531 |
| Tatoeba-test.hbs-bul.hbs.bul | 100.0 | 1.000 |
| Tatoeba-test.hbs-ces.hbs.ces | 40.3 | 0.626 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 19.3 | 0.535 |
| Tatoeba-test.hbs-pol.hbs.pol | 45.0 | 0.650 |
| Tatoeba-test.hbs-rus.hbs.rus | 53.5 | 0.709 |
| Tatoeba-test.hbs-ukr.hbs.ukr | 50.7 | 0.684 |
| Tatoeba-test.hsb-ces.hsb.ces | 17.9 | 0.366 |
| Tatoeba-test.mkd-bel.mkd.bel | 23.6 | 0.548 |
| Tatoeba-test.mkd-bul.mkd.bul | 54.2 | 0.833 |
| Tatoeba-test.mkd-ces.mkd.ces | 12.1 | 0.371 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 19.3 | 0.577 |
| Tatoeba-test.mkd-pol.mkd.pol | 53.7 | 0.833 |
| Tatoeba-test.mkd-rus.mkd.rus | 34.2 | 0.745 |
| Tatoeba-test.mkd-ukr.mkd.ukr | 42.7 | 0.708 |
| Tatoeba-test.multi.multi | 48.5 | 0.672 |
| Tatoeba-test.orv-pol.orv.pol | 10.1 | 0.355 |
| Tatoeba-test.orv-rus.orv.rus | 10.6 | 0.275 |
| Tatoeba-test.orv-ukr.orv.ukr | 7.5 | 0.230 |
| Tatoeba-test.pol-bel.pol.bel | 29.8 | 0.533 |
| Tatoeba-test.pol-bul.pol.bul | 36.8 | 0.578 |
| Tatoeba-test.pol-ces.pol.ces | 43.6 | 0.626 |
| Tatoeba-test.pol-dsb.pol.dsb | 0.9 | 0.097 |
| Tatoeba-test.pol-hbs.pol.hbs | 42.4 | 0.644 |
| Tatoeba-test.pol-mkd.pol.mkd | 19.3 | 0.535 |
| Tatoeba-test.pol-orv.pol.orv | 0.7 | 0.109 |
| Tatoeba-test.pol-rus.pol.rus | 49.6 | 0.680 |
| Tatoeba-test.pol-slv.pol.slv | 7.3 | 0.262 |
| Tatoeba-test.pol-ukr.pol.ukr | 46.8 | 0.664 |
| Tatoeba-test.rus-bel.rus.bel | 34.4 | 0.577 |
| Tatoeba-test.rus-bul.rus.bul | 45.5 | 0.657 |
| Tatoeba-test.rus-ces.rus.ces | 48.0 | 0.659 |
| Tatoeba-test.rus-dsb.rus.dsb | 10.7 | 0.029 |
| Tatoeba-test.rus-hbs.rus.hbs | 44.6 | 0.655 |
| Tatoeba-test.rus-mkd.rus.mkd | 34.9 | 0.617 |
| Tatoeba-test.rus-orv.rus.orv | 0.1 | 0.073 |
| Tatoeba-test.rus-pol.rus.pol | 45.2 | 0.659 |
| Tatoeba-test.rus-slv.rus.slv | 30.4 | 0.476 |
| Tatoeba-test.rus-ukr.rus.ukr | 57.6 | 0.751 |
| Tatoeba-test.slv-ces.slv.ces | 42.5 | 0.604 |
| Tatoeba-test.slv-pol.slv.pol | 39.6 | 0.601 |
| Tatoeba-test.slv-rus.slv.rus | 47.2 | 0.638 |
| Tatoeba-test.slv-ukr.slv.ukr | 36.4 | 0.549 |
| Tatoeba-test.ukr-bel.ukr.bel | 36.9 | 0.597 |
| Tatoeba-test.ukr-bul.ukr.bul | 56.4 | 0.733 |
| Tatoeba-test.ukr-ces.ukr.ces | 52.1 | 0.686 |
| Tatoeba-test.ukr-hbs.ukr.hbs | 47.1 | 0.670 |
| Tatoeba-test.ukr-mkd.ukr.mkd | 20.8 | 0.548 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.2 | 0.058 |
| Tatoeba-test.ukr-pol.ukr.pol | 50.1 | 0.695 |
| Tatoeba-test.ukr-rus.ukr.rus | 63.9 | 0.790 |
| Tatoeba-test.ukr-slv.ukr.slv | 14.5 | 0.288 |
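Because this is a multilingual model, the target language must be selected with a sentence-initial `>>id<<` token, as noted above. A minimal sketch (the Polish example sentence is arbitrary; `>>rus<<` is one of the target IDs listed above):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sla-sla"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token; here Polish -> Russian.
src_text = [">>rus<< Dzień dobry, jak się masz?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```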
### System Info:
- hf_name: sla-sla
- source_languages: sla
- target_languages: sla
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-sla/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']
- src_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- tgt_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.test.txt
- src_alpha3: sla
- tgt_alpha3: sla
- short_pair: sla-sla
- chrF2_score: 0.672
- bleu: 48.5
- brevity_penalty: 1.0
- ref_len: 59320.0
- src_name: Slavic languages
- tgt_name: Slavic languages
- train_date: 2020-07-27
- src_alpha2: sla
- tgt_alpha2: sla
- prefer_old: False
- long_pair: sla-sla
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sv-el | 2cd4e62c20d1e0003c3043f60301f4da8fb23a3d | 2021-09-10T14:06:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"el",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-el | 18 | null | transformers | 8,728 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-el
* source languages: sv
* target languages: el
* OPUS readme: [sv-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-el/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-el/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-el/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.sv.el | 20.8 | 0.456 |
|
Helsinki-NLP/opus-mt-toi-en | 5e7d9737899431120886a54d998cc37240c24c06 | 2021-09-11T10:49:09.000Z | [
"pytorch",
"marian",
"text2text-generation",
"toi",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-toi-en | 18 | null | transformers | 8,729 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-toi-en
* source languages: toi
* target languages: en
* OPUS readme: [toi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.en | 39.0 | 0.539 |
|
ImAPizza/DialoGPT-medium-albert | 20a574e650fd97c3dcf8d03a0a880285bd437265 | 2021-08-29T11:59:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ImAPizza | null | ImAPizza/DialoGPT-medium-albert | 18 | null | transformers | 8,730 | ---
tags:
- conversational
---
# Albert DialoGPT Model |
Ivo/emscad-skill-extraction-conference | 2fedd4e5d2ba4e620e8e1c797faa61b343d83e17 | 2021-06-15T07:59:57.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Ivo | null | Ivo/emscad-skill-extraction-conference | 18 | null | transformers | 8,731 | Entry not found |
Kowsher/model-bangla-bert | 11dc9ea77a22f46d65720b5e96beb4b96b19eb67 | 2021-07-05T16:31:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Kowsher | null | Kowsher/model-bangla-bert | 18 | 1 | transformers | 8,732 | Entry not found |
LuisG07/wav2vec2-large-xlsr-53-spanish | af2780e93694b39e273467d3fd6e4ae7c824af1f | 2022-04-22T08:38:35.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"transformers",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | LuisG07 | null | LuisG07/wav2vec2-large-xlsr-53-spanish | 18 | null | transformers | 8,733 | ---
language: es
license: apache-2.0
datasets:
- common_voice
- mozilla-foundation/common_voice_6_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- es
- hf-asr-leaderboard
- mozilla-foundation/common_voice_6_0
- robust-speech-event
- speech
- xlsr-fine-tuning-week
model-index:
- name: XLSR Wav2Vec2 Spanish by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice es
type: common_voice
args: es
metrics:
- name: Test WER
type: wer
value: 8.82
- name: Test CER
type: cer
value: 2.58
- name: Test WER (+LM)
type: wer
value: 6.27
- name: Test CER (+LM)
type: cer
value: 2.06
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Dev WER
type: wer
value: 30.19
- name: Dev CER
type: cer
value: 13.56
- name: Dev WER (+LM)
type: wer
value: 24.71
- name: Dev CER (+LM)
type: cer
value: 12.61
---
# Wav2Vec2-Large-XLSR-53-Spanish
Added custom language model to https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [ASRecognition](https://github.com/jonatasgrosman/asrecognition) library:
```python
from asrecognition import ASREngine
asr = ASREngine("es", model_path="jonatasgrosman/wav2vec2-large-xlsr-53-spanish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "es"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-spanish"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| HABITA EN AGUAS POCO PROFUNDAS Y ROCOSAS. | HABITAN AGUAS POCO PROFUNDAS Y ROCOSAS |
| OPERA PRINCIPALMENTE VUELOS DE CABOTAJE Y REGIONALES DE CARGA. | OPERA PRINCIPALMENTE VUELO DE CARBOTAJES Y REGIONALES DE CARGAN |
| PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN. | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN |
| TRES | TRES |
| REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA, PARA CONTINUAR LUEGO EN ESPAÑA. | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA PARA CONTINUAR LUEGO EN ESPAÑA |
| EN LOS AÑOS QUE SIGUIERON, ESTE TRABAJO ESPARTA PRODUJO DOCENAS DE BUENOS JUGADORES. | EN LOS AÑOS QUE SIGUIERON ESTE TRABAJO ESPARTA PRODUJO DOCENA DE BUENOS JUGADORES |
| SE ESTÁ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS. | SE ESTÁ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS |
| SÍ | SÍ |
| "FUE ""SACADA"" DE LA SERIE EN EL EPISODIO ""LEAD"", EN QUE ALEXANDRA CABOT REGRESÓ." | FUE SACADA DE LA SERIE EN EL EPISODIO LEED EN QUE ALEXANDRA KAOT REGRESÓ |
| SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOKA, EN LA PROVINCIA DE BIOKO SUR. | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOCA EN LA PROVINCIA DE PÍOCOSUR |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset mozilla-foundation/common_voice_6_0 --config es --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021wav2vec2-large-xlsr-53-spanish,
title={XLSR Wav2Vec2 Spanish by Jonatas Grosman},
author={Grosman, Jonatas},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish}},
year={2021}
}
``` |
M-CLIP/Swedish-500k | 0587c780bd4f7ac78b13b767d8fe12de500e8311 | 2021-05-18T21:36:48.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | M-CLIP | null | M-CLIP/Swedish-500k | 18 | null | transformers | 8,734 | <br />
<p align="center">
<h1 align="center">Swe-CLIP 500k</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%20500k">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('Swe-CLIP-500k')
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) model tuned to match the embedding space of the CLIP text encoder that accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 500k sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into Swedish.
All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS translate service](https://aws.amazon.com/translate/).
|
Maaly/body-site | 9a18bcd3b1508a17c86d9ed46bb153f7b42adacd | 2022-05-28T15:32:07.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Maaly | null | Maaly/body-site | 18 | null | transformers | 8,735 | The body-site model is a Named Entity Recognition (NER) model that identifies and annotates the body site of microbiome samples in text.
The model is a fine-tuned BioBERT model and the training dataset is available in https://gitlab.com/maaly7/emerald_metagenomics_annotations
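For a quick start, the tagger can be loaded through the Transformers token-classification pipeline. The snippet below is a minimal sketch: the model ID comes from this card, while the aggregation strategy is an illustrative choice.
```python
from transformers import pipeline

# Minimal sketch: "simple" aggregation merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Maaly/body-site",
    aggregation_strategy="simple",
)

text = "Scalp hair was collected from behind the right ear, near the right retroauricular crease."
print(ner(text))
```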
Testing examples:
1. Scalp hair was collected from behind the right ear, near the right retroauricular crease, and pubic hair was collected from their right pubis, near the right inguinal crease.
2. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
3. TSO modulate the IEC and LPMC transcriptome To gain further insights into the mechanisms of TSO treatment, we performed genome wide expression analysis on intestinal epithelial cells (IEC) and lamina propria mononuclear cells (LPMC) isolated from caecum samples by RNA sequencing (RNAseq).
4. Two catheters were bilaterally placed in the CA1 region of the hippocampus with the coordinates of 4.5 mm anterior to bregma, 1.6 mm ventral to the dura, and two directions of ยฑ 4.0 mm from the interaural line (Park et al. 2013; Yang et al. 2013). |
IlyaGusev/rut5_tox | 03cf6d0fc6f913774af157ecab5518ea628a2674 | 2022-07-13T15:35:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | IlyaGusev | null | IlyaGusev/rut5_tox | 18 | null | transformers | 8,736 | ---
language:
- ru
tags:
- t5
license:
- apache-2.0
inference:
parameters:
num_beams: 5
no_repeat_ngram_size: 4
widget:
- text: "Что это за ерунда?"
---
# RuT5Tox |
MoseliMotsoehli/TswanaBert | 2c47d4212adc73968d90142bd3ae440e5ad472d4 | 2021-05-20T12:13:01.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"tn",
"transformers",
"autotrain_compatible"
] | fill-mask | false | MoseliMotsoehli | null | MoseliMotsoehli/TswanaBert | 18 | null | transformers | 8,737 | ---
language: tn
---
# TswanaBert
Pretrained model on the Tswana language using a masked language modeling (MLM) objective.
## Model Description.
TswanaBERT is a transformer model pre-trained on a corpus of Setswana in a self-supervised fashion by masking part of the input words and training the model to predict the masked words, using byte-level tokens.
## Intended uses & limitations
The model can be used for either masked language modeling or next-word prediction. It can also be fine-tuned on a specific downstream NLP application.
#### How to use
```python
>>> from transformers import pipeline
>>> from transformers import AutoTokenizer, AutoModelWithLMHead
>>> tokenizer = AutoTokenizer.from_pretrained("MoseliMotsoehli/TswanaBert")
>>> model = AutoModelWithLMHead.from_pretrained("MoseliMotsoehli/TswanaBert")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker("Ntshopotse <mask> e godile.")
[{'score': 0.32749542593955994,
'sequence': '<s>Ntshopotse setse e godile.</s>',
'token': 538,
'token_str': 'Ġsetse'},
{'score': 0.060260992497205734,
'sequence': '<s>Ntshopotse le e godile.</s>',
'token': 270,
'token_str': 'Ġle'},
{'score': 0.058460816740989685,
'sequence': '<s>Ntshopotse bone e godile.</s>',
'token': 364,
'token_str': 'Ġbone'},
{'score': 0.05694682151079178,
'sequence': '<s>Ntshopotse ga e godile.</s>',
'token': 298,
'token_str': 'Ġga'},
{'score': 0.0565204992890358,
'sequence': '<s>Ntshopotse, e godile.</s>',
'token': 16,
'token_str': ','}]
```
#### Limitations and bias
The model is trained on a relatively small collection of Setswana, mostly from news articles and creative writing, and so is not yet representative enough of the language.
## Training data
1. The largest portion of this dataset (10k sentences of text) comes from the [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download).
2. I then added SABC news headlines collected by Marivate Vukosi & Sefara Tshephisho (2020), generously made available on [Zenodo](http://doi.org/10.5281/zenodo.3668495). This added 185 Setswana sentences to my corpus.
3. I went on to add 300 more sentences by scraping the following news sites and blogs, which mostly originate in Botswana. I actively continue to expand the dataset.
* http://setswana.blogspot.com/
* https://omniglot.com/writing/tswana.php
* http://www.dailynews.gov.bw/
* http://www.mmegi.bw/index.php
* https://tsena.co.bw
* http://www.botswana.co.za/Cultural_Issues-travel/botswana-country-guide-en-route.html
* https://www.poemhunter.com/poem/2013-setswana/
https://www.poemhunter.com/poem/ngwana-wa-mosetsana/
### BibTeX entry and citation info
```bibtex
@misc{motsoehli2020tswanabert,
  author = {Moseli Motsoehli},
  title = {TswanaBert},
  year = {2020}
}
```
|
MutazYoune/Ara_DialectBERT | 6f89a5650d3b39a100bd83419ee843613b15c680 | 2021-05-18T21:44:01.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"ar",
"dataset:HARD-Arabic-Dataset",
"transformers",
"autotrain_compatible"
] | fill-mask | false | MutazYoune | null | MutazYoune/Ara_DialectBERT | 18 | null | transformers | 8,738 | ---
language: ar
datasets:
- HARD-Arabic-Dataset
---
# Ara-dialect-BERT
We further trained a pretrained model on the [HARD-Arabic-Dataset](https://github.com/elnagara/HARD-Arabic-Dataset); the weights were initialized from the [CAMeL-Lab](https://huggingface.co/CAMeL-Lab/bert-base-camelbert-msa-eighth) "bert-base-camelbert-msa-eighth" model.
### Usage
The model weights can be loaded using `transformers` library by HuggingFace.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("MutazYoune/Ara_DialectBERT")
model = AutoModel.from_pretrained("MutazYoune/Ara_DialectBERT")
```
Example using `pipeline`:
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="MutazYoune/Ara_DialectBERT",
tokenizer="MutazYoune/Ara_DialectBERT"
)
fill_mask("الفندق جميل و لكن [MASK] بعيد")
```
```python
{'sequence': 'الفندق جميل و لكن الموقع بعيد', 'score': 0.28233852982521057, 'token': 3221, 'token_str': 'الموقع'}
{'sequence': 'الفندق جميل و لكن موقعه بعيد', 'score': 0.24436227977275848, 'token': 19218, 'token_str': 'موقعه'}
{'sequence': 'الفندق جميل و لكن المكان بعيد', 'score': 0.15372352302074432, 'token': 5401, 'token_str': 'المكان'}
{'sequence': 'الفندق جميل و لكن الفندق بعيد', 'score': 0.029026474803686142, 'token': 11133, 'token_str': 'الفندق'}
{'sequence': 'الفندق جميل و لكن مكانه بعيد', 'score': 0.024554792791604996, 'token': 10701, 'token_str': 'مكانه'}
```
|
NDugar/m2m100_418M-fr | b85aaaaad9d123e17324d2e84a87252c75cf671c | 2021-12-07T20:09:49.000Z | [
"pytorch",
"m2m_100",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | translation | false | NDugar | null | NDugar/m2m100_418M-fr | 18 | 1 | transformers | 8,739 | ---
license: mit
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: m2m100_418M-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 51.1339693938271
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-fr
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7021
- Bleu: 51.1340
## Model description
More information needed
## Intended uses & limitations
More information needed
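For illustration, the checkpoint can be used like any M2M100 translation model; the sketch below assumes the tokenizer was saved alongside the weights (otherwise the base `facebook/m2m100_418M` tokenizer can be used) and translates English to French.
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Minimal sketch: the model ID comes from this card; the sentence is arbitrary.
tokenizer = M2M100Tokenizer.from_pretrained("NDugar/m2m100_418M-fr")
model = M2M100ForConditionalGeneration.from_pretrained("NDugar/m2m100_418M-fr")

tokenizer.src_lang = "en"
inputs = tokenizer("Open the file manager.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```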
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.749 | 1.0 | 23645 | 0.7021 | 51.1344 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
PurpleJacketGuy/My_Jarvis_2 | 71e3889676c3a7697acf7af8b10af8a6271ebf42 | 2021-11-11T15:50:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | PurpleJacketGuy | null | PurpleJacketGuy/My_Jarvis_2 | 18 | null | transformers | 8,740 | ---
tags:
- conversational
---
# Jarvis DialoGPT Model |
RJ3vans/SSMNspanTagger | b3c2799a071ccf9ad47e200b28312ca39e7b528e | 2021-09-07T13:27:38.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | RJ3vans | null | RJ3vans/SSMNspanTagger | 18 | null | transformers | 8,741 | This model identifies complex NPs modified by non-finite nominal clauses ("appositives") in the input sentence.
Try the test sentence:
My name is Sarah and I live in London[,] the capital of England.
Note that accuracy is greatly improved if you place square brackets around the left boundary of the non-finite nominal clause.
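As a rough usage sketch, the tagger can be called through the Transformers token-classification pipeline; how the predicted tags are post-processed into span boundaries is left to the caller.
```python
from transformers import pipeline

# Minimal sketch: the model ID comes from this card; the sentence is the test example above.
tagger = pipeline("token-classification", model="RJ3vans/SSMNspanTagger")

sentence = "My name is Sarah and I live in London[,] the capital of England."
for token in tagger(sentence):
    print(token["word"], token["entity"])
```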
The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton. |
Radvian/t5_liputan6_finetuned_indonesia_summarization | c03ad9b05f62d1e5fd635db928bebfab97e4c68b | 2021-10-04T04:29:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:Radvian/autonlp-data-indo_summarization",
"transformers",
"autonlp",
"autotrain_compatible"
] | text2text-generation | false | Radvian | null | Radvian/t5_liputan6_finetuned_indonesia_summarization | 18 | null | transformers | 8,742 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- Radvian/autonlp-data-indo_summarization
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 14502562
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP", "parameters":{"max_length":1000}}' https://api-inference.huggingface.co/Radvian/autonlp-indo_summarization-14502562
``` |
RishabhRawatt/DialoGPT-small-Rickmorty | 9765c6355d0903778af326c8d93a5cac6cb5ebfa | 2021-09-05T10:08:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | RishabhRawatt | null | RishabhRawatt/DialoGPT-small-Rickmorty | 18 | null | transformers | 8,743 | ---
tags:
- conversational
---
# Rick Morty DialogGPT Model |
Rolv-Arild/xls-r-300m-npsc-2 | 008c3baf7df947c3bf9e59d98f2cb520ce89710d | 2022-02-01T12:54:36.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Rolv-Arild | null | Rolv-Arild/xls-r-300m-npsc-2 | 18 | null | transformers | 8,744 | Entry not found |
SEBIS/code_trans_t5_large_source_code_summarization_python_multitask | c662d5ce28d50b620de20e9c361631a3ed29e315 | 2021-06-23T09:15:47.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_source_code_summarization_python_multitask | 18 | null | transformers | 8,745 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization Python
Pretrained model on programming language python using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the Python function or be fine-tuned on other Python code tasks. It can be used on unparsed and untokenized Python code. However, if the Python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate Python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/python/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Training
The model was trained on a single TPU Pod V3-8 for 80,000 steps, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. (We have trained in total 260,000 steps.)
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| State of the art | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_code_documentation_generation_php | de4f38660371216b86ed5e007685b7ed8d0115c4 | 2021-06-23T10:07:07.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_php | 18 | null | transformers | 8,746 | ---
tags:
- summarization
widget:
- text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
---
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus php dataset.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/php/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
StevenLimcorn/MelayuBERT | be522cff4a2bf65a839babd232e45414563d1361 | 2021-06-22T06:37:24.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"ms",
"dataset:oscar",
"arxiv:1810.04805",
"transformers",
"melayu-bert",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | StevenLimcorn | null | StevenLimcorn/MelayuBERT | 18 | null | transformers | 8,747 | ---
language: ms
tags:
- melayu-bert
license: mit
datasets:
- oscar
widget:
- text: "Saya [MASK] makan nasi hari ini."
---
## Melayu BERT
Melayu BERT is a masked language model based on [BERT](https://arxiv.org/abs/1810.04805). It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_original_ms` subset. The model used was [English BERT model](https://huggingface.co/bert-base-uncased) and fine-tuned on the Malaysian dataset. The model achieved a perplexity of 9.46 on a 20% validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou). The model is available both for PyTorch and TensorFlow use.
## Model
The model was trained for 3 epochs with a learning rate of 2e-3 and achieved the training loss per step shown below.
| Step |Training loss|
|--------|-------------|
|500 | 5.051300 |
|1000 | 3.701700 |
|1500 | 3.288600 |
|2000 | 3.024000 |
|2500 | 2.833500 |
|3000 | 2.741600 |
|3500 | 2.637900 |
|4000 | 2.547900 |
|4500 | 2.451500 |
|5000 | 2.409600 |
|5500 | 2.388300 |
|6000 | 2.351600 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/MelayuBERT"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Saya [MASK] makan nasi hari ini.")
```
### Import Tokenizer and Model
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("StevenLimcorn/MelayuBERT")
model = AutoModelForMaskedLM.from_pretrained("StevenLimcorn/MelayuBERT")
```
## Author
Melayu BERT was trained by [Steven Limcorn](https://github.com/stevenlimcorn) and [Wilson Wongso](https://hf.co/w11wo). |
Wiirin/BioBERT-finetuned-PubMed-FoodCancer | acd5a4c6530c0baeb24b0a7c63f8005cb1c41bc4 | 2021-11-08T09:37:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Wiirin | null | Wiirin/BioBERT-finetuned-PubMed-FoodCancer | 18 | null | transformers | 8,748 | Entry not found |
aware-ai/xlmroberta-squadv2 | c3dc743e801a1b0133b56b9940c3721d82e0fe7c | 2020-12-11T21:31:05.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"dataset:squad_v2",
"arxiv:1911.02116",
"transformers",
"autotrain_compatible"
] | question-answering | false | aware-ai | null | aware-ai/xlmroberta-squadv2 | 18 | null | transformers | 8,749 | ---
datasets:
- squad_v2
---
# XLM-ROBERTA-LARGE finetuned on SQuADv2
This is the xlm-roberta-large model fine-tuned on the SQuAD v2 dataset for the question answering task.
## Model details
XLM-RoBERTa was proposed in the [paper](https://arxiv.org/pdf/1911.02116.pdf) **XLM-R: State-of-the-art cross-lingual understanding through self-supervision**.
## Model training
This model was trained with following parameters using simpletransformers wrapper:
```
train_args = {
'learning_rate': 1e-5,
'max_seq_length': 512,
'doc_stride': 512,
'overwrite_output_dir': True,
'reprocess_input_data': False,
'train_batch_size': 8,
'num_train_epochs': 2,
'gradient_accumulation_steps': 2,
'no_cache': True,
'use_cached_eval_features': False,
'save_model_every_epoch': False,
'output_dir': "bart-squadv2",
'eval_batch_size': 32,
'fp16_opt_level': 'O2',
}
```
## Results
```{"correct": 6961, "similar": 4359, "incorrect": 553, "eval_loss": -12.177856394381962}```
## Model in Action 🚀
```python3
from transformers import XLMRobertaTokenizer, XLMRobertaForQuestionAnswering
import torch
tokenizer = XLMRobertaTokenizer.from_pretrained('a-ware/xlmroberta-squadv2')
model = XLMRobertaForQuestionAnswering.from_pretrained('a-ware/xlmroberta-squadv2')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors='pt')
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']
start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2]
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
answer = tokenizer.convert_tokens_to_ids(answer.split())
answer = tokenizer.decode(answer)
#answer => 'a nice puppet'
```
> Created with ❤️ by A-ware UG [](https://github.com/aware-ai)
|
aXhyra/presentation_hate_42 | 3cdddfa358fcf3fbdfbb567bca944a89072290ef | 2021-12-15T11:18:17.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | aXhyra | null | aXhyra/presentation_hate_42 | 18 | null | transformers | 8,750 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: presentation_hate_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: hate
metrics:
- name: F1
type: f1
value: 0.7692074096568478
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_hate_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8711
- F1: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
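As a minimal usage sketch, the checkpoint can be queried through the text-classification pipeline; note that the returned label names depend on the fine-tuned config and may be generic (e.g. `LABEL_0`/`LABEL_1`).
```python
from transformers import pipeline

# Minimal sketch: the model ID comes from this card; the input sentence is arbitrary.
classifier = pipeline("text-classification", model="aXhyra/presentation_hate_42")
print(classifier("I really enjoyed this conversation, thank you!"))
```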
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.436235805743952e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5207 | 1.0 | 282 | 0.4815 | 0.7513 |
| 0.3047 | 2.0 | 564 | 0.5557 | 0.7510 |
| 0.2335 | 3.0 | 846 | 0.6627 | 0.7585 |
| 0.0056 | 4.0 | 1128 | 0.8711 | 0.7692 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
abdouaziiz/bert-base-wolof | 931394df80af979a0eed0067ec34a122395b5fbe | 2021-11-25T16:35:19.000Z | [
"pytorch",
"bert",
"fill-mask",
"wo",
"transformers",
"language-model",
"wolof",
"autotrain_compatible"
] | fill-mask | false | abdouaziiz | null | abdouaziiz/bert-base-wolof | 18 | null | transformers | 8,751 | ---
language: wo
tags:
- bert
- language-model
- wo
- wolof
---
# Soraberta: Unsupervised Language Model Pre-training for Wolof
**bert-base-wolof** is a pretrained BERT-base model for the Wolof language.
## Soraberta models
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `bert-base` | 6 | 12 | 514 | 56,931,622 |
## Using Soraberta with Hugging Face's Transformers
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='abdouaziiz/bert-base-wolof')
>>> unmasker("kuy yoot du [MASK].")
[{'sequence': '[CLS] kuy yoot du seqet. [SEP]',
'score': 0.09505125880241394,
'token': 13578},
{'sequence': '[CLS] kuy yoot du daw. [SEP]',
'score': 0.08882280439138412,
'token': 679},
{'sequence': '[CLS] kuy yoot du yoot. [SEP]',
'score': 0.057790059596300125,
'token': 5117},
{'sequence': '[CLS] kuy yoot du seqat. [SEP]',
'score': 0.05671025067567825,
'token': 4992},
{'sequence': '[CLS] kuy yoot du yaqu. [SEP]',
'score': 0.0469999685883522,
'token': 1735}]
```
## Training data
The data sources are [Bible OT](http://biblewolof.com/) , [WOLOF-ONLINE](http://www.wolof-online.com/)
[ALFFA_PUBLIC](https://github.com/getalp/ALFFA_PUBLIC/tree/master/ASR/WOLOF)
## Contact
Please contact [email protected] for any question, feedback or request. |
abryee/TigXLNet | a98a20399af0bea6af37f271a265862fe11c0064 | 2021-09-21T08:06:12.000Z | [
"pytorch",
"xlnet",
"arxiv:2006.07698",
"transformers"
] | null | false | abryee | null | abryee/TigXLNet | 18 | null | transformers | 8,752 | # Transferring Monolingual Model to Low-Resource Language: The Case Of Tigrinya:
## Proposed Method:
<img src="data/proposed.png" height = "330" width ="760" >
The proposed method transfers a monolingual Transformer model into a new target language at the lexical level by learning new token embeddings. All implementations in this repo use XLNet as the source Transformer model; however, other Transformer models can also be used similarly.
## Main files:
All files are IPython Notebook files, which can be executed directly in Google Colab.
- train.ipynb : Fine-tunes XLNet (mono-lingual transformer) on new target language (Tigrinya) sentiment analysis dataset. [](https://colab.research.google.com/drive/1bSSrKE-TSphUyrNB2UWhFI-Bkoz0a5l0?usp=sharing)
- test.ipynb : Evaluates the fine-tuned model on test data. [](https://colab.research.google.com/drive/17R1lvRjxILVNk971vzZT79o2OodwaNIX?usp=sharing)
- token_embeddings.ipynb : Trains a word2vec token embeddings for Tigrinya language. [](https://colab.research.google.com/drive/1hCtetAllAjBw28EVQkJFpiKdFtXmuxV7?usp=sharing)
- process_Tigrinya_comments.ipynb : Extracts Tigrinya comments from mixed language contents. [](https://colab.research.google.com/drive/1-ndLlBV-iLZNBW3Z8OfKAqUUCjvGbdZU?usp=sharing)
- extract_YouTube_comments.ipynb : Downloads available comments from a YouTube channel ID. [](https://colab.research.google.com/drive/1b7G85wHKe18y45JIDtvDJdO5dOkRmDdp?usp=sharing)
- auto_labelling.ipynb : Automatically labels Tigrinya comments in to positive or negative sentiments based on [Emoji's sentiment](http://kt.ijs.si/data/Emoji_sentiment_ranking/). [](https://colab.research.google.com/drive/1wnZf7CBBCIr966vRUITlxKCrANsMPpV7?usp=sharing)
## Tigrinya Tokenizer:
A [sentencepiece](https://github.com/google/sentencepiece) based tokenizer for Tigrinya has been released to the public and can be accessed as in the following:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("abryee/TigXLNet")
tokenizer.tokenize("แแแแ แฅแ แแแ แซแฅแฐแ แแตแแแ แแแฒ แขแซ แ แฅแฃแแ แขแ แแแตแแ แแแฒ แญแฅแ แฐแแจ แแแน แแฐแซแฃแนแ แฃแฅ แแแนแ แฐแจแญแก")
## TigXLNet:
A new general-purpose transformer model for the low-resource language Tigrinya is also released to the public and can be accessed as follows:
from transformers import AutoConfig, AutoModel
config = AutoConfig.from_pretrained("abryee/TigXLNet")
config.d_head = 64
model = AutoModel.from_pretrained("abryee/TigXLNet", config=config)
## Evaluation:
The proposed method is evaluated using two datasets:
- A newly created sentiment analysis dataset for a low-resource language (Tigrinya).
<table>
<tr>
<td> <table>
<thead>
<tr>
<th><sub>Models</sub></th>
<th><sub>Configuration</sub></th>
<th><sub>F1-Score</sub></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan=3><sub>BERT</sub></td>
<td rowspan=1><sub>+Frozen BERT weights</sub></td>
<td><sub>54.91</sub></td>
</tr>
<tr>
<td rowspan=1><sub>+Random embeddings</sub></td>
<td><sub>74.26</sub></td>
</tr>
<tr>
<td rowspan=1><sub>+Frozen token embeddings</sub></td>
<td><sub>76.35</sub></td>
</tr>
<tr>
<td rowspan=3><sub>mBERT</sub></td>
<td rowspan=1><sub>+Frozen mBERT weights</sub></td>
<td><sub>57.32</sub></td>
</tr>
<tr>
<td rowspan=1><sub>+Random embeddings</sub></td>
<td><sub>76.01</sub></td>
</tr>
<tr>
<td rowspan=1><sub>+Frozen token embeddings</sub></td>
<td><sub>77.51</sub></td>
</tr>
<tr>
<td rowspan=3><sub>XLNet</sub></td>
<td rowspan=1><sub>+Frozen XLNet weights</sub></td>
<td><strong><sub>68.14</sub></strong></td>
</tr>
<tr>
<td rowspan=1><sub>+Random embeddings</sub></td>
<td><strong><sub>77.83</sub></strong></td>
</tr>
<tr>
<td rowspan=1><sub>+Frozen token embeddings</sub></td>
<td><strong><sub>81.62</sub></strong></td>
</tr>
</tbody>
</table> </td>
<td><img src="data/effect_of_dataset_size.png" alt="3" width = 480px height = 280px></td>
</tr>
</table>
- Cross-lingual Sentiment dataset ([CLS](https://zenodo.org/record/3251672#.Xs65VzozbIU)).
<table>
<thead>
<tr>
<th rowspan=2><sub>Models</sub></th>
<th rowspan=1 colspan=3><sub>English</sub></th>
<th rowspan=1 colspan=3><sub>German</sub></th>
<th rowspan=1 colspan=3><sub>French</sub></th>
<th rowspan=1 colspan=3><sub>Japanese</sub></th>
<th rowspan=2><sub>Average</sub></th>
</tr>
<tr>
<th colspan=1><sub>Books</sub></th>
<th colspan=1><sub>DVD</sub></th>
<th colspan=1><sub>Music</sub></th>
<th colspan=1><sub>Books</sub></th>
<th colspan=1><sub>DVD</sub></th>
<th colspan=1><sub>Music</sub></th>
<th colspan=1><sub>Books</sub></th>
<th colspan=1><sub>DVD</sub></th>
<th colspan=1><sub>Music</sub></th>
<th colspan=1><sub>Books</sub></th>
<th colspan=1><sub>DVD</sub></th>
<th colspan=1><sub>Music</sub></th>
</tr>
</thead>
<tbody>
<tr>
<td colspan=1><sub>XLNet</sub></td>
<td colspan=1><sub><strong>92.90</strong></sub></td>
<td colspan=1><sub><strong>93.31</strong></sub></td>
<td colspan=1><sub><strong>92.02</strong></sub></td>
<td colspan=1><sub>85.23</sub></td>
<td colspan=1><sub>83.30</sub></td>
<td colspan=1><sub>83.89</sub></td>
<td colspan=1><sub>73.05</sub></td>
<td colspan=1><sub>69.80</sub></td>
<td colspan=1><sub>70.12</sub></td>
<td colspan=1><sub>83.20</sub></td>
<td colspan=1><sub><strong>86.07</strong></sub></td>
<td colspan=1><sub>85.24</sub></td>
<td colspan=1><sub>83.08</sub></td>
</tr>
<tr>
<td colspan=1><sub>mBERT</sub></td>
<td colspan=1><sub>92.78</sub></td>
<td colspan=1><sub>90.30</sub></td>
<td colspan=1><sub>91.88</sub></td>
<td colspan=1><sub><strong>88.65</strong></sub></td>
<td colspan=1><sub><strong>85.85</strong></sub></td>
<td colspan=1><sub><strong>90.38</strong></sub></td>
<td colspan=1><sub><strong>91.09</strong></sub></td>
<td colspan=1><sub><strong>88.57</strong></sub></td>
<td colspan=1><sub><strong>93.67</strong></sub></td>
<td colspan=1><sub><strong>84.35</strong></sub></td>
<td colspan=1><sub>81.77</sub></td>
<td colspan=1><sub><strong>87.53</strong></sub></td>
<td colspan=1><sub><strong>88.90</strong></sub></td>
</tr>
</tbody>
</table>
## Dataset used for this paper:
We have constructed a new sentiment analysis dataset for the Tigrinya language; it can be found in the zip file (Tigrinya Sentiment Analysis Dataset).
## Citing our paper:
Our paper can be accessed from ArXiv [link](https://arxiv.org/pdf/2006.07698.pdf), and please consider citing our work.
@misc{tela2020transferring,
title={Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya},
author={Abrhalei Tela and Abraham Woubie and Ville Hautamaki},
year={2020},
eprint={2006.07698},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
## Any questions, comments, or feedback are appreciated and can be forwarded to the following email: [email protected]
|
addy88/gpt-j-8bit | 33582ecfc865cec71439aba1a5f89363a8094e37 | 2022-01-02T06:34:27.000Z | [
"pytorch",
"gptj",
"text-generation",
"arxiv:2106.09685",
"arxiv:2110.02861",
"transformers"
] | text-generation | false | addy88 | null | addy88/gpt-j-8bit | 18 | 1 | transformers | 8,753 | This Model is 8bit Version of EleutherAI/gpt-j-6B. It is converted by Facebook's bitsandbytes library. The original GPT-J takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. So for finetuning on single GPU This model is converted into 8bit.
Here's how to run it: [](https://colab.research.google.com/drive/1KNf5siQdM7ILQM-pHsP6gNVPKl1SJdU1)
__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can inference it [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.
Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- using gradient checkpointing to store only one activation per layer: dramatically less memory at the cost of ~30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)
In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://colab.research.google.com/drive/1FxGeYQyE7cx9VNCBC4gUyRVZGORW7c6g) and the result is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but that is not statistically significant.
Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.
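To make the storage-versus-compute split concrete, here is a small illustrative sketch using plain per-row absmax quantization (not the block-wise non-linear scheme bitsandbytes actually uses): the weight is kept as int8 plus a scale and de-quantized to float16 just before the matrix multiplication.
```python
import torch
import torch.nn as nn

class FrozenInt8Linear(nn.Module):
    """Toy example: int8 storage with a per-row scale, float16 compute."""

    def __init__(self, weight: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        scale = weight.abs().max(dim=1, keepdim=True).values.clamp(min=1e-8) / 127.0
        self.register_buffer("w_int8", torch.round(weight / scale).to(torch.int8))
        self.register_buffer("scale", scale.to(torch.float16))
        self.bias = None if bias is None else nn.Parameter(bias.to(torch.float16))

    def forward(self, x):
        # De-quantize just-in-time, then run the matmul in float16.
        w = self.w_int8.to(torch.float16) * self.scale
        out = x.to(torch.float16) @ w.t()
        return out if self.bias is None else out + self.bias
```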
__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
### How should I fine-tune the model?
We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger the batch size you can fit, the more efficiently you will train.
### Can I use this technique with other models?
The model was converted using [this notebook](https://colab.research.google.com/drive/1rwxh0XRdVi8VEbTx97l9xXr4JbRhZaq5#scrollTo=CX3VHn-J1Zer). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
|
airKlizz/xlm-roberta-base-germeval21-toxic-with-task-specific-pretraining | 865b80840ac3c350f0a134ab223b1504d39a20ba | 2021-07-12T14:51:51.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | airKlizz | null | airKlizz/xlm-roberta-base-germeval21-toxic-with-task-specific-pretraining | 18 | null | transformers | 8,754 | Entry not found |
airesearch/bert-base-multilingual-cased-finetune-qa | e77e3864b649bacb6cca5a550fde234b5ed2f722 | 2021-07-14T05:50:52.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | airesearch | null | airesearch/bert-base-multilingual-cased-finetune-qa | 18 | null | transformers | 8,755 | ---
widget:
- text: "เธชเธงเธเธเธธเธซเธฅเธฒเธเนเธเนเธเนเธฃเธเนเธฃเธตเธขเธเธญเธฐเนเธฃ"
context: "เนเธฃเธเนเธฃเธตเธขเธเธชเธงเธเธเธธเธซเธฅเธฒเธเธงเธดเธเธขเธฒเธฅเธฑเธข (Suankularb Wittayalai School) (เธญเธฑเธเธฉเธฃเธขเนเธญ : เธช.เธ. / S.K.) เนเธเนเธเนเธฃเธเนเธฃเธตเธขเธเธเธฒเธขเธฅเนเธงเธ เธฃเธฐเธเธฑเธเธเธฑเนเธเธกเธฑเธเธขเธกเธจเธถเธเธฉเธฒเธเธเธฒเธเนเธซเธเนเธเธดเนเธจเธฉ เธชเธฑเธเธเธฑเธเธชเธณเธเธฑเธเธเธฒเธเนเธเธเธเธทเนเธเธเธตเนเธเธฒเธฃเธจเธถเธเธฉเธฒเธกเธฑเธเธขเธกเธจเธถเธเธฉเธฒเนเธเธ 1 เธชเธณเธเธฑเธเธเธฒเธเธเธเธฐเธเธฃเธฃเธกเธเธฒเธฃเธเธฒเธฃเธจเธถเธเธฉเธฒเธเธฑเนเธเธเธทเนเธเธเธฒเธ (เธเธทเนเธญเนเธเธดเธก: เธเธฃเธกเธชเธฒเธกเธฑเธเธจเธถเธเธฉเธฒ) เธเธฃเธฐเธเธฃเธงเธเธจเธถเธเธฉเธฒเธเธดเธเธฒเธฃ เธเนเธญเธเธฑเนเธเนเธเธข เธเธฃเธฐเธเธฒเธเธชเธกเนเธเนเธเธเธฃเธฐเธเธธเธฅเธเธญเธกเนเธเธฅเนเธฒเนเธเนเธฒเธญเธขเธนเนเธซเธฑเธง เนเธเนเธฃเธฑเธเธเธฒเธฃเธชเธเธฒเธเธเธฒเธเธถเนเธเนเธเธงเธฑเธเธเธตเน 8 เธกเธตเธเธฒเธเธก เธ.เธจ. 2424 (เธเธเธฐเธเธฑเนเธเธเธฑเธเธงเธฑเธเธเธตเน 1 เนเธกเธฉเธฒเธขเธ เนเธเนเธเธงเธฑเธเธเธถเนเธเธเธตเนเธซเธกเน เนเธกเธทเนเธญเธเธฑเธเธญเธขเนเธฒเธเธชเธฒเธเธฅเธเธทเธญเนเธเนเธ เธ.เธจ. 2425) เนเธเธขเนเธเนเธเนเธฃเธเนเธฃเธตเธขเธเธฃเธฑเธเธเธฒเธฅเนเธซเนเธเนเธฃเธเธเธญเธเธเธฃเธฐเนเธเธจเนเธเธข"
---
# bert-base-multilingual-cased
This model fine-tunes `bert-base-multilingual-cased` on the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (examples with cosine similarity above 0.8 to validation and test examples were removed; contexts of the latter two are trimmed to around 300 `newmm` words). Benchmarks are shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using the validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
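For inference, the standard question-answering pipeline applies; the model targets Thai question answering, so the toy English pair below only illustrates the call signature.
```python
from transformers import pipeline

# Minimal sketch: the model ID comes from this card; answers are spans extracted from the context.
qa = pipeline("question-answering", model="airesearch/bert-base-multilingual-cased-finetune-qa")

result = qa(
    question="Who wrote the report?",
    context="The report was written by the research team in 2021.",
)
print(result["answer"], result["score"])
```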
Run with:
```
export MODEL_NAME=bert-base-multilingual-cased
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--pad_on_right \
--fp16
``` |
andi611/bert-large-uncased-ner-conll2003 | ab80bee2b4bb1356bb17e0ae71560c413c5a6622 | 2021-07-04T14:38:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | andi611 | null | andi611/bert-large-uncased-ner-conll2003 | 18 | null | transformers | 8,756 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bert-large-uncased-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9877039414110284
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-ner
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Precision: 0.9465
- Recall: 0.9568
- F1: 0.9517
- Accuracy: 0.9877
## Model description
More information needed
## Intended uses & limitations
More information needed
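As a minimal usage sketch, the checkpoint can be used through the NER pipeline; the CoNLL-2003 label set (PER, ORG, LOC, MISC) is assumed, and the aggregation strategy is an illustrative choice.
```python
from transformers import pipeline

# Minimal sketch: the model ID comes from this card; "simple" aggregation groups word pieces.
ner = pipeline(
    "ner",
    model="andi611/bert-large-uncased-ner-conll2003",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```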
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1702 | 1.0 | 878 | 0.0578 | 0.9202 | 0.9347 | 0.9274 | 0.9836 |
| 0.0392 | 2.0 | 1756 | 0.0601 | 0.9306 | 0.9448 | 0.9377 | 0.9851 |
| 0.0157 | 3.0 | 2634 | 0.0517 | 0.9405 | 0.9544 | 0.9474 | 0.9875 |
| 0.0057 | 4.0 | 3512 | 0.0591 | 0.9465 | 0.9568 | 0.9517 | 0.9877 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
arnolfokam/bert-base-uncased-swa | 5a712c5657361aa4742d4fdbd7091f99209222bc | 2021-11-24T11:55:34.000Z | [
"pytorch",
"bert",
"token-classification",
"swa",
"dataset:masakhaner",
"transformers",
"NER",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | arnolfokam | null | arnolfokam/bert-base-uncased-swa | 18 | null | transformers | 8,757 | ---
language:
- swa
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
---
# Model description
**bert-base-uncased-swa** is a model based on the fine-tuned BERT base uncased model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-swa**| 83.38 | 89.32 | 86.26
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-swa")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
``` |
auday/paraphraser_model2 | 01c537cb8eea999f2396aac169e325089c1e0713 | 2021-06-23T11:30:45.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | auday | null | auday/paraphraser_model2 | 18 | null | transformers | 8,758 | This folder contains a Google T5 Transformer fine-tuned to generate paraphrases using:
- Quora_pair_train 134337 lines of pair sentences 14 Mbytes
- Quora_pair_val 14928 lines of pair sentences 1.6 Mbytes
training epoch: 6
Start Time: Sun Mar 14 18:27:15 2021
End Time: Sun Mar 14 22:19:00 2021
|
baykenney/bert-large-gpt2detector-topp96 | bc453feb4a5db8afd23d942f8b921a9b2330d080 | 2021-05-19T12:26:23.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | baykenney | null | baykenney/bert-large-gpt2detector-topp96 | 18 | null | transformers | 8,759 | Entry not found |
beomi/beep-klue-roberta-base-hate | 8899a71760fbb528d861e342456bfb8ce77866df | 2021-10-23T06:00:53.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | beomi | null | beomi/beep-klue-roberta-base-hate | 18 | null | transformers | 8,760 | Entry not found |
boronbrown48/topic_otherTopics_v2 | 4ada66be13a006ed98ade81c44f571fcf5033cdb | 2021-11-25T05:21:06.000Z | [
"pytorch",
"camembert",
"text-classification",
"transformers"
] | text-classification | false | boronbrown48 | null | boronbrown48/topic_otherTopics_v2 | 18 | null | transformers | 8,761 | Entry not found |
cahya/roberta-base-indonesian-1.5G | b2fd096430f23671629a5f7fb8bf357aac29c6b3 | 2021-05-20T14:39:51.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | cahya | null | cahya/roberta-base-indonesian-1.5G | 18 | 1 | transformers | 8,762 | Entry not found |
camilodefelipe/t5_squad_v1 | e5d6d8f90afe97ccadfb575e7b1f14757302aaeb | 2021-11-12T06:28:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | camilodefelipe | null | camilodefelipe/t5_squad_v1 | 18 | null | transformers | 8,763 | Entry not found |
chinhon/pegasus-newsroom-commentaries_hdwriter | 8a0e533f4cb07eb249fe85a72d60494c326bc2ea | 2022-01-14T12:57:41.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chinhon | null | chinhon/pegasus-newsroom-commentaries_hdwriter | 18 | 1 | transformers | 8,764 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-newsroom-commentaries_hdwriter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-commentaries_hdwriter
This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5316
- Rouge1: 21.4079
- Rouge2: 6.2399
- Rougel: 16.6644
- Rougelsum: 17.8501
- Gen Len: 34.4111
## Model description
More information needed
## Intended uses & limitations
More information needed
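As an illustration only (usage is not documented in this card), the checkpoint can be loaded with the standard summarization pipeline; the generation settings below are assumptions, not values from the training run:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="chinhon/pegasus-newsroom-commentaries_hdwriter")

commentary = "Paste the commentary text to be condensed into a headline here."  # placeholder input
print(summarizer(commentary, max_length=40, min_length=5, do_sample=False)[0]["summary_text"])
```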
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6327 | 1.0 | 4710 | 2.5474 | 20.9392 | 6.1702 | 16.3859 | 17.5963 | 35.6626 |
| 2.4322 | 2.0 | 9420 | 2.5198 | 21.4026 | 6.1811 | 16.5874 | 17.8207 | 34.5976 |
| 2.2703 | 3.0 | 14130 | 2.5316 | 21.4079 | 6.2399 | 16.6644 | 17.8501 | 34.4111 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
climatebert/distilroberta-base-climate-d-s | b133d6c58cf9c60ee3b0abda664cace43713384b | 2021-10-26T08:22:50.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"arxiv:2110.12010",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | climatebert | null | climatebert/distilroberta-base-climate-d-s | 18 | 3 | transformers | 8,765 | ---
language: en
license: apache-2.0
---
Using the [DistilRoBERTa](https://huggingface.co/distilroberta-base) model as starting point, the ClimateBERT Language Model is additionally pretrained on a text corpus comprising climate-related research paper abstracts, corporate and general news and reports from companies. The underlying methodology can be found in our [language model research paper](https://arxiv.org/abs/2110.12010).
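As a quick, informal check of the domain adaptation (the example sentence is not from the paper), the checkpoint can be queried with the fill-mask pipeline:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="climatebert/distilroberta-base-climate-d-s")

# DistilRoBERTa uses the <mask> token; the sentence is an illustrative placeholder.
print(fill_mask("Climate change is driven by <mask> gas emissions."))
```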
### BibTeX entry and citation info
```bibtex
@article{wkbl2021,
title={ClimateBERT: A Pretrained Language Model for Climate-Related Text},
author={Webersinke, Nicolas and Kraus, Mathias and Bingler, Julia and Leippold, Markus},
journal={arXiv preprint arXiv:2110.12010},
year={2021}
}
``` |
coppercitylabs/uzbek-news-category-classifier | e4de92fb1360a2794c719805e3da1da6876edc09 | 2021-09-22T08:17:53.000Z | [
"pytorch",
"bert",
"text-classification",
"uz",
"dataset:webcrawl",
"transformers",
"uzbek",
"cyrillic",
"news category classifier",
"license:mit"
] | text-classification | false | coppercitylabs | null | coppercitylabs/uzbek-news-category-classifier | 18 | 1 | transformers | 8,766 | ---
language: uz
tags:
- uzbek
- cyrillic
- news category classifier
license: mit
datasets:
- webcrawl
---
# Uzbek news category classifier (based on UzBERT)
UzBERT fine-tuned to classify news articles into one of the following
categories:
- ะดัะฝั
- ะถะฐะผะธัั
- ะถะธะฝะพัั
- ะธาัะธัะพะดะธัั
- ะผะฐะดะฐะฝะธัั
- ัะตะบะปะฐะผะฐ
- ัะฐะปะพะผะฐัะปะธะบ
- ัะธััะฐั
- ัะฟะพัั
- ัะฐะฝ ะฒะฐ ัะตัะฝะธะบะฐ
- ัะพั-ะฑะธะทะฝะตั
## How to use
```python
>>> from transformers import pipeline
>>> classifier = pipeline('text-classification', model='coppercitylabs/uzbek-news-category-classifier')
>>> text = """ะะฐาณะพัะฐัะปะธ ะฟะฐัะฐ-ะตะฝะณะธะป ะฐัะปะตัะธะบะฐัะธะผะธะท าฒััะฝะธะดะดะธะฝ ะะพัะฑะตะบะพะฒ ะขะพะบะธะพ-2020 ะะฐัะฐะปะธะผะฟะธั ัะนะธะฝะปะฐัะธะดะฐ าะฐะปะฐะฑะฐ าะพะทะพะฝะธะฑ, ะดะตะปะตะณะฐัะธัะผะธะท าณะธัะพะฑะธะณะฐ ะฝะฐะฒะฑะฐัะดะฐะณะธ ะพะปัะธะฝ ะผะตะดะฐะปะฝะธ ะบะตะปัะธัะดะธ. ะั าณะฐาะดะฐ ะะา ั
ะฐะฑะฐั ะฑะตัะดะธ.
ะะพัะฑะตะบะพะฒ าณะพะทะธัะณะธะฝะฐ ัะดัะพ ัะปะพาัะธัะธั ะดะฐััััะธะดะฐ ัะท าะฐะปะฐะฑะฐัะธะฝะธ ัะฐะฝัะฐะฝะฐ าะธะปะดะธ. ะฃัะฑั ะผะฐัาะดะฐ ะฒะฐะบะธะปะธะผะธะท 16:13 ะผะตัั ะฝะฐัะธะถะฐ ะฑะธะปะฐะฝ ัะฝะณ ัั
ัะธ ะบัััะฐัะบะธัะฝะธ าะฐะนะด ััะดะธ.
ะจั ัะฐัะธาะฐ, ะดะตะปะตะณะฐัะธัะผะธะท าณะธัะพะฑะธะดะฐะณะธ ะผะตะดะฐะปะปะฐั ัะพะฝะธ 16 (6 ัะฐ ะพะปัะธะฝ, 4 ัะฐ ะบัะผัั ะฒะฐ 6 ัะฐ ะฑัะพะฝะทะฐ) ัะฐะณะฐ ะตัะดะธ. ะะตะนะธะฝะณะธ ะบัะฝ ะดะฐััััะปะฐัะธะดะฐ ะธััะธัะพะบ ััะฐะดะธะณะฐะฝ าณะฐะผัััะปะฐัะธะผะธะทะณะฐ ะพะผะฐะด ัะธะปะฐะฑ าะพะปะฐะผะธะท!๏ปฟ"""
>>> classifier(text)
[{'label': 'ัะฟะพัั', 'score': 0.9865401983261108}]
```
## Fine-tuning data
Fine-tuned on ~60K news articles for 3 epochs.
|
cstorm125/wangchanberta-base-att-spm-uncased-finetune-qa | 2cd542e8d17dc3c60392eed3e86f9bc6bcb6b49e | 2021-07-14T07:24:50.000Z | [
"pytorch",
"camembert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | cstorm125 | null | cstorm125/wangchanberta-base-att-spm-uncased-finetune-qa | 18 | null | transformers | 8,767 | ---
widget:
- text: "เธชเธงเธเธเธธเธซเธฅเธฒเธเนเธเนเธเนเธฃเธเนเธฃเธตเธขเธเธญเธฐเนเธฃ"
context: "เนเธฃเธเนเธฃเธตเธขเธเธชเธงเธเธเธธเธซเธฅเธฒเธเธงเธดเธเธขเธฒเธฅเธฑเธข (Suankularb Wittayalai School) (เธญเธฑเธเธฉเธฃเธขเนเธญ : เธช.เธ. / S.K.) เนเธเนเธเนเธฃเธเนเธฃเธตเธขเธเธเธฒเธขเธฅเนเธงเธ เธฃเธฐเธเธฑเธเธเธฑเนเธเธกเธฑเธเธขเธกเธจเธถเธเธฉเธฒเธเธเธฒเธเนเธซเธเนเธเธดเนเธจเธฉ เธชเธฑเธเธเธฑเธเธชเธณเธเธฑเธเธเธฒเธเนเธเธเธเธทเนเธเธเธตเนเธเธฒเธฃเธจเธถเธเธฉเธฒเธกเธฑเธเธขเธกเธจเธถเธเธฉเธฒเนเธเธ 1 เธชเธณเธเธฑเธเธเธฒเธเธเธเธฐเธเธฃเธฃเธกเธเธฒเธฃเธเธฒเธฃเธจเธถเธเธฉเธฒเธเธฑเนเธเธเธทเนเธเธเธฒเธ (เธเธทเนเธญเนเธเธดเธก: เธเธฃเธกเธชเธฒเธกเธฑเธเธจเธถเธเธฉเธฒ) เธเธฃเธฐเธเธฃเธงเธเธจเธถเธเธฉเธฒเธเธดเธเธฒเธฃ เธเนเธญเธเธฑเนเธเนเธเธข เธเธฃเธฐเธเธฒเธเธชเธกเนเธเนเธเธเธฃเธฐเธเธธเธฅเธเธญเธกเนเธเธฅเนเธฒเนเธเนเธฒเธญเธขเธนเนเธซเธฑเธง เนเธเนเธฃเธฑเธเธเธฒเธฃเธชเธเธฒเธเธเธฒเธเธถเนเธเนเธเธงเธฑเธเธเธตเน 8 เธกเธตเธเธฒเธเธก เธ.เธจ. 2424 (เธเธเธฐเธเธฑเนเธเธเธฑเธเธงเธฑเธเธเธตเน 1 เนเธกเธฉเธฒเธขเธ เนเธเนเธเธงเธฑเธเธเธถเนเธเธเธตเนเธซเธกเน เนเธกเธทเนเธญเธเธฑเธเธญเธขเนเธฒเธเธชเธฒเธเธฅเธเธทเธญเนเธเนเธ เธ.เธจ. 2425) เนเธเธขเนเธเนเธเนเธฃเธเนเธฃเธตเธขเธเธฃเธฑเธเธเธฒเธฅเนเธซเนเธเนเธฃเธเธเธญเธเธเธฃเธฐเนเธเธจเนเธเธข"
---
# airesearch/wangchanberta-base-att-spm-uncased
Finetuning `airesearch/wangchanberta-base-att-spm-uncased` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
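For inference, the fine-tuned checkpoint can be used with the question-answering pipeline; a minimal sketch (the placeholder strings should be replaced with a Thai question/context pair such as the widget example above):
```python
from transformers import pipeline

model_name = "cstorm125/wangchanberta-base-att-spm-uncased-finetune-qa"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

# Replace the placeholders with a Thai question and context.
result = qa(question="YOUR_THAI_QUESTION", context="YOUR_THAI_CONTEXT")
print(result["answer"], result["score"])
```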
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearch/wangchanberta-base-att-spm-uncased
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--lowercase \
--pad_on_right \
--fp16
``` |
danicodes/autonlp-legal-text-summary-457311749 | fdfd2fbdf6c5528ac559ee452f468fe21e0faeab | 2021-12-29T22:18:48.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:danicodes/autonlp-data-legal-text-summary",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | danicodes | null | danicodes/autonlp-legal-text-summary-457311749 | 18 | null | transformers | 8,768 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- danicodes/autonlp-data-legal-text-summary
co2_eq_emissions: 10.148805588432941
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 457311749
- CO2 Emissions (in grams): 10.148805588432941
## Validation Metrics
- Loss: 1.647747278213501
- Rouge1: 32.4854
- Rouge2: 19.8974
- RougeL: 30.0602
- RougeLsum: 29.9377
- Gen Len: 46.6556
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/danicodes/autonlp-legal-text-summary-457311749
``` |
deepampatel/roberta-mlm-marathi | 88bfb8d8c71aeba202aa1dfc150bb7659013c58b | 2021-05-20T15:58:32.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"mr",
"transformers",
"autotrain_compatible"
] | fill-mask | false | deepampatel | null | deepampatel/roberta-mlm-marathi | 18 | null | transformers | 8,769 | ---
language: "mr"
---
# Welcome to Roberta-Marathi-MLM
## Model Description
> This is a small language model for [Marathi](https://en.wikipedia.org/wiki/Marathi) language with 1M data samples taken from
[OSCAR page](https://oscar-public.huma-num.fr/shuffled/mr_dedup.txt.gz)
## Training params
- **Dataset** - 1M samples from the [OSCAR corpus](https://oscar-corpus.com/) were used to train this model. Even though the full dataset is 2.7 GB, only 1M samples were picked because of resource constraints. If you are interested in collaboration and have the computational resources to train on the full data, you are most welcome to do so.
- **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at the byte level, and the vocabulary size is set to 52k as per the standard values used by Hugging Face.
<!-- - **Hyperparameters** - __ByteLevelBPETokenizer__ : vocabulary size = 52_000 and min_frequency = 2
__Trainer__ : num_train_epochs=12 - trained for 12 epochs
per_gpu_train_batch_size=64 - batch size for the datasamples is 64
save_steps=10_000 - save model for every 10k steps
save_total_limit=2 - save limit is set for 2 -->
**Intended uses & limitations**
This model is intended for anyone who wants to make use of Marathi language models for various tasks such as language generation, translation, and many other use cases.
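A minimal fill-mask sketch (the example sentence is illustrative and not supplied by the author):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="deepampatel/roberta-mlm-marathi")

# RoBERTa-style checkpoints expect the <mask> token; use any Marathi sentence with one <mask>.
print(fill_mask("मी <mask> खातो."))
```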
**Whatever else is helpful!**
If you are interested in collaboration, feel free to reach out to me: [Deepam](mailto:[email protected])
|
devansvd/bert-model-test-2 | 0447f102321464ad3e2ac84e573d50ae4d5ca7f4 | 2021-05-19T15:39:56.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | devansvd | null | devansvd/bert-model-test-2 | 18 | null | transformers | 8,770 | Entry not found |
ehdwns1516/gpt2_review_star5 | ffff22224ae9f03ba7964c8804394563bc8ff627 | 2021-07-23T01:07:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ehdwns1516 | null | ehdwns1516/gpt2_review_star5 | 18 | null | transformers | 8,771 | # gpt2_review_star5
* This model was trained on review_body entries with a star rating of 5 from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
* Input the text for which you want to generate a review.
* If the context is longer than 1,200 characters, it may be cut in the middle and the result may not come out well.
review generator DEMO: [Ainize DEMO](https://main-review-generator-ehdwns1516.endpoint.ainize.ai/)
review generator API: [Ainize API](https://ainize.web.app/redirect?git_repo=https://github.com/ehdwns1516/review_generator)
## Model links for each 1 to 5 star
* [ehdwns1516/gpt2_review_star1](https://huggingface.co/ehdwns1516/gpt2_review_star1)
* [ehdwns1516/gpt2_review_star2](https://huggingface.co/ehdwns1516/gpt2_review_star2)
* [ehdwns1516/gpt2_review_star3](https://huggingface.co/ehdwns1516/gpt2_review_star3)
* [ehdwns1516/gpt2_review_star4](https://huggingface.co/ehdwns1516/gpt2_review_star4)
* [ehdwns1516/gpt2_review_star5](https://huggingface.co/ehdwns1516/gpt2_review_star5)
## Overview
Language model: [gpt2](https://huggingface.co/gpt2)
Language: English
Training data: review_body entries with a star rating of 5 from the [amazon_review dataset](https://huggingface.co/datasets/amazon_reviews_multi).
Code: See [Ainize Workspace](https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ehdwns1516/gpt2_review_fine-tunning_note)
## Usage
## In Transformers
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/gpt2_review_star5")
model = AutoModelWithLMHead.from_pretrained("ehdwns1516/gpt2_review_star5")

generator = pipeline(
    "text-generation",
    model=model,       # reuse the loaded model instead of re-downloading it by name
    tokenizer=tokenizer
)
context = "your context"
result = dict()
result[0] = generator(context)[0]
```
|
ekkasilina/big_baseline | 0c1e97ab9da06ab6ac73c33dd8325b2040449e0c | 2021-11-01T11:24:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ekkasilina | null | ekkasilina/big_baseline | 18 | null | transformers | 8,772 | Entry not found |
emrecan/bert-base-multilingual-cased-multinli_tr | 34672963e95d65dcb94071a698b039929290465d | 2021-12-01T19:45:01.000Z | [
"pytorch",
"bert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
] | zero-shot-classification | false | emrecan | null | emrecan/bert-base-multilingual-cased-multinli_tr | 18 | null | transformers | 8,773 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
|
emrecan/convbert-base-turkish-mc4-cased-allnli_tr | 84be0ca74dbb0ac436ee46eef0ddd0f6b47cd579 | 2021-12-02T14:57:01.000Z | [
"pytorch",
"convbert",
"text-classification",
"tr",
"dataset:nli_tr",
"transformers",
"zero-shot-classification",
"nli",
"license:apache-2.0"
] | zero-shot-classification | false | emrecan | null | emrecan/convbert-base-turkish-mc4-cased-allnli_tr | 18 | 1 | transformers | 8,774 | ---
language:
- tr
tags:
- zero-shot-classification
- nli
- pytorch
pipeline_tag: zero-shot-classification
license: apache-2.0
datasets:
- nli_tr
metrics:
- accuracy
widget:
- text: "Dolar yükselmeye devam ediyor."
candidate_labels: "ekonomi, siyaset, spor"
- text: "Senaryo çok saçmaydı, beğendim diyemem."
candidate_labels: "olumlu, olumsuz"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convbert-base-turkish-mc4-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/convbert-base-turkish-mc4-cased](https://huggingface.co/dbmdz/convbert-base-turkish-mc4-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5541
- Accuracy: 0.8111
## Model description
More information needed
## Intended uses & limitations
More information needed
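As an illustration (mirroring the widget examples above, not an officially documented recipe), the model can be used for Turkish zero-shot classification:
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/convbert-base-turkish-mc4-cased-allnli_tr",
)

# A Turkish hypothesis_template may improve results; the default English template is used here.
print(classifier("Dolar yükselmeye devam ediyor.", candidate_labels=["ekonomi", "siyaset", "spor"]))
```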
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7338 | 0.03 | 1000 | 0.6722 | 0.7236 |
| 0.603 | 0.07 | 2000 | 0.6465 | 0.7399 |
| 0.5605 | 0.1 | 3000 | 0.5801 | 0.7728 |
| 0.55 | 0.14 | 4000 | 0.5994 | 0.7626 |
| 0.529 | 0.17 | 5000 | 0.5720 | 0.7697 |
| 0.5196 | 0.2 | 6000 | 0.5692 | 0.7769 |
| 0.5117 | 0.24 | 7000 | 0.5725 | 0.7785 |
| 0.5044 | 0.27 | 8000 | 0.5532 | 0.7787 |
| 0.5016 | 0.31 | 9000 | 0.5546 | 0.7812 |
| 0.5031 | 0.34 | 10000 | 0.5461 | 0.7870 |
| 0.4949 | 0.37 | 11000 | 0.5725 | 0.7826 |
| 0.4894 | 0.41 | 12000 | 0.5419 | 0.7933 |
| 0.4796 | 0.44 | 13000 | 0.5278 | 0.7914 |
| 0.4795 | 0.48 | 14000 | 0.5193 | 0.7953 |
| 0.4713 | 0.51 | 15000 | 0.5534 | 0.7771 |
| 0.4738 | 0.54 | 16000 | 0.5098 | 0.8039 |
| 0.481 | 0.58 | 17000 | 0.5244 | 0.7958 |
| 0.4634 | 0.61 | 18000 | 0.5215 | 0.7972 |
| 0.465 | 0.65 | 19000 | 0.5129 | 0.7985 |
| 0.4624 | 0.68 | 20000 | 0.5062 | 0.8047 |
| 0.4597 | 0.71 | 21000 | 0.5114 | 0.8029 |
| 0.4571 | 0.75 | 22000 | 0.5070 | 0.8073 |
| 0.4602 | 0.78 | 23000 | 0.5115 | 0.7993 |
| 0.4552 | 0.82 | 24000 | 0.5085 | 0.8052 |
| 0.4538 | 0.85 | 25000 | 0.5118 | 0.7974 |
| 0.4517 | 0.88 | 26000 | 0.5036 | 0.8044 |
| 0.4517 | 0.92 | 27000 | 0.4930 | 0.8062 |
| 0.4413 | 0.95 | 28000 | 0.5307 | 0.7964 |
| 0.4483 | 0.99 | 29000 | 0.5195 | 0.7938 |
| 0.4036 | 1.02 | 30000 | 0.5238 | 0.8029 |
| 0.3724 | 1.05 | 31000 | 0.5125 | 0.8082 |
| 0.3777 | 1.09 | 32000 | 0.5099 | 0.8075 |
| 0.3753 | 1.12 | 33000 | 0.5172 | 0.8053 |
| 0.367 | 1.15 | 34000 | 0.5188 | 0.8053 |
| 0.3819 | 1.19 | 35000 | 0.5218 | 0.8046 |
| 0.363 | 1.22 | 36000 | 0.5202 | 0.7993 |
| 0.3794 | 1.26 | 37000 | 0.5240 | 0.8048 |
| 0.3749 | 1.29 | 38000 | 0.5026 | 0.8054 |
| 0.367 | 1.32 | 39000 | 0.5198 | 0.8075 |
| 0.3759 | 1.36 | 40000 | 0.5298 | 0.7993 |
| 0.3701 | 1.39 | 41000 | 0.5072 | 0.8091 |
| 0.3742 | 1.43 | 42000 | 0.5071 | 0.8098 |
| 0.3706 | 1.46 | 43000 | 0.5317 | 0.8037 |
| 0.3716 | 1.49 | 44000 | 0.5034 | 0.8052 |
| 0.3717 | 1.53 | 45000 | 0.5258 | 0.8012 |
| 0.3714 | 1.56 | 46000 | 0.5195 | 0.8050 |
| 0.3781 | 1.6 | 47000 | 0.5004 | 0.8104 |
| 0.3725 | 1.63 | 48000 | 0.5124 | 0.8113 |
| 0.3624 | 1.66 | 49000 | 0.5040 | 0.8094 |
| 0.3657 | 1.7 | 50000 | 0.4979 | 0.8111 |
| 0.3669 | 1.73 | 51000 | 0.4968 | 0.8100 |
| 0.3636 | 1.77 | 52000 | 0.5075 | 0.8079 |
| 0.36 | 1.8 | 53000 | 0.4985 | 0.8110 |
| 0.3624 | 1.83 | 54000 | 0.5125 | 0.8070 |
| 0.366 | 1.87 | 55000 | 0.4918 | 0.8117 |
| 0.3655 | 1.9 | 56000 | 0.5051 | 0.8109 |
| 0.3609 | 1.94 | 57000 | 0.5083 | 0.8105 |
| 0.3672 | 1.97 | 58000 | 0.5129 | 0.8085 |
| 0.3545 | 2.0 | 59000 | 0.5467 | 0.8109 |
| 0.2938 | 2.04 | 60000 | 0.5635 | 0.8049 |
| 0.29 | 2.07 | 61000 | 0.5781 | 0.8041 |
| 0.2992 | 2.11 | 62000 | 0.5470 | 0.8077 |
| 0.2957 | 2.14 | 63000 | 0.5765 | 0.8073 |
| 0.292 | 2.17 | 64000 | 0.5472 | 0.8106 |
| 0.2893 | 2.21 | 65000 | 0.5590 | 0.8085 |
| 0.2883 | 2.24 | 66000 | 0.5535 | 0.8064 |
| 0.2923 | 2.28 | 67000 | 0.5508 | 0.8095 |
| 0.2868 | 2.31 | 68000 | 0.5679 | 0.8098 |
| 0.2892 | 2.34 | 69000 | 0.5660 | 0.8057 |
| 0.292 | 2.38 | 70000 | 0.5494 | 0.8088 |
| 0.286 | 2.41 | 71000 | 0.5653 | 0.8085 |
| 0.2939 | 2.45 | 72000 | 0.5673 | 0.8070 |
| 0.286 | 2.48 | 73000 | 0.5600 | 0.8092 |
| 0.2844 | 2.51 | 74000 | 0.5508 | 0.8095 |
| 0.2913 | 2.55 | 75000 | 0.5645 | 0.8088 |
| 0.2859 | 2.58 | 76000 | 0.5677 | 0.8095 |
| 0.2892 | 2.62 | 77000 | 0.5598 | 0.8113 |
| 0.2898 | 2.65 | 78000 | 0.5618 | 0.8096 |
| 0.2814 | 2.68 | 79000 | 0.5664 | 0.8103 |
| 0.2917 | 2.72 | 80000 | 0.5484 | 0.8122 |
| 0.2907 | 2.75 | 81000 | 0.5522 | 0.8116 |
| 0.2896 | 2.79 | 82000 | 0.5540 | 0.8093 |
| 0.2907 | 2.82 | 83000 | 0.5469 | 0.8104 |
| 0.2882 | 2.85 | 84000 | 0.5471 | 0.8122 |
| 0.2878 | 2.89 | 85000 | 0.5532 | 0.8108 |
| 0.2858 | 2.92 | 86000 | 0.5511 | 0.8115 |
| 0.288 | 2.96 | 87000 | 0.5491 | 0.8111 |
| 0.2834 | 2.99 | 88000 | 0.5541 | 0.8111 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
formermagic/roberta-base-python-1m | bc5b171a877af5ffa621222d6b65eea696ab92aa | 2021-05-20T16:19:17.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"py",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | formermagic | null | formermagic/roberta-base-python-1m | 18 | null | transformers | 8,775 | ---
license: mit
language: py
thumbnail: https://avatars.githubusercontent.com/u/70610668?s=400&u=f0699303289113c125e8686338739d9a63d5826c&v=4
tags:
- roberta
- pytorch
---
# roberta-base-python-1m |
gagan3012/k2t-test3 | 59730f6ff36b5405b0409f8354a548ced295908a | 2021-07-09T19:57:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:WebNLG",
"dataset:Dart",
"transformers",
"keytotext",
"k2t",
"Keywords to Sentences",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gagan3012 | null | gagan3012/k2t-test3 | 18 | null | transformers | 8,776 | ---
language: "en"
thumbnail: "Keywords to Sentences"
tags:
- keytotext
- k2t
- Keywords to Sentences
license: "MIT"
datasets:
- WebNLG
- Dart
metrics:
- NLG
model-index:
- name: k2t-test3
---
# keytotext
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/notebooks/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
[](https://github.com/gagan3012/keytotext#api)
[](https://hub.docker.com/r/gagan30/keytotext)
[](https://huggingface.co/models?filter=keytotext)
[](https://keytotext.readthedocs.io/en/latest/?badge=latest)
[](https://github.com/psf/black)

The idea is to build a model which takes keywords as input and generates sentences as output.
Potential use cases include:
- Marketing
- Search Engine Optimization
- Topic generation etc.
- Fine tuning of topic modeling models |
gayanin/bart-finetuned-pubmed | 4e130b77b7a026d1296b4bb33b428f777af96b86 | 2021-11-04T11:03:30.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gayanin | null | gayanin/bart-finetuned-pubmed | 18 | null | transformers | 8,777 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5363
- Rouge2 Precision: 0.3459
- Rouge2 Recall: 0.2455
- Rouge2 Fmeasure: 0.2731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
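Expressed as `Seq2SeqTrainingArguments`, these settings correspond roughly to the sketch below (argument names follow the Hugging Face Trainer API; `output_dir` and anything not listed above are assumptions or library defaults, and this is not the author's training script):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-pubmed",   # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                            # "Native AMP" mixed precision
)
```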
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.652 | 1.0 | 1125 | 1.5087 | 0.3647 | 0.2425 | 0.2772 |
| 1.4695 | 2.0 | 2250 | 1.5039 | 0.3448 | 0.2457 | 0.2732 |
| 1.3714 | 3.0 | 3375 | 1.4842 | 0.3509 | 0.2474 | 0.277 |
| 1.2734 | 4.0 | 4500 | 1.4901 | 0.3452 | 0.2426 | 0.2716 |
| 1.1853 | 5.0 | 5625 | 1.5152 | 0.3658 | 0.2371 | 0.2744 |
| 1.0975 | 6.0 | 6750 | 1.5133 | 0.3529 | 0.2417 | 0.2729 |
| 1.0448 | 7.0 | 7875 | 1.5203 | 0.3485 | 0.2464 | 0.275 |
| 0.9999 | 8.0 | 9000 | 1.5316 | 0.3437 | 0.2435 | 0.2719 |
| 0.9732 | 9.0 | 10125 | 1.5338 | 0.3464 | 0.2446 | 0.2732 |
| 0.954 | 10.0 | 11250 | 1.5363 | 0.3459 | 0.2455 | 0.2731 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ghadeermobasher/BC2GM-Gene-Modified_PubMedBERT | 27909a0b5cd4393095ef379fd2961a6cc7d10d8b | 2022-01-22T01:53:24.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC2GM-Gene-Modified_PubMedBERT | 18 | null | transformers | 8,778 | Entry not found |
google/t5-large-ssm-nqo | 3300329c72f0a6770409f50e5de16fb341026fb4 | 2021-06-23T01:42:15.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:natural_questions",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-large-ssm-nqo | 18 | null | transformers | 8,779 | ---
language: en
datasets:
- c4
- wikipedia
- natural_questions
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions).
**Note**: The model was fine-tuned on 90% of the train splits of [Natural Questions (NQ)](https://huggingface.co/datasets/natural_questions) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Natural Questions - Test Set
|Id | link | Exact Match |
|---|---|---|
|**T5-large**|**https://huggingface.co/google/t5-large-ssm-nqo**|**29.0**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-nqo|35.2|
|T5-3b|https://huggingface.co/google/t5-3b-ssm-nqo|31.7|
|T5-11b|https://huggingface.co/google/t5-11b-ssm-nqo|34.8|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-large-ssm-nqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-large-ssm-nqo")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
google/t5-xxl-ssm-tqao | e1690965ac9c779c487094e7b49dfd35de4f3ab7 | 2020-12-07T08:37:04.000Z | [
"pytorch",
"tf",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"dataset:trivia_qa",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-xxl-ssm-tqao | 18 | null | transformers | 8,780 | ---
language: en
datasets:
- c4
- wikipedia
- trivia_qa
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa).
**Note**: The model was fine-tuned on 90% of the train splits of [Trivia QA (TQA)](https://huggingface.co/datasets/trivia_qa) for 20k steps and validated on the held-out 10% of the train split.
Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Results on Trivia QA - Test Set
|Id | link | Exact Match |
|---|---|---|
|T5-11b|https://huggingface.co/google/t5-large-ssm-tqao|51.0|
|**T5-xxl**|**https://huggingface.co/google/t5-xxl-ssm-tqao**|**51.9**|
## Usage
The model can be used as follows for **closed book question answering**:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xxl-ssm-tqao")
t5_tok = AutoTokenizer.from_pretrained("google/t5-xxl-ssm-tqao")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
guocheng98/HelsinkiNLP-FineTuned-Legal-es-zh | 59b46b01abdee23f942426e220ea532a5ec030b4 | 2021-06-24T22:54:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"zh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | guocheng98 | null | guocheng98/HelsinkiNLP-FineTuned-Legal-es-zh | 18 | null | transformers | 8,781 | ---
language:
- es
- zh
tags:
- translation
license: apache-2.0
---
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-es-zh](https://huggingface.co/Helsinki-NLP/opus-tatoeba-es-zh) on a dataset of legal domain constructed by the author himself.
# Intended uses & limitations
This model is the result of a master's thesis for the Tradumatics: Translation Technologies program at the Autonomous University of Barcelona.
Please refer to the GitHub repo created for this thesis for the full text and related open-sourced materials: https://github.com/guocheng98/MUTTT2020_TFM_ZGC
The thesis explains various theories and certain algorithmic details of neural machine translation, so this fine-tuned model only serves as a hands-on practice example for that objective and is not intended for production use.
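For completeness, a minimal inference sketch with the Marian architecture is shown below (the example sentence is illustrative, not taken from the thesis corpus):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "guocheng98/HelsinkiNLP-FineTuned-Legal-es-zh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = ["El contrato se regirá por la legislación española."]  # illustrative legal sentence
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```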
# Training and evaluation data
The dataset is constructed from the Chinese translations of the Spanish Civil Code, the Spanish Constitution, and many other laws & regulations found in the database China Law Info (北大法宝, Beida Fabao), along with their source texts from the Boletín Oficial del Estado and EUR-Lex.
There are 9,972 sentence pairs in total: 1,000 are used for evaluation and the rest for training.
# Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10
- mixed_precision_training: Native AMP
- weight_decay: 0.01
- early_stopping_patience: 8
# Training results
Best validation loss achieved at step 5600.
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9584 | 0.36 | 400 | 2.6800 |
| 2.6402 | 0.71 | 800 | 2.5017 |
| 2.5038 | 1.07 | 1200 | 2.3907 |
| 2.3279 | 1.43 | 1600 | 2.2999 |
| 2.2258 | 1.78 | 2000 | 2.2343 |
| 2.1061 | 2.14 | 2400 | 2.1961 |
| 1.9279 | 2.5 | 2800 | 2.1569 |
| 1.9059 | 2.85 | 3200 | 2.1245 |
| 1.7491 | 3.21 | 3600 | 2.1227 |
| 1.6301 | 3.57 | 4000 | 2.1169 |
| 1.6871 | 3.92 | 4400 | 2.0979 |
| 1.5203 | 4.28 | 4800 | 2.1074 |
| 1.4646 | 4.63 | 5200 | 2.1024 |
| 1.4739 | 4.99 | 5600 | 2.0905 |
| 1.338 | 5.35 | 6000 | 2.0946 |
| 1.3152 | 5.7 | 6400 | 2.0974 |
| 1.306 | 6.06 | 6800 | 2.0985 |
| 1.1991 | 6.42 | 7200 | 2.0962 |
| 1.2113 | 6.77 | 7600 | 2.1092 |
| 1.1983 | 7.13 | 8000 | 2.1060 |
| 1.1238 | 7.49 | 8400 | 2.1102 |
| 1.1417 | 7.84 | 8800 | 2.1078 |
# Framework versions
- Transformers 4.7.0
- Pytorch 1.8.1+cu101
- Datasets 1.8.0
- Tokenizers 0.10.3
|
hfeng/bert_base_uncased_conll2003 | 0cd94e46aeeb6a589b59007d73720ec6311c1188 | 2021-08-23T14:14:40.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | hfeng | null | hfeng/bert_base_uncased_conll2003 | 18 | null | transformers | 8,782 | # BERT base model (uncased) fine-tuned on CoNLL-2003
This model was trained following the PyTorch token-classification example from Hugging Face: https://github.com/huggingface/transformers/tree/master/examples/pytorch/token-classification.
There were no tweaks to the model or dataset.
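A minimal inference sketch (not part of the original training run; the example sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="hfeng/bert_base_uncased_conll2003",
    aggregation_strategy="simple",  # group word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```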
|
howey/electra-large-squad | b200bdea716385dfb10250cf044958ae0574662b | 2021-06-21T06:12:30.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | howey | null | howey/electra-large-squad | 18 | null | transformers | 8,783 | Entry not found |
huggingtweets/beingandslime | 9cc7398a3dcb52316eaabf83cf3e428399e64e83 | 2021-05-21T20:17:38.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/beingandslime | 18 | null | transformers | 8,784 | ---
language: en
thumbnail: https://www.huggingtweets.com/beingandslime/1616648200015/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1348756593052176385/TjNU6-T__400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Evan (master saucier) ๐ค AI Bot </div>
<div style="font-size: 15px">@beingandslime bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@beingandslime's tweets](https://twitter.com/beingandslime).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 55 |
| Short tweets | 473 |
| Tweets kept | 2717 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hj6ebde/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beingandslime's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vtowykv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vtowykv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/beingandslime')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/gohere4porn-onlinepete | 4836c01c942aa5138c179f42015d75cd693c1eba | 2021-07-07T06:07:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/gohere4porn-onlinepete | 18 | null | transformers | 8,785 | ---
language: en
thumbnail: https://www.huggingtweets.com/gohere4porn-onlinepete/1625638031693/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/456958582731603969/QZKpv6eI_400x400.jpeg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1324123540556316673/YQjGLFLJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">im pete online & Grateful King</div>
<div style="text-align: center; font-size: 14px;">@gohere4porn-onlinepete</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from im pete online & Grateful King.
| Data | im pete online | Grateful King |
| --- | --- | --- |
| Tweets downloaded | 3190 | 2141 |
| Retweets | 94 | 557 |
| Short tweets | 1003 | 217 |
| Tweets kept | 2093 | 1367 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w0274vc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gohere4porn-onlinepete's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rvkp85n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rvkp85n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gohere4porn-onlinepete')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/nerdyboy77 | f215e190e0bf6c9cab591f52976c5ad61e91093c | 2021-05-22T16:03:13.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/nerdyboy77 | 18 | null | transformers | 8,786 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1275928927693930502/Pbhj-IWx_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Pranav ๐บ ๐ค AI Bot </div>
<div style="font-size: 15px">@nerdyboy77 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@nerdyboy77's tweets](https://twitter.com/nerdyboy77).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1359 |
| Retweets | 396 |
| Short tweets | 120 |
| Tweets kept | 843 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bp0hino/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nerdyboy77's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/28folapu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/28folapu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nerdyboy77')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/thenewfiction | f5b2671ee142d968591d434bd0a8018433f24b7d | 2021-05-23T01:51:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/thenewfiction | 18 | null | transformers | 8,787 | ---
language: en
thumbnail: https://www.huggingtweets.com/thenewfiction/1617358718682/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000818526633/458c31969d9614eced26eaf87e34ded3_400x400.jpeg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">the new fiction ๐ค AI Bot </div>
<div style="font-size: 15px">@thenewfiction bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@thenewfiction's tweets](https://twitter.com/thenewfiction).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 1625 |
| Retweets | 11 |
| Short tweets | 99 |
| Tweets kept | 1515 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3j5k2uja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thenewfiction's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2e7x2n0q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2e7x2n0q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thenewfiction')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ietz/bert-base-uncased-finetuned-jira-jira-issue-titles-and-bodies | 56212b95c8d97343b63887d7bd038f2d717948e8 | 2022-02-04T14:56:48.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | ietz | null | ietz/bert-base-uncased-finetuned-jira-jira-issue-titles-and-bodies | 18 | null | transformers | 8,788 | Entry not found |
imvladikon/wav2vec2-xls-r-300m-hebrew | 98d752fcc0c0383852f1c3947d5fdae94aff2280 | 2022-03-23T18:30:08.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"he",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"model-index"
] | automatic-speech-recognition | false | imvladikon | null | imvladikon/wav2vec2-xls-r-300m-hebrew | 18 | 1 | transformers | 8,789 | ---
language:
- he
tags:
- automatic-speech-recognition
- generated_from_trainer
- he
- hf-asr-leaderboard
- robust-speech-event
model-index:
- name: wav2vec2-xls-r-300m-hebrew
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Custom Dataset
type: custom
args: he
metrics:
- name: Test WER
type: wer
value: 23.18
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-hebrew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m), trained on private datasets in two stages: it was first fine-tuned on a small dataset of good samples; the resulting model was then fine-tuned on a large dataset that combines the small good dataset with varied samples from different sources and an unlabeled dataset that was weakly labeled using the previously trained model.
Small dataset:
| split |size(gb) | n_samples | duration(hrs)| |
|---|---|---|---|---|
|train|4.19| 20306 | 28 | |
|dev |1.05| 5076 | 7 | |
Large dataset:
| split |size(gb) | n_samples | duration(hrs)| |
|---|---|---|---|---|
|train|12.3| 90777 | 69 | |
|dev |2.39| 20246 | 14* | |
(*weakly labeled data wasn't used in validation set)
After the first training it achieves:
on the small dataset
- Loss: 0.5438
- WER: 0.1773
on the large dataset
- WER: 0.3811
After the second training:
on the small dataset
- WER: 0.1697
on the large dataset
- Loss: 0.4502
- WER: 0.2318
## Model description
More information needed
## Intended uses & limitations
More information needed
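As an illustration (not from the original card), transcription can be run with the ASR pipeline on 16 kHz mono audio:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="imvladikon/wav2vec2-xls-r-300m-hebrew",
)

# "sample.wav" is a placeholder path; the audio should be 16 kHz mono Hebrew speech.
print(asr("sample.wav")["text"])
```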
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
#### First training
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 3.15 | 1000 | 0.5203 | 0.4333 |
| 1.4284 | 6.31 | 2000 | 0.4816 | 0.3951 |
| 1.4284 | 9.46 | 3000 | 0.4315 | 0.3546 |
| 1.283 | 12.62 | 4000 | 0.4278 | 0.3404 |
| 1.283 | 15.77 | 5000 | 0.4090 | 0.3054 |
| 1.1777 | 18.93 | 6000 | 0.3893 | 0.3006 |
| 1.1777 | 22.08 | 7000 | 0.3968 | 0.2857 |
| 1.0994 | 25.24 | 8000 | 0.3892 | 0.2751 |
| 1.0994 | 28.39 | 9000 | 0.4061 | 0.2690 |
| 1.0323 | 31.54 | 10000 | 0.4114 | 0.2507 |
| 1.0323 | 34.7 | 11000 | 0.4021 | 0.2508 |
| 0.9623 | 37.85 | 12000 | 0.4032 | 0.2378 |
| 0.9623 | 41.01 | 13000 | 0.4148 | 0.2374 |
| 0.9077 | 44.16 | 14000 | 0.4350 | 0.2323 |
| 0.9077 | 47.32 | 15000 | 0.4515 | 0.2246 |
| 0.8573 | 50.47 | 16000 | 0.4474 | 0.2180 |
| 0.8573 | 53.63 | 17000 | 0.4649 | 0.2171 |
| 0.8083 | 56.78 | 18000 | 0.4455 | 0.2102 |
| 0.8083 | 59.94 | 19000 | 0.4587 | 0.2092 |
| 0.769 | 63.09 | 20000 | 0.4794 | 0.2012 |
| 0.769 | 66.25 | 21000 | 0.4845 | 0.2007 |
| 0.7308 | 69.4 | 22000 | 0.4937 | 0.2008 |
| 0.7308 | 72.55 | 23000 | 0.4920 | 0.1895 |
| 0.6927 | 75.71 | 24000 | 0.5179 | 0.1911 |
| 0.6927 | 78.86 | 25000 | 0.5202 | 0.1877 |
| 0.6622 | 82.02 | 26000 | 0.5266 | 0.1840 |
| 0.6622 | 85.17 | 27000 | 0.5351 | 0.1854 |
| 0.6315 | 88.33 | 28000 | 0.5373 | 0.1811 |
| 0.6315 | 91.48 | 29000 | 0.5331 | 0.1792 |
| 0.6075 | 94.64 | 30000 | 0.5390 | 0.1779 |
| 0.6075 | 97.79 | 31000 | 0.5459 | 0.1773 |
#### Second training
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.7 | 1000 | 0.5371 | 0.3811 |
| 1.3606 | 1.41 | 2000 | 0.5247 | 0.3902 |
| 1.3606 | 2.12 | 3000 | 0.5126 | 0.3859 |
| 1.3671 | 2.82 | 4000 | 0.5062 | 0.3828 |
| 1.3671 | 3.53 | 5000 | 0.4979 | 0.3672 |
| 1.3421 | 4.23 | 6000 | 0.4906 | 0.3816 |
| 1.3421 | 4.94 | 7000 | 0.4784 | 0.3651 |
| 1.328 | 5.64 | 8000 | 0.4810 | 0.3669 |
| 1.328 | 6.35 | 9000 | 0.4747 | 0.3597 |
| 1.3109 | 7.05 | 10000 | 0.4813 | 0.3808 |
| 1.3109 | 7.76 | 11000 | 0.4631 | 0.3561 |
| 1.2873 | 8.46 | 12000 | 0.4603 | 0.3431 |
| 1.2873 | 9.17 | 13000 | 0.4579 | 0.3533 |
| 1.2661 | 9.87 | 14000 | 0.4471 | 0.3365 |
| 1.2661 | 10.58 | 15000 | 0.4584 | 0.3437 |
| 1.249 | 11.28 | 16000 | 0.4461 | 0.3454 |
| 1.249 | 11.99 | 17000 | 0.4482 | 0.3367 |
| 1.2322 | 12.69 | 18000 | 0.4464 | 0.3335 |
| 1.2322 | 13.4 | 19000 | 0.4427 | 0.3454 |
| 1.22 | 14.1 | 20000 | 0.4440 | 0.3395 |
| 1.22 | 14.81 | 21000 | 0.4459 | 0.3378 |
| 1.2044 | 15.51 | 22000 | 0.4406 | 0.3199 |
| 1.2044 | 16.22 | 23000 | 0.4398 | 0.3155 |
| 1.1913 | 16.92 | 24000 | 0.4237 | 0.3150 |
| 1.1913 | 17.63 | 25000 | 0.4287 | 0.3279 |
| 1.1705 | 18.34 | 26000 | 0.4253 | 0.3103 |
| 1.1705 | 19.04 | 27000 | 0.4234 | 0.3098 |
| 1.1564 | 19.75 | 28000 | 0.4174 | 0.3076 |
| 1.1564 | 20.45 | 29000 | 0.4260 | 0.3160 |
| 1.1461 | 21.16 | 30000 | 0.4235 | 0.3036 |
| 1.1461 | 21.86 | 31000 | 0.4309 | 0.3055 |
| 1.1285 | 22.57 | 32000 | 0.4264 | 0.3006 |
| 1.1285 | 23.27 | 33000 | 0.4201 | 0.2880 |
| 1.1135 | 23.98 | 34000 | 0.4131 | 0.2975 |
| 1.1135 | 24.68 | 35000 | 0.4202 | 0.2849 |
| 1.0968 | 25.39 | 36000 | 0.4105 | 0.2888 |
| 1.0968 | 26.09 | 37000 | 0.4210 | 0.2834 |
| 1.087 | 26.8 | 38000 | 0.4123 | 0.2843 |
| 1.087 | 27.5 | 39000 | 0.4216 | 0.2803 |
| 1.0707 | 28.21 | 40000 | 0.4161 | 0.2787 |
| 1.0707 | 28.91 | 41000 | 0.4186 | 0.2740 |
| 1.0575 | 29.62 | 42000 | 0.4118 | 0.2845 |
| 1.0575 | 30.32 | 43000 | 0.4243 | 0.2773 |
| 1.0474 | 31.03 | 44000 | 0.4221 | 0.2707 |
| 1.0474 | 31.73 | 45000 | 0.4138 | 0.2700 |
| 1.0333 | 32.44 | 46000 | 0.4102 | 0.2638 |
| 1.0333 | 33.15 | 47000 | 0.4162 | 0.2650 |
| 1.0191 | 33.85 | 48000 | 0.4155 | 0.2636 |
| 1.0191 | 34.56 | 49000 | 0.4129 | 0.2656 |
| 1.0087 | 35.26 | 50000 | 0.4157 | 0.2632 |
| 1.0087 | 35.97 | 51000 | 0.4090 | 0.2654 |
| 0.9901 | 36.67 | 52000 | 0.4183 | 0.2587 |
| 0.9901 | 37.38 | 53000 | 0.4251 | 0.2648 |
| 0.9795 | 38.08 | 54000 | 0.4229 | 0.2555 |
| 0.9795 | 38.79 | 55000 | 0.4176 | 0.2546 |
| 0.9644 | 39.49 | 56000 | 0.4223 | 0.2513 |
| 0.9644 | 40.2 | 57000 | 0.4244 | 0.2530 |
| 0.9534 | 40.9 | 58000 | 0.4175 | 0.2538 |
| 0.9534 | 41.61 | 59000 | 0.4213 | 0.2505 |
| 0.9397 | 42.31 | 60000 | 0.4275 | 0.2565 |
| 0.9397 | 43.02 | 61000 | 0.4315 | 0.2528 |
| 0.9269 | 43.72 | 62000 | 0.4316 | 0.2501 |
| 0.9269 | 44.43 | 63000 | 0.4247 | 0.2471 |
| 0.9175 | 45.13 | 64000 | 0.4376 | 0.2469 |
| 0.9175 | 45.84 | 65000 | 0.4335 | 0.2450 |
| 0.9026 | 46.54 | 66000 | 0.4336 | 0.2452 |
| 0.9026 | 47.25 | 67000 | 0.4400 | 0.2427 |
| 0.8929 | 47.95 | 68000 | 0.4382 | 0.2429 |
| 0.8929 | 48.66 | 69000 | 0.4361 | 0.2415 |
| 0.8786 | 49.37 | 70000 | 0.4413 | 0.2398 |
| 0.8786 | 50.07 | 71000 | 0.4392 | 0.2415 |
| 0.8714 | 50.78 | 72000 | 0.4345 | 0.2406 |
| 0.8714 | 51.48 | 73000 | 0.4475 | 0.2402 |
| 0.8589 | 52.19 | 74000 | 0.4473 | 0.2374 |
| 0.8589 | 52.89 | 75000 | 0.4457 | 0.2357 |
| 0.8493 | 53.6 | 76000 | 0.4462 | 0.2366 |
| 0.8493 | 54.3 | 77000 | 0.4494 | 0.2356 |
| 0.8395 | 55.01 | 78000 | 0.4472 | 0.2352 |
| 0.8395 | 55.71 | 79000 | 0.4490 | 0.2339 |
| 0.8295 | 56.42 | 80000 | 0.4489 | 0.2318 |
| 0.8295 | 57.12 | 81000 | 0.4469 | 0.2320 |
| 0.8225 | 57.83 | 82000 | 0.4478 | 0.2321 |
| 0.8225 | 58.53 | 83000 | 0.4525 | 0.2326 |
| 0.816 | 59.24 | 84000 | 0.4532 | 0.2316 |
| 0.816 | 59.94 | 85000 | 0.4502 | 0.2318 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
janpase97/codeformer-pretrained | 4ad0aa950d6abc8e1b5b0176dc398a6ea84003f7 | 2022-03-27T07:57:53.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | janpase97 | null | janpase97/codeformer-pretrained | 18 | null | transformers | 8,790 | Entry not found |
jason9693/soongsil-bert-small | f45d1642a040d497a8640d30856ba10ad53e1003 | 2022-07-13T05:33:10.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"ko",
"transformers",
"autotrain_compatible"
] | fill-mask | false | jason9693 | null | jason9693/soongsil-bert-small | 18 | null | transformers | 8,791 | ---
language: ko
widget:
- ์ญ์ค๋ํ๊ต ๊ธ๋ก๋ฒ<mask>ํ๋ถ
--- |
justin871030/bert-base-uncased-goemotions-group-finetuned | 5db3fc03ed13fb6142250a4630edf07e88f53268 | 2022-02-09T17:22:07.000Z | [
"pytorch",
"bert",
"en",
"dataset:go_emotions",
"transformers",
"go-emotion",
"text-classification",
"license:mit"
] | text-classification | false | justin871030 | null | justin871030/bert-base-uncased-goemotions-group-finetuned | 18 | null | transformers | 8,792 | ---
language: en
tags:
- go-emotion
- text-classification
- pytorch
datasets:
- go_emotions
metrics:
- f1
widget:
- text: "Thanks for giving advice to the people who need it! ๐๐"
license: mit
---
## Model Description
1. Based on the pretrained uncased BERT model with a linear output layer (see the usage sketch after this list).
2. Added several commonly used emoji and tokens to the tokenizer's special token list.
3. Applied label smoothing during training.
4. Used weighted loss and focal loss to help with the classes that trained poorly.
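A minimal inference sketch — it assumes the checkpoint loads as a standard `BertForSequenceClassification`, ships its extended tokenizer, and uses a multi-label head (sigmoid per class with a 0.5 threshold), which is typical for GoEmotions but not spelled out in this card:
```python
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("justin871030/bert-base-uncased-goemotions-group-finetuned")
model = BertForSequenceClassification.from_pretrained("justin871030/bert-base-uncased-goemotions-group-finetuned")

text = "Thanks for giving advice to the people who need it! 👌🙏"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# Sigmoid over the logits and keep every emotion group above the (assumed) 0.5 threshold.
probs = model(**inputs).logits.sigmoid()[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```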
## Results
Best `Macro F1` result: 70%
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions) |
l3cube-pune/marathi-albert | 611be7da5adc388935fce74c71af9336526a5c20 | 2022-06-26T15:15:05.000Z | [
"pytorch",
"albert",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | l3cube-pune | null | l3cube-pune/marathi-albert | 18 | null | transformers | 8,793 | ---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaAlBERT
MahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[Dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).
```
@InProceedings{joshi:2022:WILDRE6,
author = {Joshi, Raviraj},
title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {97--101}
}
```
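A minimal usage sketch (the Marathi prompt below is an illustrative assumption, not an example taken from the paper):
```python
from transformers import pipeline

# Fill-mask inference with the released checkpoint.
fill_mask = pipeline("fill-mask", model="l3cube-pune/marathi-albert")

# Use the tokenizer's own mask token instead of hard-coding it.
mask = fill_mask.tokenizer.mask_token
# Roughly: "I am [MASK] at school."
print(fill_mask(f"मी शाळेत {mask} आहे."))
```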
|
lucasresck/bert-base-cased-ag-news | 3535a9ed0f6b4cbd8608e220818bb9fad87a9714 | 2021-11-09T02:11:29.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:ag_news",
"transformers",
"classification",
"license:mit"
] | text-classification | false | lucasresck | null | lucasresck/bert-base-cased-ag-news | 18 | null | transformers | 8,794 | ---
language:
- en
license: mit
tags:
- bert
- classification
datasets:
- ag_news
metrics:
- accuracy
- f1
- recall
- precision
widget:
- text: "Is it soccer or football?"
example_title: "Sports"
- text: "A new version of Ubuntu was released."
example_title: "Sci/Tech"
---
# bert-base-cased-ag-news
BERT model fine-tuned on the AG News classification dataset using a linear layer on top of the [CLS] token output, reaching 0.945 test accuracy.
### How to use
Here is how to use this model to classify a given text:
```python
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('lucasresck/bert-base-cased-ag-news')
model = BertForSequenceClassification.from_pretrained('lucasresck/bert-base-cased-ag-news')
text = "Is it soccer or football?"
encoded_input = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)
output = model(**encoded_input)
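
# Pick the highest-scoring class and map it to a label name (as stored in the model's config).
predicted_class_id = output.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])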
```
### Limitations and bias
Bias was not assessed for this model, but since pre-trained BERT is known to carry bias, this fine-tuned model is expected to carry it as well. BERT's authors note: "This bias will also affect all fine-tuned versions of this model."
## Evaluation results
```
              precision    recall  f1-score   support

           0     0.9539    0.9584    0.9562      1900
           1     0.9884    0.9879    0.9882      1900
           2     0.9251    0.9095    0.9172      1900
           3     0.9127    0.9242    0.9184      1900

    accuracy                         0.9450      7600
   macro avg     0.9450    0.9450    0.9450      7600
weighted avg     0.9450    0.9450    0.9450      7600
```
|
michaelrglass/albert-base-rci-tabmcq-col | 32b590c0503f6ca74f5609c53e11d97e749a62ff | 2021-06-16T16:07:54.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | michaelrglass | null | michaelrglass/albert-base-rci-tabmcq-col | 18 | null | transformers | 8,795 | Entry not found |
mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization | d1c20611b10deaf767b3c1bc819198cca1b957aa | 2020-12-11T21:52:51.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | mrm8488 | null | mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization | 18 | null | transformers | 8,796 | ---
language: en
license: apache-2.0
datasets:
- cnn_dailymail
tags:
- summarization
---
# Bert-mini2Bert-mini Summarization with ๐คEncoderDecoder Framework
This model is a warm-started *BERT2BERT* ([mini](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4)) model fine-tuned on the *CNN/Dailymail* summarization dataset.
The model achieves a **16.51** ROUGE-2 score on *CNN/Dailymail*'s test dataset.
For more details on how the model was fine-tuned, please refer to
[this](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) notebook.
## Results on test set ๐
| Metric | # Value |
| ------ | --------- |
| **ROUGE-2** | **16.51** |
## Model in Action ๐
```python
from transformers import BertTokenizerFast, EncoderDecoderModel
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = BertTokenizerFast.from_pretrained('mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization')
model = EncoderDecoderModel.from_pretrained('mrm8488/bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization').to(device)
def generate_summary(text):
    # Cut off at the BERT maximum input length of 512 tokens.
    inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    output = model.generate(input_ids, attention_mask=attention_mask)
    return tokenizer.decode(output[0], skip_special_tokens=True)
text = "your text to be summarized here..."
generate_summary(text)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/bert-tiny-finetuned-yahoo_answers_topics | f94a9c5a7490e9b6739aaa8b25bd03d71c702f24 | 2021-05-20T00:40:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mrm8488 | null | mrm8488/bert-tiny-finetuned-yahoo_answers_topics | 18 | 1 | transformers | 8,797 | Entry not found |
mrm8488/electricidad-base-finetuned-squadv1-es | b2e6be8b65957af74d8797826b06cc3fee70def6 | 2020-08-21T22:38:53.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/electricidad-base-finetuned-squadv1-es | 18 | null | transformers | 8,798 | Entry not found |
mschwab/va_bert_classification | 2985956612d34fc50e84ab8764816557a03bb090 | 2021-11-22T08:38:51.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | mschwab | null | mschwab/va_bert_classification | 18 | null | transformers | 8,799 | Fine-tuned bert-base model for binary vossian antonomasia detection on sentence level. |