modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tanfiona/unicausal-seq-baseline | 1f7b3a659ccaea01bdd97445fae9acafab5cc347 | 2022-07-15T09:55:29.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"license:unknown"
] | text-classification | false | tanfiona | null | tanfiona/unicausal-seq-baseline | 26 | null | transformers | 7,600 | ---
language: en
license: unknown
widget:
- text: "She fell because he pushed her."
example_title: "Causal Example 1"
- text: "He pushed her, causing her to fall."
example_title: "Causal Example 2"
- text: "She fell onto him."
example_title: "Non-causal Example 1"
- text: "He is Billy and he pushed her."
example_title: "Non-causal Example 2"
---
Binary causal sentence classification:
* LABEL_0 = Non-causal
* LABEL_1 = Causal
Trained on multiple datasets. A minimal usage sketch is shown below.
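A minimal classification sketch with the `transformers` pipeline (the example sentence is taken from the widget above; the expected label follows the mapping in this card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tanfiona/unicausal-seq-baseline")
classifier("She fell because he pushed her.")  # expected: LABEL_1 (Causal)
```
|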
Gunulhona/tbnlimodel_v2 | 12ead951a7cb226d621a95f5db670e5cce7e9ace | 2022-07-20T07:16:08.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | Gunulhona | null | Gunulhona/tbnlimodel_v2 | 26 | null | transformers | 7,601 | Entry not found |
google/ncsnpp-bedroom-256 | c62041310b619c4fd85b78865f91ceca135c3993 | 2022-07-21T14:59:57.000Z | [
"diffusers",
"arxiv:2011.13456",
"pytorch",
"unconditional-image-generation",
"license:apache-2.0"
] | unconditional-image-generation | false | google | null | google/ncsnpp-bedroom-256 | 26 | null | diffusers | 7,602 | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Score-Based Generative Modeling through Stochastic Differential Equations (SDE)
**Paper**: [Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456)
**Authors**: Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole
**Abstract**:
*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*
## Inference
*SDE* models can use **continuous** noise schedulers such as:
- [scheduling_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py)
for inference.
See the following code:
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
model_id = "google/ncsnpp-bedroom-256"
# load model and scheduler
sde_ve = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = sde_ve()["sample"]
# save image
image[0].save("sde_ve_generated_image.png")
```
Please take a look at [pipeline_score_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py)
for more details on how to write your own denoising loop.
For more information generally on how to use `diffusers` for inference, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Samples
1. 
2. 
3. 
4.  |
Evelyn18/roberta-base-spanish-squades-robertav2 | bea436a9c0ba6aeeab48f845f123b6c137300781 | 2022-07-21T16:57:29.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-robertav2 | 26 | null | transformers | 7,603 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-robertav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-robertav2
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 1.8825 |
| No log | 2.0 | 12 | 1.7787 |
| No log | 3.0 | 18 | 2.0521 |
| No log | 4.0 | 24 | 2.2991 |
| No log | 5.0 | 30 | 2.4029 |
| No log | 6.0 | 36 | 2.4358 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
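The card does not include a usage example; a minimal question-answering sketch follows (the question and context strings are made up for illustration):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/roberta-base-spanish-squades-robertav2")
qa(
    question="¿Quién puede solicitar la beca?",
    context="La beca está dirigida a estudiantes de pregrado con promedio destacado.",
)
```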
|
conan1024hao/cjkbert-base | 02c8738f7215aaf7d87f70c0c9cd8085333201fb | 2022-07-24T14:31:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | conan1024hao | null | conan1024hao/cjkbert-base | 26 | 1 | transformers | 7,604 | ---
license: cc-by-sa-4.0
---
|
noob123/original_model | ae53f53a9c91888adc61bf8e2b1ef48fe7960dc9 | 2022-07-26T15:49:24.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | noob123 | null | noob123/original_model | 26 | null | transformers | 7,605 | Entry not found |
Den4ikAI/dialog_rugpt3 | a9832b450cbeac5cf8973d22cf0997e759372ff4 | 2022-07-26T19:13:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | Den4ikAI | null | Den4ikAI/dialog_rugpt3 | 26 | null | transformers | 7,606 | ---
license: mit
language: ru
---
RUGPT-3 fine-tuned on dialogues from Yandex Toloka and Flibusta.
To get a response from the model, provide input in the following format:
"- Привет\n-"
|
derwahnsinn/gpt2-mediumTarantino | a1b97c5822db4b5eec036df115b1d850152848f5 | 2022-07-28T19:03:33.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | derwahnsinn | null | derwahnsinn/gpt2-mediumTarantino | 26 | null | transformers | 7,607 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-mediumTarantino
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-mediumTarantino
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0375
- eval_runtime: 23.1892
- eval_samples_per_second: 61.322
- eval_steps_per_second: 7.676
- epoch: 21.0
- step: 3738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 29
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
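No usage example is given; a minimal text-generation sketch (the prompt and sampling settings are assumptions):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="derwahnsinn/gpt2-mediumTarantino")
generator("Say 'what' again!", max_new_tokens=50, do_sample=True)
```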
|
ashishraics/deberta_v3_large_mlm_feedback_prize | c8401bf9cea1d3fc4d474b7f841fcc59f14ef9e6 | 2022-07-29T16:42:21.000Z | [
"pytorch",
"deberta-v2",
"feature-extraction",
"transformers"
] | feature-extraction | false | ashishraics | null | ashishraics/deberta_v3_large_mlm_feedback_prize | 26 | null | transformers | 7,608 | Entry not found |
bloom-testing/test-bloomd-350m-8bit-model | 2621f4d92a66bdc30ce2e3676227258ec9ba15f5 | 2022-07-29T23:55:22.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"transformers"
] | feature-extraction | false | bloom-testing | null | bloom-testing/test-bloomd-350m-8bit-model | 26 | null | transformers | 7,609 | Entry not found |
ArseniyBolotin/bert-multi-PAD-ner | 6b99a82f9de864e85fb891f329e4e72845118137 | 2021-05-18T17:06:50.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ArseniyBolotin | null | ArseniyBolotin/bert-multi-PAD-ner | 25 | null | transformers | 7,610 | Entry not found |
Ayran/DialoGPT-medium-harry-potter-1-through-3 | 6bc7a1680c88da1e8a47b237961287daa3ba9608 | 2021-10-12T17:14:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ayran | null | Ayran/DialoGPT-medium-harry-potter-1-through-3 | 25 | null | transformers | 7,611 | ---
tags:
- conversational
---
# DialoGPT medium model (Harry Potter 1-3) |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | 1f6d9643228adb8463a3dee24b369ec41a3235de | 2021-10-17T11:17:53.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | 25 | null | transformers | 7,612 | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID MADAR Corpus6 Model
## Model description
**CAMeLBERT-Mix DID MADAR Corpus6 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [MADAR Corpus 6](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 6 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar6')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'CAI', 'score': 0.9996405839920044},
{'label': 'DOH', 'score': 0.9997853636741638}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Cameron/BERT-mdgender-convai-ternary | 430cb08041e7752a2a0d9678957a0c1c995c1990 | 2021-05-18T17:31:21.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Cameron | null | Cameron/BERT-mdgender-convai-ternary | 25 | null | transformers | 7,613 | Entry not found |
Geotrend/bert-base-en-cased | 4c6b9131287aaec6e926821c2ccc7eb82dbba0a4 | 2021-05-18T19:03:33.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-cased | 25 | null | transformers | 7,614 | ---
language: en
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-cased")
```
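As a quick check, the same checkpoint can also be used with a fill-mask pipeline (the prompt is taken from the widget examples above):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-en-cased")
fill_mask("Paris is the capital of [MASK].")
```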
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/distilbert-base-de-cased | 52e90cc8094abc1c3cdf6fc9fedbc31065f535eb | 2021-08-16T13:33:05.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"de",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-de-cased | 25 | null | transformers | 7,615 | ---
language: de
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-de-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-de-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-de-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
HelloRusk/t5-base-parasci | 5847de1c218ee95abbfc20370d4ad19f310ec33d | 2021-06-23T02:24:58.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | HelloRusk | null | HelloRusk/t5-base-parasci | 25 | null | transformers | 7,616 | Entry not found |
Helsinki-NLP/opus-mt-ase-en | 31dc31232a3b66fc2802c779cf109a1330440ab3 | 2021-09-09T21:26:26.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ase",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ase-en | 25 | null | transformers | 7,617 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ase-en
* source languages: ase
* target languages: en
* OPUS readme: [ase-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.en | 99.5 | 0.997 |
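The card gives no loading example; a minimal MarianMT sketch (the input sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-ase-en"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["Example source sentence."], return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```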
|
Helsinki-NLP/opus-mt-bzs-en | 4a0238e6463445a99590c0abe7aed5f2f95e064d | 2021-09-09T21:27:59.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bzs",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-bzs-en | 25 | null | transformers | 7,618 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-bzs-en
* source languages: bzs
* target languages: en
* OPUS readme: [bzs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.en | 44.5 | 0.605 |
|
Helsinki-NLP/opus-mt-da-fr | 186e4c938bc1744a9ddbd67073fe572c93a494c8 | 2021-09-09T21:30:03.000Z | [
"pytorch",
"marian",
"text2text-generation",
"da",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-da-fr | 25 | null | transformers | 7,619 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-da-fr
* source languages: da
* target languages: fr
* OPUS readme: [da-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.fr | 62.2 | 0.751 |
|
Helsinki-NLP/opus-mt-en-eu | 74a16b460e9cf136feb59f58338ee491e087de8a | 2021-01-18T08:07:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"eu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-eu | 25 | 1 | transformers | 7,620 | ---
language:
- en
- eu
tags:
- translation
license: apache-2.0
---
### eng-eus
* source group: English
* target group: Basque
* OPUS readme: [eng-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.eus | 31.8 | 0.590 |
### System Info:
- hf_name: eng-eus
- source_languages: eng
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'eu']
- src_constituents: {'eng'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: eus
- short_pair: en-eu
- chrF2_score: 0.59
- bleu: 31.8
- brevity_penalty: 0.9440000000000001
- ref_len: 7080.0
- src_name: English
- tgt_name: Basque
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: eu
- prefer_old: False
- long_pair: eng-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-gem | 50d7941693cfafef0c11fd8a72297571f9df7a20 | 2021-01-18T08:08:05.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"lb",
"yi",
"gem",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-gem | 25 | 1 | transformers | 7,621 | ---
language:
- en
- da
- sv
- af
- nn
- fy
- fo
- de
- nb
- nl
- is
- lb
- yi
- gem
tags:
- translation
license: apache-2.0
---
### eng-gem
* source group: English
* target group: Germanic languages
* OPUS readme: [eng-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md)
* model: transformer
* source language(s): eng
* target language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the sketch after this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.eval.txt)
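A minimal sketch of using the required target-language token (the example sentence is an assumption; `>>dan<<` selects Danish, one of the listed target IDs):

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-gem"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# The ">>id<<" prefix picks the target language for this multilingual model.
batch = tokenizer([">>dan<< How are you today?"], return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```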
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engdeu.eng.deu | 20.9 | 0.521 |
| news-test2008-engdeu.eng.deu | 21.1 | 0.511 |
| newstest2009-engdeu.eng.deu | 20.5 | 0.516 |
| newstest2010-engdeu.eng.deu | 22.5 | 0.526 |
| newstest2011-engdeu.eng.deu | 20.5 | 0.508 |
| newstest2012-engdeu.eng.deu | 20.8 | 0.507 |
| newstest2013-engdeu.eng.deu | 24.6 | 0.534 |
| newstest2015-ende-engdeu.eng.deu | 27.9 | 0.569 |
| newstest2016-ende-engdeu.eng.deu | 33.2 | 0.607 |
| newstest2017-ende-engdeu.eng.deu | 26.5 | 0.560 |
| newstest2018-ende-engdeu.eng.deu | 39.4 | 0.648 |
| newstest2019-ende-engdeu.eng.deu | 35.0 | 0.613 |
| Tatoeba-test.eng-afr.eng.afr | 56.5 | 0.745 |
| Tatoeba-test.eng-ang.eng.ang | 6.7 | 0.154 |
| Tatoeba-test.eng-dan.eng.dan | 58.0 | 0.726 |
| Tatoeba-test.eng-deu.eng.deu | 40.3 | 0.615 |
| Tatoeba-test.eng-enm.eng.enm | 1.4 | 0.215 |
| Tatoeba-test.eng-fao.eng.fao | 7.2 | 0.304 |
| Tatoeba-test.eng-frr.eng.frr | 5.5 | 0.159 |
| Tatoeba-test.eng-fry.eng.fry | 19.4 | 0.433 |
| Tatoeba-test.eng-gos.eng.gos | 1.0 | 0.182 |
| Tatoeba-test.eng-got.eng.got | 0.3 | 0.012 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.130 |
| Tatoeba-test.eng-isl.eng.isl | 23.4 | 0.505 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.1 | 0.141 |
| Tatoeba-test.eng-ltz.eng.ltz | 20.3 | 0.379 |
| Tatoeba-test.eng.multi | 46.5 | 0.641 |
| Tatoeba-test.eng-nds.eng.nds | 20.6 | 0.458 |
| Tatoeba-test.eng-nld.eng.nld | 53.4 | 0.702 |
| Tatoeba-test.eng-non.eng.non | 0.6 | 0.166 |
| Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.679 |
| Tatoeba-test.eng-pdc.eng.pdc | 3.9 | 0.189 |
| Tatoeba-test.eng-sco.eng.sco | 33.0 | 0.542 |
| Tatoeba-test.eng-stq.eng.stq | 2.3 | 0.274 |
| Tatoeba-test.eng-swe.eng.swe | 57.9 | 0.719 |
| Tatoeba-test.eng-swg.eng.swg | 1.2 | 0.171 |
| Tatoeba-test.eng-yid.eng.yid | 7.2 | 0.304 |
### System Info:
- hf_name: eng-gem
- source_languages: eng
- target_languages: gem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem']
- src_constituents: {'eng'}
- tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: gem
- short_pair: en-gem
- chrF2_score: 0.6409999999999999
- bleu: 46.5
- brevity_penalty: 0.9790000000000001
- ref_len: 73328.0
- src_name: English
- tgt_name: Germanic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: gem
- prefer_old: False
- long_pair: eng-gem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-gv | 75304df76c1de9f4a2502e13edefe5e83b60808b | 2021-09-09T21:35:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"gv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-gv | 25 | null | transformers | 7,622 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-gv
* source languages: en
* target languages: gv
* OPUS readme: [en-gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.en.gv | 70.1 | 0.885 |
|
Helsinki-NLP/opus-mt-es-fi | 418cdb680fcf1499ec9f72e0eece03ea67322e4c | 2021-09-09T21:42:19.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-fi | 25 | null | transformers | 7,623 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-fi
* source languages: es
* target languages: fi
* OPUS readme: [es-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.fi | 44.4 | 0.672 |
|
Helsinki-NLP/opus-mt-es-mk | 0313f02ea08ecfcb2abb2c21696918e0d37e2eed | 2021-01-18T08:26:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"mk",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-mk | 25 | null | transformers | 7,624 | ---
language:
- es
- mk
tags:
- translation
license: apache-2.0
---
### spa-mkd
* source group: Spanish
* target group: Macedonian
* OPUS readme: [spa-mkd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-mkd/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): mkd
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.mkd | 48.2 | 0.681 |
### System Info:
- hf_name: spa-mkd
- source_languages: spa
- target_languages: mkd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-mkd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'mk']
- src_constituents: {'spa'}
- tgt_constituents: {'mkd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: mkd
- short_pair: es-mk
- chrF2_score: 0.6809999999999999
- bleu: 48.2
- brevity_penalty: 1.0
- ref_len: 1073.0
- src_name: Spanish
- tgt_name: Macedonian
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: mk
- prefer_old: False
- long_pair: spa-mkd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ho-en | 30f225f4db385984a9c95e468faaeb54890e606c | 2021-09-09T22:10:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ho",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ho-en | 25 | null | transformers | 7,625 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ho-en
* source languages: ho
* target languages: en
* OPUS readme: [ho-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ho-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ho.en | 26.8 | 0.428 |
|
Helsinki-NLP/opus-mt-hr-fi | 0ad78408cbbecab0c12a5f4062917301dcb7aff9 | 2021-09-09T22:10:17.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hr",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-hr-fi | 25 | null | transformers | 7,626 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-hr-fi
* source languages: hr
* target languages: fi
* OPUS readme: [hr-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.fi | 25.0 | 0.519 |
|
Helsinki-NLP/opus-mt-lua-en | 8c17410d2f28979693fc972266320bb3db646712 | 2021-09-10T13:56:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"lua",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-lua-en | 25 | null | transformers | 7,627 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-en
* source languages: lua
* target languages: en
* OPUS readme: [lua-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.en | 34.4 | 0.502 |
|
Helsinki-NLP/opus-mt-luo-en | 7c29e33c07a0a88ec5b54b6df68ac190113147a1 | 2021-09-10T13:56:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"luo",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-luo-en | 25 | null | transformers | 7,628 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-luo-en
* source languages: luo
* target languages: en
* OPUS readme: [luo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/luo-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.luo.en | 29.1 | 0.452 |
|
Helsinki-NLP/opus-mt-pag-en | bf41e954aa08a387ae6b72e96d5d7f0bd4e630b0 | 2021-09-10T14:00:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"pag",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-pag-en | 25 | null | transformers | 7,629 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pag-en
* source languages: pag
* target languages: en
* OPUS readme: [pag-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.en | 42.4 | 0.580 |
|
Helsinki-NLP/opus-mt-run-en | 28b9926db36126b5833968eea417490d5e4d70a1 | 2021-09-10T14:02:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"run",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-run-en | 25 | null | transformers | 7,630 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-run-en
* source languages: run
* target languages: en
* OPUS readme: [run-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.en | 42.7 | 0.583 |
|
Helsinki-NLP/opus-mt-sk-sv | 003fa8e1c1a935225541bcacf43b5bc1e0f5870b | 2021-09-10T14:03:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sk",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sk-sv | 25 | null | transformers | 7,631 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sk-sv
* source languages: sk
* target languages: sv
* OPUS readme: [sk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.sv | 33.1 | 0.544 |
|
Helsinki-NLP/opus-mt-tw-fi | d7a5caa6848a5705a603e56ae16f2853a68ab5c1 | 2021-09-11T10:50:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tw",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tw-fi | 25 | null | transformers | 7,632 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tw-fi
* source languages: tw
* target languages: fi
* OPUS readme: [tw-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.fi | 25.6 | 0.488 |
|
Intel/bert-base-uncased-sparse-70-unstructured | 49d5ae78de4226eb67c37d7b119786732bd6a364 | 2021-05-24T12:42:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"en",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Intel | null | Intel/bert-base-uncased-sparse-70-unstructured | 25 | null | transformers | 7,633 | ---
language: en
---
# Sparse BERT base model (uncased)
Pretrained model pruned to 70% sparsity.
The model is a pruned version of the [BERT base model](https://huggingface.co/bert-base-uncased).
## Intended Use
The model can be used for fine-tuning on downstream tasks with the sparsity already embedded in the model.
To preserve the sparsity, a mask should be applied to each sparse weight, blocking the optimizer from updating the zeros; one way to do this is sketched below.
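A minimal PyTorch sketch of such masking (an illustration of the general technique, not necessarily Intel's exact procedure):

```python
import torch

def make_sparsity_masks(model):
    # Record which weights are already zero after pruning.
    return {n: (p != 0).float() for n, p in model.named_parameters() if "weight" in n}

def reapply_sparsity_masks(model, masks):
    # Call after each optimizer.step() so pruned weights stay at zero.
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])
```
|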
ItelAi/Chatbot | af2a3bc7dfaf094f1ff1d4a6574c5542eb3c2194 | 2021-07-20T01:27:08.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ItelAi | null | ItelAi/Chatbot | 25 | null | transformers | 7,634 | Entry not found |
JorisCos/ConvTasNet_Libri2Mix_sepclean_8k | fe2524d2ab745ab0f235804d57155a7e7cfe10ae | 2021-09-23T15:48:56.000Z | [
"pytorch",
"dataset:Libri2Mix",
"dataset:sep_clean",
"asteroid",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | JorisCos | null | JorisCos/ConvTasNet_Libri2Mix_sepclean_8k | 25 | null | asteroid | 7,635 | ---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- Libri2Mix
- sep_clean
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_8k`
Imported from [Zenodo](https://zenodo.org/record/3873572#.X9M69cLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 2
```
Results:
On Libri2Mix min test set:
```yaml
si_sdr: 14.764543634468069
si_sdr_imp: 14.764029375607246
sdr: 15.29337970745095
sdr_imp: 15.114146605113111
sir: 24.092904661115366
sir_imp: 23.913669683141528
sar: 16.06055906916849
sar_imp: -51.980784441287454
stoi: 0.9311142440593033
stoi_imp: 0.21817376142710482
```
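The card shows no loading code; a minimal sketch using Asteroid's Hub integration (the mixture file path is a placeholder, and `separate` is expected to write the estimated sources next to it):

```python
# !pip install asteroid
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri2Mix_sepclean_8k")
# Separates a local 8 kHz two-speaker mixture into estimated source files on disk.
model.separate("mixture.wav")
```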
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. |
KakoSi/Smolmm3 | 53dc695c4bf9a794269a5b3b123adfe9f56e8c0a | 2021-07-17T08:15:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | KakoSi | null | KakoSi/Smolmm3 | 25 | null | transformers | 7,636 | ---
tags:
- conversational
---
# My awesome model |
LeBenchmark/wav2vec2-FR-2.6K-base | 7353dcd1ba8eb09a9b0726c68b2464222086b3c2 | 2021-11-30T04:23:14.000Z | [
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | LeBenchmark | null | LeBenchmark/wav2vec2-FR-2.6K-base | 25 | null | transformers | 7,637 | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark: wav2vec2 base model trained on 2.6K hours of French speech (no spontaneous speech)
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark](https://openreview.net/pdf?id=TSvj5dmuSd)
## Model and data descriptions
We release several models that can be found under our HuggingFace organization. Two different wav2vec2 architectures, *Base* and *Large*, are coupled with our small (1K), medium (3K), and large (7K) corpora. A larger one should come later. In short:
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can then be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
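Independently of SpeechBrain, the checkpoint can also be loaded with `transformers` as a frozen feature extractor; a minimal sketch (the one-second dummy waveform and the presence of a preprocessor config in the repo are assumptions):

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "LeBenchmark/wav2vec2-FR-2.6K-base"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name)

speech = torch.zeros(16000).numpy()  # one second of silence at 16 kHz, for shape only
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (1, frames, hidden_size)
```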
## Referencing LeBenchmark
```
@article{Evain2021LeBenchmarkAR,
title={LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech},
author={Sol{\`e}ne Evain and Ha Nguyen and Hang Le and Marcely Zanon Boito and Salima Mdhaffar and Sina Alisamir and Ziyi Tong and N. Tomashenko and Marco Dinarelli and Titouan Parcollet and A. Allauzen and Y. Est{\`e}ve and B. Lecouteux and F. Portet and S. Rossato and F. Ringeval and D. Schwab and L. Besacier},
journal={ArXiv},
year={2021},
volume={abs/2104.11462}
}
```
|
Norod78/hebrew_poetry-gpt_neo-small | 357dac0c1ec3222f4603af5d6b29e86f4bd3c7fd | 2022-07-04T07:24:46.000Z | [
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"he",
"transformers",
"license:mit"
] | text-generation | false | Norod78 | null | Norod78/hebrew_poetry-gpt_neo-small | 25 | null | transformers | 7,638 | ---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "פעם אחת לפני שנ"
- text: "הים כחול ואני ח"
- text: "שם היצירה:"
- text: "כשהמכונות"
license: mit
---
# hebrew_poetry-gpt_neo-small
Hebrew poetry text generation model, fine-tuned from [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo).
Fine-tuning was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen).
## Datasets
1. Text from [New stage](http://stage.co.il/)
2. A dataset containing Hebrew lyrics
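No usage code is provided; a minimal generation sketch using one of the widget prompts above (the sampling settings are assumptions):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Norod78/hebrew_poetry-gpt_neo-small")
generator("שם היצירה:", max_new_tokens=50, do_sample=True, top_p=0.95)
```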
|
SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask_finetune | 08bb81ae32a9404314c0a384b4e6fdf64c4f3b6c | 2021-06-23T04:47:08.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask_finetune | 25 | 1 | transformers | 7,639 | ---
tags:
- summarization
widget:
- text: "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
---
# CodeTrans model for code documentation generation python
Pretrained model on programming language python using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the python function/method.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_python_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/python/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results:
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_multitask_en_sv | 1e83e8b6c28ebbf214e43e5372b3c51f42e3fd1f | 2021-06-23T11:00:55.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"English Swedish",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation English Swedish model",
"autotrain_compatible"
] | text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_multitask_en_sv | 25 | null | transformers | 7,640 |
---
language: English Swedish
tags:
- translation English Swedish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "whereas enlargement to Bulgaria and Romania should be effective in 2007,"
---
# legal_t5_small_multitask_en_sv model
Model for translating legal text from English to Swedish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model was trained in parallel on three parallel corpora (JRC-Acquis, Europarl and DCEP) covering 42 language pairs, along with an unsupervised task in which the model performed masked-language-model prediction.
## Model description
No separate pretraining is involved in the case of the legal_t5_small_multitask_en_sv model; instead, the unsupervised task is added to all of the translation tasks
to realize the multitask learning scenario.
## Intended uses & limitations
The model could be used for translation of legal texts from English to Swedish.
### How to use
Here is how to use this model to translate legal text from English to Swedish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_multitask_en_sv"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_multitask_en_sv", do_lower_case=False,
skip_special_tokens=True),
device=0
)
en_text = "whereas enlargement to Bulgaria and Romania should be effective in 2007,"
pipeline([en_text], max_length=512)
```
## Training data
The legal_t5_small_multitask_en_sv model (covering the supervised task, which involved only the corresponding language pair, as well as the unsupervised task, for which the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 9 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (via byte-pair encoding), which is used with this model.
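A minimal sketch of how such a vocabulary could be built with SentencePiece (the input file name, vocabulary size and options are assumptions, not the exact settings used here):
```python
# Illustrative only: train a unigram SentencePiece vocabulary on parallel text.
import sentencepiece as spm  # pip install sentencepiece

spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",  # hypothetical file, one sentence per line
    model_prefix="legal_t5_spm",
    vocab_size=32_000,
    model_type="unigram",
)
sp = spm.SentencePieceProcessor(model_file="legal_t5_spm.model")
print(sp.encode("whereas enlargement to Bulgaria and Romania should be effective in 2007,", out_type=str))
```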
### Pretraining
## Evaluation results
When used on the translation test dataset, the model achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_multitask_en_sv | 47.968|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SophieTr/fine-tune-Pegasus-large | 2c7c8250e812074b8859e6775e4852cb8944e61a | 2022-01-26T07:56:10.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | SophieTr | null | SophieTr/fine-tune-Pegasus-large | 25 | 1 | transformers | 7,641 | ---
tags:
- generated_from_trainer
model-index:
- name: fine-tune-Pegasus-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-Pegasus-large
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
TalTechNLP/voxlingua107-epaca-tdnn-ce | 2b36d180fc2664918fae6611217cce975391b71c | 2021-11-04T13:37:25.000Z | [
"multilingual",
"dataset:VoxLingua107",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"license:apache-2.0"
] | audio-classification | false | TalTechNLP | null | TalTechNLP/voxlingua107-epaca-tdnn-ce | 25 | 2 | speechbrain | 7,642 | ---
language: multilingual
thumbnail:
tags:
- audio-classification
- speechbrain
- embeddings
- Language
- Identification
- pytorch
- ECAPA-TDNN
- TDNN
- VoxLingua107
license: "apache-2.0"
datasets:
- VoxLingua107
metrics:
- Accuracy
widget:
- example_title: English Sample
src: https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac
---
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model (CE)
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses
more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training.
We observed that this improved the performance of extracted utterance embeddings for downstream tasks.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data (a sketch of this is shown after the usage example below)
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn-ce", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[-2.8646e+01, -3.0346e+01, -2.0748e+01, -2.9562e+01, -2.2187e+01,
-3.2668e+01, -3.6677e+01, -3.3573e+01, -3.2545e+01, -2.4365e+01,
-2.4688e+01, -3.1171e+01, -2.7743e+01, -2.9918e+01, -2.4770e+01,
-3.2250e+01, -2.4727e+01, -2.6087e+01, -2.1870e+01, -3.2821e+01,
-2.2128e+01, -2.2822e+01, -3.0888e+01, -3.3564e+01, -2.9906e+01,
-2.2392e+01, -2.5573e+01, -2.6443e+01, -3.2429e+01, -3.2652e+01,
-3.0030e+01, -2.4607e+01, -2.2967e+01, -2.4396e+01, -2.8578e+01,
-2.5153e+01, -2.8475e+01, -2.6409e+01, -2.5230e+01, -2.7957e+01,
-2.6298e+01, -2.3609e+01, -2.5863e+01, -2.8225e+01, -2.7225e+01,
-3.0486e+01, -2.1185e+01, -2.7938e+01, -3.3155e+01, -1.9076e+01,
-2.9181e+01, -2.2160e+01, -1.8352e+01, -2.5866e+01, -3.3636e+01,
-4.2016e+00, -3.1581e+01, -3.1894e+01, -2.7834e+01, -2.5429e+01,
-3.2235e+01, -3.2280e+01, -2.8786e+01, -2.3366e+01, -2.6047e+01,
-2.2075e+01, -2.3770e+01, -2.2518e+01, -2.8101e+01, -2.5745e+01,
-2.6441e+01, -2.9822e+01, -2.7109e+01, -3.0225e+01, -2.4566e+01,
-2.9268e+01, -2.7651e+01, -3.4221e+01, -2.9026e+01, -2.6009e+01,
-3.1968e+01, -3.1747e+01, -2.8156e+01, -2.9025e+01, -2.7756e+01,
-2.8052e+01, -2.9341e+01, -2.8806e+01, -2.1636e+01, -2.3992e+01,
-2.3794e+01, -3.3743e+01, -2.8332e+01, -2.7465e+01, -1.5085e-02,
-2.9094e+01, -2.1444e+01, -2.9780e+01, -3.6046e+01, -3.7401e+01,
-3.0888e+01, -3.3172e+01, -1.8931e+01, -2.2679e+01, -3.0225e+01,
-2.4995e+01, -2.1028e+01]]), tensor([-0.0151]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as log-likelihoods that
# the given utterance belongs to the given language (i.e., the larger the better)
# The linear-scale likelihood can be retrieved using the following:
print(prediction[1].exp())
tensor([0.9850])
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
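Building on the embedding extractor above (reusing the `language_id` object from the previous snippet), here is a hedged sketch of the second use case — training a small language classifier on your own labeled audio. The file names, labels and the scikit-learn classifier are hypothetical, not part of this model:
```python
# Illustrative sketch: fit a simple language classifier on utterance embeddings.
# Reuses `language_id` from the snippet above; wav files and labels are hypothetical.
from sklearn.linear_model import LogisticRegression

wav_paths = ["clip_estonian.wav", "clip_finnish.wav"]
labels = ["et", "fi"]

embeddings = []
for path in wav_paths:
    signal = language_id.load_audio(path)
    emb = language_id.encode_batch(signal)  # shape: (1, 1, 256)
    embeddings.append(emb.squeeze().detach().cpu().numpy())

clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
print(clf.predict(embeddings))
```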
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Its accuracy on smaller languages is probably quite limited
- It probably works worse on female speech than on male speech (because the YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- It probably doesn't work well on children's speech or on speech from persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 6.7% on the VoxLingua107 development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
|
Vaibhavbrkn/grammer_classiffication | 022d5425a2c542baae2e8ec917d956ae19245684 | 2021-10-23T06:20:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Vaibhavbrkn | null | Vaibhavbrkn/grammer_classiffication | 25 | null | transformers | 7,643 | Entry not found |
abhi1nandy2/EManuals_BERT | 2e220e9b39d59b3739bdf5f62f0f1fe7634a84fe | 2022-01-17T17:12:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"English",
"transformers",
"EManuals",
"customer support",
"QA",
"autotrain_compatible"
] | fill-mask | false | abhi1nandy2 | null | abhi1nandy2/EManuals_BERT | 25 | null | transformers | 7,644 | ---
language:
- English
tags:
- EManuals
- customer support
- QA
- bert
---
Refer to https://aclanthology.org/2021.findings-emnlp.392/ for the paper and https://sites.google.com/view/emanualqa/home for the project website
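Since this is a BERT-style masked language model, a minimal usage sketch (the example sentence is made up for illustration) could look like:
```python
# Minimal sketch: masked-token prediction with EManuals_BERT.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="abhi1nandy2/EManuals_BERT")
print(fill_mask("Press the power [MASK] to turn on the device."))  # illustrative sentence
```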
## Citation
Please cite the work if you would like to use it.
```
@inproceedings{nandy-etal-2021-question-answering,
title = "Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based {QA} Framework",
author = "Nandy, Abhilash and
Sharma, Soumya and
Maddhashiya, Shubham and
Sachdeva, Kapil and
Goyal, Pawan and
Ganguly, NIloy",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.392",
doi = "10.18653/v1/2021.findings-emnlp.392",
pages = "4600--4609",
abstract = "Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper, we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from Community Question Answering Forum pertaining to E-manuals etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40{\%} in ROUGE-L F1 scores over most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.",
}
``` |
addy88/t5-grammar-correction | c33cdd81a911fce70375248118a7e617387fb4ad | 2022-01-17T12:09:14.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | addy88 | null | addy88/t5-grammar-correction | 25 | 1 | transformers | 7,645 | ### How to use
Here is how to use this model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("addy88/t5-grammar-correction")
model = AutoModelForSeq2SeqLM.from_pretrained("addy88/t5-grammar-correction")
input_ids = tokenizer('grammar: This sentences has has bads grammar.', return_tensors='pt').input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
allenai/hvila-row-layoutlm-finetuned-grotoap2 | cc7b102ef8cf72c08af41174b54e0a8edff9b0b6 | 2021-09-27T23:01:31.000Z | [
"pytorch",
"hierarchical_model",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | allenai | null | allenai/hvila-row-layoutlm-finetuned-grotoap2 | 25 | null | transformers | 7,646 | Entry not found |
amoux/roberta-cord19-1M7k | 9420b723c2a278b030b94d5ba163c1a2c9b3c3ed | 2021-05-20T14:07:22.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"english",
"transformers",
"autotrain_compatible"
] | fill-mask | false | amoux | null | amoux/roberta-cord19-1M7k | 25 | null | transformers | 7,647 | ---
language: english
thumbnail: https://github.githubassets.com/images/icons/emoji/unicode/2695.png
widget:
- text: "Lung infiltrates cause significant morbidity and mortality in immunocompromised <mask>."
- text: "Tuberculosis appears to be an important <mask> in endemic regions especially in the non-HIV, non-hematologic malignancy group."
- text: "For vector-transmitted diseases this places huge significance on vector mortality rates as vectors usually don't <mask> an infection and instead remain infectious for life."
- text: "The lung lesions were characterized by bronchointerstitial pneumonia with accumulation of neutrophils, macrophages and necrotic debris in <mask> and bronchiolar lumens and peribronchiolar/perivascular infiltration of inflammatory cells."
---
# roberta-cord19-1M7k

> This model is based on ***RoBERTa*** and was pre-trained on 1.7 million sentences.
The training corpus was papers taken from *Semantic Scholar*'s CORD-19 historical releases. Corpus size is `13k` papers, `~60M` tokens. I used the full-text `"body_text"` of the papers in training (details below).
#### Usage
```python
from transformers import pipeline
from transformers import RobertaTokenizerFast, RobertaForMaskedLM
tokenizer = RobertaTokenizerFast.from_pretrained("amoux/roberta-cord19-1M7k")
model = RobertaForMaskedLM.from_pretrained("amoux/roberta-cord19-1M7k")
fillmask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
text = "Lung infiltrates cause significant morbidity and mortality in immunocompromised patients."
masked_text = text.replace("patients", tokenizer.mask_token)
predictions = fillmask(masked_text, top_k=3)
```
- Predicted tokens
```bash
[{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised patients.</s>',
'score': 0.6273621320724487,
'token': 660,
'token_str': 'Ġpatients'},
{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised individuals.</s>',
'score': 0.19800445437431335,
'token': 1868,
'token_str': 'Ġindividuals'},
{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised animals.</s>',
'score': 0.022069649770855904,
'token': 1471,
'token_str': 'Ġanimals'}]
```
## Dataset
- About
- name: *CORD-19: The Covid-19 Open Research Dataset*
- date: *2020-03-18*
- md5 | sha1: `a36fe181 | 8fbea927`
- text-key: `body_text`
- subsets (*total*: `13,202`):
- *biorxiv_medrxiv*: `803`
- *comm_use_subset*: `9000`
- *pmc_custom_license*: `1426`
- *noncomm_use_subset*: `1973`
- Splits (*ratio: 0.9*)
- sentences used for training: `1,687,124`
- sentences used for evaluation: `187,459`
- Total training steps: `210,890`
- Total evaluation steps: `23,433`
## Parameters
- Data
- block_size: `256`
- Training
- per_device_train_batch_size: `8`
- per_device_eval_batch_size: `8`
- gradient_accumulation_steps: `2`
- learning_rate: `5e-5`
- num_train_epochs: `2`
- fp16: `True`
- fp16_opt_level: `'01'`
- seed: `42`
- Output
- global_step: `210890`
- training_loss: `3.5964575726682155`
## Evaluation
- Perplexity: `17.469366079957922`
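Perplexity is simply the exponential of the mean cross-entropy loss on the evaluation sentences; a minimal sketch (the loss value below is back-derived from the reported perplexity, for illustration only):
```python
import math

eval_loss = 2.86  # illustrative mean evaluation loss
print(math.exp(eval_loss))  # ~17.46, the scale of the reported perplexity
```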
### Citation
> Allen Institute CORD-19 [Historical Releases](https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases.html)
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
``` |
andi611/bert-base-uncased-ner-conll2003 | efdf72fbab7a8c918dca934caeff9d44b425e163 | 2021-07-07T09:31:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | andi611 | null | andi611/bert-base-uncased-ner-conll2003 | 25 | null | transformers | 7,648 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: bert-base-uncased-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.19881805328292054
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1258
- Precision: 0.0269
- Recall: 0.1379
- F1: 0.0451
- Accuracy: 0.1988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 4 | 2.1296 | 0.0270 | 0.1389 | 0.0452 | 0.1942 |
| No log | 2.0 | 8 | 2.1258 | 0.0269 | 0.1379 | 0.0451 | 0.1988 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
anechaev/ru_med_gpt3sm_based_on_gpt2 | 80126d892080920c38889f36aa333e730efb3361 | 2022-02-08T12:31:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"transformers",
"PyTorch",
"Transformers",
"license:mit"
] | text-generation | false | anechaev | null | anechaev/ru_med_gpt3sm_based_on_gpt2 | 25 | null | transformers | 7,649 | ---
language:
- ru
tags:
- PyTorch
- Transformers
license: mit
---
# Medical History Model based on ruGPT2 by @sberbank-ai
A simple model for helping medical staff complete patients' medical histories.
The model builds on the pretrained [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2)
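A minimal usage sketch (the Russian prompt and generation settings are illustrative):
```python
# Illustrative sketch: autocomplete a medical-history fragment.
from transformers import pipeline

generator = pipeline("text-generation", model="anechaev/ru_med_gpt3sm_based_on_gpt2")
# "Пациент жалуется на" = "The patient complains of"
print(generator("Пациент жалуется на", max_length=50, num_return_sequences=1))
```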
|
anirudh21/xlnet-base-cased-finetuned-rte | e0554de5feda27c36f8d688b66c626fd6efccd0e | 2022-01-14T07:04:23.000Z | [
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | anirudh21 | null | anirudh21/xlnet-base-cased-finetuned-rte | 25 | null | transformers | 7,650 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: xlnet-base-cased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6895306859205776
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-rte
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0656
- Accuracy: 0.6895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.7007 | 0.4874 |
| No log | 2.0 | 312 | 0.6289 | 0.6751 |
| No log | 3.0 | 468 | 0.7020 | 0.6606 |
| 0.6146 | 4.0 | 624 | 1.0573 | 0.6570 |
| 0.6146 | 5.0 | 780 | 1.0656 | 0.6895 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
anton-l/sew-mid-100k-ft-common-language | f5d6e3838a21f0727728bbab8b8cdbc72b08d9f6 | 2021-10-28T10:52:41.000Z | [
"pytorch",
"tensorboard",
"sew",
"audio-classification",
"dataset:common_language",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | anton-l | null | anton-l/sew-mid-100k-ft-common-language | 25 | null | transformers | 7,651 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: sew-mid-100k-ft-common-language
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-mid-100k-ft-common-language
This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1189
- Accuracy: 0.3842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.608 | 1.0 | 173 | 3.7266 | 0.0540 |
| 3.1298 | 2.0 | 346 | 3.2180 | 0.1654 |
| 2.8481 | 3.0 | 519 | 2.9270 | 0.2019 |
| 2.648 | 4.0 | 692 | 2.6991 | 0.2619 |
| 2.5 | 5.0 | 865 | 2.5236 | 0.3004 |
| 2.2578 | 6.0 | 1038 | 2.4019 | 0.3212 |
| 2.2782 | 7.0 | 1211 | 2.1698 | 0.3658 |
| 2.1665 | 8.0 | 1384 | 2.1976 | 0.3631 |
| 2.1626 | 9.0 | 1557 | 2.1473 | 0.3791 |
| 2.1514 | 10.0 | 1730 | 2.1189 | 0.3842 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tner/xlm-roberta-large-uncased-mit-movie-trivia | f76e75819f9aa762b8d99c4b1c90f48e19bf0fc4 | 2021-02-13T00:11:57.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-large-uncased-mit-movie-trivia | 25 | null | transformers | 7,652 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-movie-trivia")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-movie-trivia")
# Illustrative inference on a single sentence; label names depend on the MIT-Movie tag set
inputs = tokenizer("what movie did chris farley star in", return_tensors="pt")
predicted_ids = model(**inputs).logits.argmax(dim=-1)
``` |
astarostap/distilbert-cased-antisemitic-tweets | 19a14e75df21357ef4393db5b91fe029006552e8 | 2021-02-08T15:03:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | astarostap | null | astarostap/distilbert-cased-antisemitic-tweets | 25 | null | transformers | 7,653 | ---
license: mit
widget:
- text: "Jews run the world."
---
This model takes a tweet with the word "jew" in it, and determines if it's antisemitic.
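A minimal usage sketch, using the widget example above:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="astarostap/distilbert-cased-antisemitic-tweets")
print(classifier("Jews run the world."))
```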
*Training data:*
This model was trained on 4k tweets, where ~50% were labeled as antisemitic.
I labeled them myself based on personal experience and knowledge about common antisemitic tropes.
*Note:*
The goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts.
Please keep in mind that I'm not an expert on antisemitism or hatespeech.
Whether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech.
If you would like to collaborate on antisemitism detection, please feel free to contact me at [email protected]
This model is not ready for production, it needs more evaluation and more training data.
|
bertin-project/bertin-base-random-exp-512seqlen | e37776eaf95c689c2741bd1eaf3d878177446c92 | 2021-09-23T13:41:57.000Z | [
"pytorch",
"jax",
"tensorboard",
"joblib",
"roberta",
"fill-mask",
"es",
"transformers",
"spanish",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | bertin-project | null | bertin-project/bertin-base-random-exp-512seqlen | 25 | null | transformers | 7,654 | ---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
pipeline_tag: fill-mask
widget:
- text: Fui a la librería a comprar un <mask>.
---
This is a **RoBERTa-base** model trained from scratch in Spanish.
The training dataset is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled ) subsampling documents to a total of about 50 million examples. Sampling is random.
This model continued training from [sequence length 128](https://huggingface.co/bertin-project/bertin-base-random), using 20,000 additional steps at sequence length 512.
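A minimal usage sketch, using the widget sentence above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bertin-project/bertin-base-random-exp-512seqlen")
print(fill_mask("Fui a la librería a comprar un <mask>."))
```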
Please see our main [card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for more information.
This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team members
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo)) |
boris/xlsr-en-punctuation | b84241b8a6b9369e23df651d7377bb9dc9aec475 | 2021-07-05T23:33:26.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | boris | null | boris/xlsr-en-punctuation | 25 | 2 | transformers | 7,655 | ---
language: en
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: English XLSR Wav2Vec2 Large 53 with punctuation
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice en
type: common_voice
args: en
metrics:
- name: Test WER
type: wer
value: 1.0
---
# Wav2Vec2-Large-XLSR-53-English
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "en", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("boris/xlsr-en-punctuation")
model = Wav2Vec2ForCTC.from_pretrained("boris/xlsr-en-punctuation")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the English test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "en", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("boris/xlsr-en-punctuation")
model = Wav2Vec2ForCTC.from_pretrained("boris/xlsr-en-punctuation")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on the test data and collect predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: XX.XX %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training, as well as ... and ... .
The script used for training can be found [here](...).
|
bs-modeling-metadata/html-metadata-exp1-subexp3-1898197 | a6ee6dea3fa030453b4dfe7a0e0c82d96cf99438 | 2021-11-13T09:28:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | bs-modeling-metadata | null | bs-modeling-metadata/html-metadata-exp1-subexp3-1898197 | 25 | 1 | transformers | 7,656 | ---
widget:
- text: " htmlOn ||| <h1"
---
# Work In Progress
# How to use?
To generate text with HTML, the sentence must start with ` htmlOn |||` (note the space at the beginning 😉). To generate normal text, you don't need to add anything.
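A minimal sketch of both modes (the prompts mirror the widget above; generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bs-modeling-metadata/html-metadata-exp1-subexp3-1898197")
# HTML mode: note the leading space before "htmlOn"
print(generator(" htmlOn ||| <h1", max_length=40))
# Plain-text mode: no prefix needed
print(generator("In economics, market power is", max_length=40))
```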
# Training details
We continued the pre-training of [gpt2](https://huggingface.co/gpt2).
Dataset:[Natural_Questions_HTML_reduced_all](https://huggingface.co/datasets/SaulLu/Natural_Questions_HTML_reduced_all)
50% of the examples in the training data contained `h1`, `h2`, ..., `h6` and `p` HTML tags with only their `id` and `class` attributes. 50% of the examples were just plain text.
Training example with metadata:
```
htmlOn ||| <h1 id:firstHeading class:firstHeading>Market power</h1>
From Wikipedia, the free encyclopedia
Jump to: navigation, search
Competition law
Basic concepts
History of competition law
Monopoly
Coercive monopoly
Natural monopoly
Barriers to entry
Herfindahl–Hirschman Index
Market concentration
Market power
SSNIP test
Relevant market
Merger control
Anti-competitive practices
Monopolization
Collusion
Formation of cartels
Price fixing
Bid rigging
Product bundling and tying
Refusal to deal
Group boycott
Essential facilities
Exclusive dealing
Dividing territories
Conscious parallelism
Predatory pricing
Misuse of patents and copyrights
Enforcement authorities and organizations
International Competition Network
List of competition regulators
v
t
e
<p>In economics and particularly in industrial organization, market power is the ability of a firm to profitably raise the market price of a good or service over marginal cost. In perfectly competitive markets, market participants have no market power. A firm with total market power can raise prices without losing any customers to competitors. Market participants that have market power are therefore sometimes referred to as "price makers" or "price setters", while those without are sometimes called "price takers". Significant market power occurs when prices exceed marginal cost and long run average cost, so the firm makes profit.</p>
<p>A firm with market power has the ability to individually affect either the total quantity or the prevailing price in the market. Price makers face a downward-sloping demand curve, such that price increases lead to a lower quantity demanded. The decrease in supply as a result of the exercise of market power creates an economic deadweight loss which is often viewed as socially undesirable. As a result, many countries have anti-trust or other legislation intended to limit the ability of firms to accrue market power. Such legislation often regulates mergers and sometimes introduces a judicial power to compel divestiture.</p>
<p>A firm usually has market power by virtue of controlling a large portion of the market. In extreme cases—monopoly and monopsony—the firm controls the entire market. However, market size alone is not the only indicator of market power. Highly concentrated markets may be contestable if there are no barriers to entry or exit, limiting the incumbent firm's ability to raise its price above competitive levels.</p>
<p>Market power gives firms the ability to engage in unilateral anti-competitive behavior.[1] Some of the behaviours that firms with market power are accused of engaging in include predatory pricing, product tying, and creation of overcapacity or other barriers to entry. If no individual participant in the market has significant market power, then anti-competitive behavior can take place only through collusion, or the exercise of a group of participants' collective market power.</p>
<p>The Lerner index and Herfindahl index may be used to measure market power.</p>
<p></p><h2>Contents</h2>
[hide]
1 Oligopoly
2 Monopoly power
3 Source
4 Measurement
5 Elasticity of demand
6 Nobel Memorial Prize
7 See also
8 References
9 Further references
<p></p><h2>Oligopoly[edit]</h2>
<p>When several firms control a significant share of market sales, the resulting market structure is called an oligopoly or oligopsony. An oligopoly may engage in collusion, either tacit or overt, and thereby exercise market power. A group of firms that explicitly agree to affect market price or output is called a cartel.</p>
<h2>Monopoly power[edit]</h2>
<p>Monopoly power is an example of market failure which occurs when one or more of the participants has the ability to influence the price or other outcomes in some general or specialized market. The most commonly discussed form of market power is that of a monopoly, but other forms such as monopsony, and more moderate versions of these two extremes, exist.</p>
<p>A well-known example of monopolistic market power is Microsoft's market share in PC operating systems. The United States v. Microsoft case dealt with an allegation that Microsoft illegally exercised its market power by bundling its web browser with its operating system. In this respect, the notion of dominance and dominant position in EU Antitrust Law is a strictly related aspect.[2]</p>
<h2>Source[edit]</h2>
<p>A monopoly can raise prices and retain customers because the monopoly has no competitors. If a customer has no other place to go to obtain the goods or services, they either pay the increased price or do without.[3] Thus the key to market power is to preclude competition through high barriers of entry. Barriers to entry that are significant sources
```
|
cahya/wav2vec2-base-turkish-artificial-cv | 24eecc3f4587cc456728e2480cd7f469f16c2dda | 2022-02-01T19:34:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-base-turkish-artificial-cv | 25 | 2 | transformers | 7,657 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Base Turkish by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 13.70
---
# Wav2Vec2-Large-XLSR-Turkish
This is the model card for Wav2Vec2-Base-Turkish-Artificial-CV, a
[cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial)
model fine-tuned on the [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model on the test data and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 13.70 %
## Training
The Common Voice `train`, `validation`, `other` and `invalidated` datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
cardiffnlp/twitter-roberta-base-stance-hillary | 62c5ae9d789d4de334e5b3f1b8b85d152dfaacde | 2021-05-20T15:12:15.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-stance-hillary | 25 | null | transformers | 7,658 | |
csarron/ViLT-VQAv2 | 87370f049b2fcc42db1b6f93b560020dde1cb4f6 | 2021-12-16T20:00:34.000Z | [
"pytorch",
"vilt",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | csarron | null | csarron/ViLT-VQAv2 | 25 | null | transformers | 7,659 | Entry not found |
digit82/dialog-sbert-base | cd2aff78addcb9283c865919ca82941769f35b46 | 2021-10-15T08:46:04.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | digit82 | null | digit82/dialog-sbert-base | 25 | null | transformers | 7,660 | Entry not found |
federicopascual/finetuning-sentiment-model-3000-samples-testcopy | fa24172eb4eed19b4732ed61e3ad6ca2952d85a9 | 2022-01-04T14:34:49.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | federicopascual | null | federicopascual/finetuning-sentiment-model-3000-samples-testcopy | 25 | 1 | transformers | 7,661 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples-testcopy
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8761904761904761
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-testcopy
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3374
- Accuracy: 0.87
- F1: 0.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ghadeermobasher/BC5CDR-Chemical-Disease-balancedBioM-ELECTRA-Base-Discriminator | eefc00c6370aa09e516e7f3144594e5d66c04483 | 2022-01-22T23:14:13.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chemical-Disease-balancedBioM-ELECTRA-Base-Discriminator | 25 | null | transformers | 7,662 | Entry not found |
groar/gpt-neo-1.3B-finetuned-escape2 | 1955cc861abf3eeef407905200da2db228110e38 | 2022-02-13T20:59:30.000Z | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | groar | null | groar/gpt-neo-1.3B-finetuned-escape2 | 25 | null | transformers | 7,663 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-1.3B-finetuned-escape2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-1.3B-finetuned-escape2
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/angularocean | 52d038360c268fe51c60023263011ee756ac21f8 | 2021-05-21T19:01:01.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/angularocean | 25 | null | transformers | 7,664 | ---
language: en
thumbnail: https://www.huggingtweets.com/angularocean/1616713094074/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1220764691829608448/QWMxSgNV_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Angle of Ocean 🤖 AI Bot </div>
<div style="font-size: 15px">@angularocean bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@angularocean's tweets](https://twitter.com/angularocean).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2933 |
| Retweets | 843 |
| Short tweets | 430 |
| Tweets kept | 1660 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1q9wm9nt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angularocean's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1fr77sf3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1fr77sf3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/angularocean')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/degrassinocontx | f1f7b87d747fd7ecc66f4fd24c324e0c390fa060 | 2021-05-22T01:10:02.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/degrassinocontx | 25 | null | transformers | 7,665 | ---
language: en
thumbnail: https://www.huggingtweets.com/degrassinocontx/1614122429501/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1361151177455468548/mGKDi3dV_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Degrassi No Context 🤖 AI Bot </div>
<div style="font-size: 15px">@degrassinocontx bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@degrassinocontx's tweets](https://twitter.com/degrassinocontx).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 54 |
| Short tweets | 1504 |
| Tweets kept | 1687 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mu201mzi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @degrassinocontx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wxznhll) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wxznhll/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/degrassinocontx')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/downgrad3d | bcef1e3105d49f5bea9689944611e555d002d1d5 | 2021-05-22T02:08:56.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/downgrad3d | 25 | null | transformers | 7,666 | ---
language: en
thumbnail: https://www.huggingtweets.com/downgrad3d/1614303163871/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1363217665586835460/RU5F44Dj_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">daniel 🤖 AI Bot </div>
<div style="font-size: 15px">@downgrad3d bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@downgrad3d's tweets](https://twitter.com/downgrad3d).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 441 |
| Retweets | 138 |
| Short tweets | 82 |
| Tweets kept | 221 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6eqzlox6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @downgrad3d's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1fsmvsit) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1fsmvsit/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/downgrad3d')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/gwenvara_ | 8dda964ce1cfd74b1e023b3b0ae9c6c3b3dffd80 | 2021-05-22T06:23:47.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/gwenvara_ | 25 | null | transformers | 7,667 | ---
language: en
thumbnail: https://www.huggingtweets.com/gwenvara_/1616736053941/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364599664016691206/NVK2fuwS_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Anarcho-Gwendolism 🧬 🤖 AI Bot </div>
<div style="font-size: 15px">@gwenvara_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gwenvara_'s tweets](https://twitter.com/gwenvara_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3069 |
| Retweets | 1831 |
| Short tweets | 350 |
| Tweets kept | 888 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/p9ao8jnc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gwenvara_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/l9zed4di) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/l9zed4di/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gwenvara_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mariobrothblog | 4c8d81bbb5f035967ecedd306eb3eca89ad122de | 2021-05-22T13:22:25.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mariobrothblog | 25 | null | transformers | 7,668 | ---
language: en
thumbnail: https://www.huggingtweets.com/mariobrothblog/1614433919886/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/882866421822361601/IDcw7Vqa_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Supper Mario Broth 🤖 AI Bot </div>
<div style="font-size: 15px">@mariobrothblog bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@mariobrothblog's tweets](https://twitter.com/mariobrothblog).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2840 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 2840 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19c6osvl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mariobrothblog's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/327sfx1g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/327sfx1g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mariobrothblog')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/markiplier | b3154d1ba522b085d41b445973ac6a84c74f3f6d | 2022-06-07T22:46:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/markiplier | 25 | null | transformers | 7,669 | ---
language: en
thumbnail: http://www.huggingtweets.com/markiplier/1654641978193/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511102924310544387/j6E29xq6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mark</div>
<div style="text-align: center; font-size: 14px;">@markiplier</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mark.
| Data | Mark |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 304 |
| Short tweets | 388 |
| Tweets kept | 2538 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k0vje7m/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @markiplier's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6mne3h2w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6mne3h2w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/markiplier')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jonatasgrosman/wav2vec2-large-xlsr-53-finnish | 004a7893e332cc1fa9aa66d2c089ce6b2fd73365 | 2022-07-27T23:35:08.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/wav2vec2-large-xlsr-53-finnish | 25 | 1 | transformers | 7,670 | ---
language: fi
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Finnish by Jonatas Grosman
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 41.60
- name: Test CER
type: cer
value: 8.23
---
# Fine-tuned XLSR-53 large model for speech recognition in Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-finnish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fi"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-finnish"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| MYSTEERIMIES OLI OPPINUT MORAALINSA TARUISTA, ELOKUVISTA JA PELEISTÄ. | MYSTEERIMIES OLI OPPINUT MORALINSA TARUISTA ELOKUVISTA JA PELEISTÄ |
| ÄÄNESTIN MIETINNÖN PUOLESTA! | ÄÄNESTIN MIETINNÖN PUOLESTA |
| VAIN TUNTIA AIKAISEMMIN OLIMME MIEHENI KANSSA TUNTENEET SUURINTA ILOA. | PAIN TUNTIA AIKAISEMMIN OLIN MIEHENI KANSSA TUNTENEET SUURINTA ILAA |
| ENSIMMÄISELLE MIEHELLE SAI KOLME LASTA. | ENSIMMÄISELLE MIEHELLE SAI KOLME LASTA |
| ÄÄNESTIN MIETINNÖN PUOLESTA, SILLÄ POHJIMMILTAAN SIINÄ VASTUSTETAAN TÄTÄ SUUNTAUSTA. | ÄÄNESTIN MIETINNÖN PUOLESTA SILLÄ POHJIMMILTAAN SIINÄ VASTOTTETAAN TÄTÄ SUUNTAUSTA |
| TÄHDENLENTOJENKO VARALTA MINÄ SEN OLISIN TÄNNE KUSKANNUT? | TÄHDEN LENTOJENKO VARALTA MINÄ SEN OLISIN TÄNNE KUSKANNUT |
| SIITÄ SE TULEE. | SIITA SE TULEE |
| NIIN, KUULUU KIROUS, JA KAUHEA KARJAISU. | NIIN KUULUU KIROUS JA KAUHEA KARJAISU |
| ARKIT KUN OVAT NÄES ELEMENTTIRAKENTEISIA. | ARKIT KUN OVAT MÄISS' ELÄMÄTTEROKENTEISIÄ |
| JÄIN ALUKSEN SISÄÄN, MUTTA KUULIN OVEN LÄPI, ETTÄ ULKOPUOLELLA ALKOI TAPAHTUA. | JAKALOKSEHÄN SISÄL MUTTA KUULIN OVENLAPI ETTÄ ULKA KUOLLALLA ALKOI TAPAHTUA |
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fi"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-finnish"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-21). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| aapot/wav2vec2-large-xlsr-53-finnish | **32.51%** | **5.34%** |
| Tommi/wav2vec2-large-xlsr-53-finnish | 35.22% | 5.81% |
| vasilis/wav2vec2-large-xlsr-53-finnish | 38.24% | 6.49% |
| jonatasgrosman/wav2vec2-large-xlsr-53-finnish | 41.60% | 8.23% |
| birgermoell/wav2vec2-large-xlsr-finnish | 53.51% | 9.18% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-finnish,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {F}innish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-finnish}},
year={2021}
}
```
|
l3cube-pune/marathi-roberta | 2cf9635825613fcb8f9998d729be457c069a7a3d | 2022-06-26T15:13:30.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | fill-mask | false | l3cube-pune | null | l3cube-pune/marathi-roberta | 25 | null | transformers | 7,671 | ---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaRoBERTa
MahaRoBERTa is a Marathi RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[Dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).
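The card ships no usage snippet, so here is a minimal fill-mask sketch — the `<mask>` token follows the XLM-RoBERTa tokenizer convention, and the example sentence is purely illustrative:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/marathi-roberta")
# XLM-RoBERTa-based tokenizers use <mask> as the mask token
print(fill_mask("मी शाळेत <mask>."))
```
Cite the following if you use this model: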
```
@InProceedings{joshi:2022:WILDRE6,
author = {Joshi, Raviraj},
title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {97--101}
}
``` |
maelfabien/marcel_customer_service | 0a565dec20552e2d362360977d62cc31f04fdcdc | 2021-04-13T15:43:17.000Z | [
"pytorch",
"camembert",
"text-generation",
"transformers"
] | text-generation | false | maelfabien | null | maelfabien/marcel_customer_service | 25 | null | transformers | 7,672 | Entry not found |
manishiitg/longformer-recruit-qa-v2 | 500bf274767bed2914a3134f887283dc08c1caa9 | 2020-11-11T12:52:27.000Z | [
"pytorch",
"longformer",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | manishiitg | null | manishiitg/longformer-recruit-qa-v2 | 25 | null | transformers | 7,673 | Entry not found |
monsoon-nlp/sanaa-dialect | fb62a0548871ee77757970c04c3397b89eaa62d5 | 2021-05-23T10:06:09.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"ar",
"transformers"
] | text-generation | false | monsoon-nlp | null | monsoon-nlp/sanaa-dialect | 25 | null | transformers | 7,674 | ---
language: ar
---
# Sanaa-Dialect
## Finetuned Arabic GPT-2 demo
This is a small GPT-2 model, originally trained on Arabic Wikipedia circa September 2020,
then finetuned on dialect datasets from Qatar University, University of British Columbia / NLP,
and Johns Hopkins University / LREC:
- https://qspace.qu.edu.qa/handle/10576/15265
- https://github.com/UBC-NLP/aoc_id
- https://github.com/ryancotterell/arabic_dialect_annotation
You can use special tokens to prompt five dialects: `[EGYPTIAN]`, `[GULF]`, `[LEVANTINE]`, `[MAGHREBI]`, and `[MSA]`
```python
from simpletransformers.language_generation import LanguageGenerationModel
model = LanguageGenerationModel("gpt2", "monsoon-nlp/sanaa-dialect")
model.generate('[GULF]' + "مدينتي هي", { 'max_length': 100 })
```
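The same checkpoint also loads with the plain `transformers` pipeline — a minimal sketch, not from the original card, so treat the generation settings as assumptions:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="monsoon-nlp/sanaa-dialect")
# prefix the prompt with one of the dialect control tokens listed above
print(generator("[GULF]" + "مدينتي هي", max_length=100))
```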
There is NO content filtering in the current version; do not use for public-facing
text generation!
## Training and Finetuning details
Original model and training: https://huggingface.co/monsoon-nlp/sanaa
I inserted new tokens into the tokenizer, finetuned the model on the dialect samples, and exported the new model.
Notebook: https://colab.research.google.com/drive/1fXFH7g4nfbxBo42icI4ZMy-0TAGAxc2i
شكرا لتجربة هذا! ارجو التواصل معي مع الاسئلة — *Thanks for trying this out! Please reach out to me with any questions.*
|
mrm8488/t5-small-finetuned-imdb-sentiment | 366602ca7b51fbd343f18fbd788ded1f1f910f8e | 2021-06-23T13:08:40.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:imdb",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-imdb-sentiment | 25 | null | transformers | 7,675 | ---
language: en
datasets:
- imdb
---
# T5-small fine-tuned for Sentiment Analysis 🎞️👍👎
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on [IMDB](https://huggingface.co/datasets/imdb) dataset for **Sentiment Analysis** downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (Sentiment analysis) - Dataset 📚
[IMDB](https://huggingface.co/datasets/imdb)
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. It provides a set of **25,000** highly polar movie reviews for training, and **25,000** for testing.
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!
## Test set metrics 🧾
| | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative | 0.92 | 0.93 | 0.92 | 12500 |
| positive | 0.93 | 0.92 | 0.92 | 12500 |
| accuracy | | | 0.92 | 25000 |
| macro avg | 0.92 | 0.92 | 0.92 | 25000 |
| weighted avg | 0.92 | 0.92 | 0.92 | 25000 |
## Model in Action 🚀
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-imdb-sentiment")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-imdb-sentiment")
def get_sentiment(text):
input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')
output = model.generate(input_ids=input_ids,
max_length=2)
dec = [tokenizer.decode(ids) for ids in output]
label = dec[0]
return label
get_sentiment("I dislike a lot that film")
# Output: 'negative'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
napoler/bart-chinese-6-960-words-pkuseg | e306b3eff430af89ec1389496885ba27e7b78280 | 2021-10-25T15:05:51.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | napoler | null | napoler/bart-chinese-6-960-words-pkuseg | 25 | null | transformers | 7,676 | # 使用
这个模型是在uer/bart-chinese-6-960-cluecorpussmall基础上训练的,数据量不是很大,但是修改了默认分词。
使用pkuseg分词,禁用BertTokenizer的do_basic_tokenize分词,不禁用do_basic_tokenize的话会把正常词汇按照逐字分词,禁用后可以导入自己的分词方案。
pip install git+https://github.com/napoler/tkit-AutoTokenizerPosition
```python
import pkuseg
from transformers import BertTokenizer  # was missing from the original snippet
from tkitAutoTokenizerPosition.AutoPos import AutoPos

seg = pkuseg.pkuseg(model_name='medicine')  # the matching fine-grained domain model is downloaded automatically
tokenizer = BertTokenizer.from_pretrained("uer/chinese_roberta_L-2_H-128", do_basic_tokenize=False)
ATP = AutoPos(seg, tokenizer)

# tokenize a text and clean up tokenization issues
text = "他们的伤害,以及陷阱能力的组合,猎人对于任何团队都是最好的拉怪者."
ATP.getTokenize(text)
```
The segmentation result looks like this:
```
['他', '##们', '的', '伤', '##害', ',', '以', '##及', '陷', '##阱', '能', '##力', '的', '组', '##合', ',', '猎', '##人', '对', '##于', '任', '##何', '团', '##队', '都', '是', '最', '##好', '的', '拉', '##怪', '##者', '.'], 'cut': ['他们', '的', '伤害', ',', '以及', '陷阱', '能力', '的', '组合', ',', '猎人', '对于', '任何', '团队', '都', '是', '最好', '的', '拉怪者', '.']
```
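The card itself never shows loading the BART checkpoint; below is a hedged sketch, under the assumption that the model follows the standard BART layout on the Hub and pairs with the custom tokenizer setup above:
```python
from transformers import BartForConditionalGeneration, BertTokenizer

# Assumption: checkpoint id and architecture match the card's metadata
tokenizer = BertTokenizer.from_pretrained("napoler/bart-chinese-6-960-words-pkuseg", do_basic_tokenize=False)
model = BartForConditionalGeneration.from_pretrained("napoler/bart-chinese-6-960-words-pkuseg")
```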
https://www.kaggle.com/terrychanorg/napolerbartchinese6960wordspkuseg
https://www.kaggle.com/terrychanorg/buliddataforbert-7803feff2
https://www.kaggle.com/terrychanorg/bart-notebook8wewew6eeb0f8af
https://www.kaggle.com/terrychanorg/fork-of-bart-notebook8wewew6eeb0f8af/data?scriptVersionId=77962540
|
ozcangundes/mt5-small-turkish-squad | b22d4c440211d4f17c3a6efe15d82689ce88fae9 | 2021-09-22T09:31:24.000Z | [
"pytorch",
"jax",
"mt5",
"text2text-generation",
"tr",
"dataset:TQUAD",
"transformers",
"license:mit",
"question-answering",
"autotrain_compatible"
] | question-answering | false | ozcangundes | null | ozcangundes/mt5-small-turkish-squad | 25 | null | transformers | 7,677 | ---
language: tr
datasets:
- TQUAD
pipeline_tag: question-answering
license: mit
---
# mT5-small based Turkish Question Answering System
[Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [Turkish Question Answering dataset](https://github.com/TQuad/turkish-nlp-qa-dataset) for **Q&A** downstream task by using Pytorch Lightning.⚡
The notebook that includes the whole fine-tuning process will be shared on my Github page later. The mT5 small model has 300 million parameters and a model size of about 1.2 GB, so fine-tuning it takes a significant amount of time.
**Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual)
excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is usable on a downstream task.
## Usage 🚀
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-squad")
model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-squad")
def get_answer(question,context):
source_encoding=tokenizer(
question,
context,
max_length=512,
padding="max_length",
truncation="only_second",
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt")
generated_ids=model.generate(
input_ids=source_encoding["input_ids"],
attention_mask=source_encoding["attention_mask"],
max_length=120)
preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids]
return "".join(preds)
```
### Example 1
```python
question={
"context":"Pardus, Google'ın öğrencilerle staj ve kendini geliştirme imkânı ile \
tasarılara geliştirici ve katkı sağlamayı amaçladığı açık kaynak tasarısı \
Google Summer of Code'a 2008 ve 2009 olmak üzere iki kere katılmıştır. Bu organizasyona \
ilk katılan Türk tasarısı Pardus olmuştur. Bazı dönemlerde Pardus hakkındaki gelişmeleri \
halka duyurmak ve tasarıya olan ilgiyi arttırmak amacıyla CeBIT Eurasia Bilişim Fuarı'na \
katılım sağlanmaktadır. 2006, 2008, 2009, 2010, 2011,2013 ve 2014 bu fuarlarda Pardus \
standı kurulmuştur.2014 yılında ICT SummitT Now Bilişim Zirvesi'nde yer alınmıştır. \
BİLİŞİM’2014 TBD 31. Ulusal Bilişim Kurultayı ve CITEX’2014 Ankara Bilişim Fuarı’na \
Gümüş sponsorluk ile katkıda bulunulmuş ve Pardus standı kurulmuştur.",
"question":"Pardus’un Google Summer of Code'a katıldığı yıllar nelerdir?"
}
get_answer(question["question"],question["context"])
```
> 2008 ve 2009
### Example 2
```python
question2={
"context":"II. Bayezid ve I. Selim devrinde yaşadı ve iki defa hekimbaşılık yaptı. \
Böbrek ve idrar kesesindeki taş oluşumunun nedenlerini ve tedavisini incelediği \
eseriyle tanınır. Adı kaynaklarda Ahmed ve Mahmud olarak da geçer. Ahi Çelebi \
olarak ün yapmıştır. Babası Tabib Mevlana Kemal ile birlikte 1463’te İstanbul’a yerleşti. \
Mevlana Kemal, devrin ünlü hekimlerindendir. Tebriz ya da Şirvan asıllı olduğu çeşitli \
kaynaklarda belirtilir. Ahi Mehmet Çelebi, hekimliği daha çok babasından öğrendi. Onun \
ölümünden sonra devrin önemli hekimleri Kutbüddin ile Altunîzâde’den ders alıp kısa zamanda \
mesleğini ilerletti. Hekimlik becerisinin yanı sıra kuramsal bilgisiyle de kendisini \
kabul ettirerek önce Fâtih Darüşşifasına hekim, sonra da başhekim oldu. II. Bayezid’in \
güvenini kazanarak mutfak eminliğine, ardından da Hekimbaşılığa getirildi. Dört buçuk \
yıl bu görevde kalan Ahî Çelebi, II. Bayezid’in ölümü üzerine geleneğe uyularak azledildi. \
Bir müddet sonra Yavuz onu tekrar Hekimbaşılığa getirdi ve Mısır seferine beraberinde \
götürdü. I. Selim'in ölümünden sonra Hekimbaşılık tan tekrar azledildi. Kaynakların \
belirttiğine göre, yaşı doksanı geçmiş olduğu halde, hacdan dönerken Kahire’de \
ölmüş ve İmam Şafi'nin kabri civarına defnedilmiştir.",
"question":"Ahi Mehmet Çelebi hangi eseri ile tanınır?"
}
get_answer(question2["question"],question2["context"])
```
> Böbrek ve idrar kesesindeki taş oluşumunun nedenlerini ve tedavisini incelediği eseriyle
Created by Özcan Gündeş ✌️
---
Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a>
Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a>
Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a>
Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
|
p208p2002/qmst-qgg | 75231044843f44320bfec01ce383e0552f4a1642 | 2022-01-10T07:55:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | p208p2002 | null | p208p2002/qmst-qgg | 25 | null | transformers | 7,678 | # EQGG: Educational Question Group Generation
<span>
<a target="_blank" href="https://github.com/p208p2002/Neural-Question-Group-Generation">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a>
<a target="_blank" href="https://huggingface.co/p208p2002/qmst-qgg">
<img src="https://img.shields.io/badge/🤗 HF Model Hub-ffea00?style=for-the-badge&logoColor=white">
</a>
<a target="_blank" href="https://qgg-demo.nlpnchu.org">
<img src="https://img.shields.io/badge/💻 Live Demo-78ab78?style=for-the-badge&logoColor=white">
</a>
</span>
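The card itself carries no usage example. Since the checkpoint is tagged as a BART text2text model, a minimal loading sketch follows; the exact input markup expected for question-group generation is not documented here, so consult the linked GitHub repository:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("p208p2002/qmst-qgg")
model = AutoModelForSeq2SeqLM.from_pretrained("p208p2002/qmst-qgg")
# the context/answer input format is project-specific; see the GitHub repo above
```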
|
pere/DeUnCaser | 29469c0d8a800c8c322d3c4a6f84a31fbe1347fb | 2022-02-03T10:45:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"no",
"transformers",
"translation",
"license:cc-by-4.0",
"autotrain_compatible"
] | translation | false | pere | null | pere/DeUnCaser | 25 | null | transformers | 7,679 | ---
language: no
tags:
- translation
widget:
- text: "moscow says deployments in eastern europe increase tensions nato says russia has moved troops to belarus"
- text: "dette er en liten test som er laget av per egil kummervold han er en forsker som tidligere jobbet ved nasjonalbiblioteket"
- text: "tirsdag var travel for ukrainas president volodymyr zelenskyj på morgenen tok han imot polens statsminister mateusz morawiecki"
license: cc-by-4.0
---
# DeUnCaser
The output from Automatic Speech Recognition software is usually uncased and without any punctuation, which does not make for very readable text.
The DeUnCaser is a sequence-to-sequence byT5 model that reverses this process. It adds punctuation and capitalises the correct words. In some languages this means adding capital letters at the start of sentences and on all proper nouns; in other languages, like German, it means capitalising the first letter of all nouns. It will also attempt to add hyphens and parentheses where this makes the meaning clearer.
It is based on the multilingual byT5 base model. However, the current finetuning is only done on Norwegian, so for other languages it will be mainly experimental. I will update it with support for other languages if there is any demand.
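A minimal usage sketch, not from the original card — the plain-text prompting (no task prefix) is an assumption based on the widget examples above, whose text is reused here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("pere/DeUnCaser")
model = AutoModelForSeq2SeqLM.from_pretrained("pere/DeUnCaser")

text = "dette er en liten test som er laget av per egil kummervold han er en forsker som tidligere jobbet ved nasjonalbiblioteket"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```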
|
pszemraj/led-base-16384-finetuned-booksum | c8c7c9fe460614290baa65968d78c34b3a383d8d | 2022-02-06T03:14:01.000Z | [
"pytorch",
"led",
"text2text-generation",
"en",
"dataset:kmfoda/booksum",
"arxiv:2105.08209",
"transformers",
"summarization",
"summary",
"longformer",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | pszemraj | null | pszemraj/led-base-16384-finetuned-booksum | 25 | null | transformers | 7,680 | ---
language:
- en
tags:
- summarization
- led
- summary
- longformer
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: "large earthquakes along a given fault segment do not occur at random intervals because it takes time to accumulate the strain energy for the rupture. The rates at which tectonic plates move and accumulate strain at their boundaries are approximately uniform. Therefore, in first approximation, one may expect that large ruptures of the same fault segment will occur at approximately constant time intervals. If subsequent main shocks have different amounts of slip across the fault, then the recurrence time may vary, and the basic idea of periodic mainshocks must be modified. For great plate boundary ruptures the length and slip often vary by a factor of 2. Along the southern segment of the San Andreas fault the recurrence interval is 145 years with variations of several decades. The smaller the standard deviation of the average recurrence interval, the more specific could be the long term prediction of a future mainshock."
example_title: "earthquakes"
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a 'toolbox' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5)."
example_title: "scientific paper"
- text: " the big variety of data coming from diverse sources is one of the key properties of the big data phenomenon. It is, therefore, beneficial to understand how data is generated in various environments and scenarios, before looking at what should be done with this data and how to design the best possible architecture to accomplish this The evolution of IT architectures, described in Chapter 2, means that the data is no longer processed by a few big monolith systems, but rather by a group of services In parallel to the processing layer, the underlying data storage has also changed and became more distributed This, in turn, required a significant paradigm shift as the traditional approach to transactions (ACID) could no longer be supported. On top of this, cloud computing is becoming a major approach with the benefits of reducing costs and providing on-demand scalability but at the same time introducing concerns about privacy, data ownership, etc In the meantime the Internet continues its exponential growth: Every day both structured and unstructured data is published and available for processing: To achieve competitive advantage companies have to relate their corporate resources to external services, e.g. financial markets, weather forecasts, social media, etc While several of the sites provide some sort of API to access the data in a more orderly fashion; countless sources require advanced web mining and Natural Language Processing (NLP) processing techniques: Advances in science push researchers to construct new instruments for observing the universe O conducting experiments to understand even better the laws of physics and other domains. Every year humans have at their disposal new telescopes, space probes, particle accelerators, etc These instruments generate huge streams of data, which need to be stored and analyzed. The constant drive for efficiency in the industry motivates the introduction of new automation techniques and process optimization: This could not be done without analyzing the precise data that describe these processes. As more and more human tasks are automated, machines provide rich data sets, which can be analyzed in real-time to drive efficiency to new levels. Finally, it is now evident that the growth of the Internet of Things is becoming a major source of data. More and more of the devices are equipped with significant computational power and can generate a continuous data stream from their sensors. In the subsequent sections of this chapter, we will look at the domains described above to see what they generate in terms of data sets. We will compare the volumes but will also look at what is characteristic and important from their respective points of view. 3.1 The Internet is undoubtedly the largest database ever created by humans. While several well described; cleaned, and structured data sets have been made available through this medium, most of the resources are of an ambiguous, unstructured, incomplete or even erroneous nature. Still, several examples in the areas such as opinion mining, social media analysis, e-governance, etc, clearly show the potential lying in these resources. Those who can successfully mine and interpret the Internet data can gain unique insight and competitive advantage in their business An important area of data analytics on the edge of corporate IT and the Internet is Web Analytics."
example_title: "data science textbook"
- text: "Transformer-based models have shown to be very useful for many NLP tasks. However, a major limitation of transformers-based models is its O(n^2)O(n
2) time & memory complexity (where nn is sequence length). Hence, it's computationally very expensive to apply transformer-based models on long sequences n > 512n>512. Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention try to remedy this problem by approximating the full attention matrix. You can checkout 🤗's recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue. BigBird relies on block sparse attention instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower computational cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this post is to give the reader an in-depth understanding of big bird implementation & ease one's life in using BigBird with 🤗Transformers. But, before going into more depth, it is important to remember that the BigBird's attention is an approximation of BERT's full attention and therefore does not strive to be better than BERT's full attention, but rather to be more efficient. It simply allows to apply transformer-based models to much longer sequences since BERT's quadratic memory requirement quickly becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT's attention would be preferred over block sparse attention (which we are going to discuss in this post).
If you wonder why we need more compute when working with longer sequences, this blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like attention include:
Do all tokens really have to attend to all other tokens?
Why not compute attention only over important tokens?
How to decide what tokens are important?
How to attend to just a few tokens in a very efficient way?
In this blog post, we will try to answer those questions.
What tokens should be attended to?
We will give a practical example of how attention works by considering the sentence 'BigBird is now available in HuggingFace for extractive question answering'. In BERT-like attention, every word would simply attend to all other tokens.
Let's think about a sensible choice of key tokens that a queried token actually only should attend to by writing some pseudo-code. Will will assume that the token available is queried and build a sensible list of key tokens to attend to.
>>> # let's consider following sentence as an example
>>> example = ['BigBird', 'is', 'now', 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']
>>> # further let's assume, we're trying to understand the representation of 'available' i.e.
>>> query_token = 'available'
>>> # We will initialize an empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently 'available' token doesn't have anything to attend
Nearby tokens should be important because, in a sentence (sequence of words), the current word is highly dependent on neighboring past & future tokens. This intuition is the idea behind the concept of sliding attention."
example_title: "bigbird blog intro"
inference:
parameters:
max_length: 64
min_length: 4
no_repeat_ngram_size: 2
early_stopping: True
repetition_penalty: 2.4
length_penalty: 0.5
encoder_no_repeat_ngram_size : 3
num_beams : 4
---
# Longformer Encoder-Decoder (LED) fine-tuned on Booksum
- `allenai/led-base-16384` checkpoint trained on the [booksum dataset](https://arxiv.org/abs/2105.08209) for 3 epochs.
- handles summarization in a "school notes" style well, but takes a while to run (even compared to larger models such as a [bigbird-pegasus](https://huggingface.co/pszemraj/bigbird-pegasus-large-booksum-40k-K) checkpoint trained on the same data).
- upside: works well on lots of text and can handle 16,384 tokens per batch.
- an example usage notebook is [here](https://colab.research.google.com/gist/pszemraj/da8872e702ea9e3d74c39a236e89104b/led-base-booksum-example.ipynb) with details
## Other Checkpoints on Booksum
- A one-epoch version of [LED-large is available here](https://huggingface.co/pszemraj/led-large-book-summary-1E); a more polished version is still WIP.
---
# Usage - Basics
- from testing, it is highly recommended to use the parameter `encoder_no_repeat_ngram_size=3` when calling the pipeline object.
- this forces the model to use new vocabulary and create an abstractive summary, as at times it will compile the best _extractive_ summary from the input provided.
- create the pipeline object:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import pipeline
hf_name = 'pszemraj/led-base-16384-finetuned-booksum'
_model = AutoModelForSeq2SeqLM.from_pretrained(
hf_name,
low_cpu_mem_usage=True,
)
_tokenizer = AutoTokenizer.from_pretrained(
hf_name
)
summarizer = pipeline(
"summarization",
model=_model,
tokenizer=_tokenizer
)
```
- put words into the pipeline object:
```python
wall_of_text = "your words here"
result = summarizer(
wall_of_text,
min_length=16,
max_length=256,
no_repeat_ngram_size=3,
encoder_no_repeat_ngram_size =3,
clean_up_tokenization_spaces=True,
repetition_penalty=3.7,
num_beams=4,
early_stopping=True,
)
```
---
# Results
- evaluation was completed with the following params and received the following score
- params:
```
# set generate hyperparameters
model.config.num_beams = 5
model.config.max_length = 512
model.config.min_length = 32
model.config.length_penalty = 3.5
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
trainer.evaluate(num_beams=5, max_length=128)
```
- scores (on 1/10 validation for RT reasons):
```
{'eval_loss': 2.899840831756592,
'eval_rouge1': 30.0761,
'eval_rouge2': 6.4964,
'eval_rougeL': 15.9819,
'eval_rougeLsum': 28.2764,
'eval_gen_len': 126.8514,
'eval_runtime': 1442.991,
'eval_samples_per_second': 0.103,
'eval_steps_per_second': 0.103
}
```
--- |
saattrupdan/employment-contract-ner-da | 6f29e2525cf4c3521fe0c727d5778f62e1ec48d7 | 2022-02-09T15:21:34.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"da",
"transformers",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | saattrupdan | null | saattrupdan/employment-contract-ner-da | 25 | 1 | transformers | 7,681 | ---
language:
- da
license: mit
model-index:
- name: contract-ner-model-da
results: []
widget:
- "Medarbejderen starter arbejdet den 1. januar 2020 og afslutter arbejdet den 21. januar 2020. Den ugentlige arbejdstid er 37 timer, og medarbejderen bliver aflønnet med 23.000,00 kr. om måneden. Arbejdsstedet er Supervej 21, 2000 Frederiksberg."
inference:
parameters:
aggregation_strategy: "first"
---
# contract-ner-model-da
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a custom contracts dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0026
- Micro F1: 0.9297
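A minimal inference sketch (not part of the original card); it mirrors the widget configuration above, including `aggregation_strategy="first"`:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="saattrupdan/employment-contract-ner-da",
    aggregation_strategy="first",  # matches the widget's inference parameters
)
print(ner("Medarbejderen starter arbejdet den 1. januar 2020 og afslutter arbejdet den 21. januar 2020."))
```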
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 919
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8971 | 0.24 | 200 | 0.0205 | 0.0 |
| 0.0173 | 0.48 | 400 | 0.0100 | 0.2921 |
| 0.0092 | 0.73 | 600 | 0.0065 | 0.7147 |
| 0.0063 | 0.97 | 800 | 0.0046 | 0.8332 |
| 0.0047 | 1.21 | 1000 | 0.0047 | 0.8459 |
| 0.0042 | 1.45 | 1200 | 0.0039 | 0.8694 |
| 0.0037 | 1.69 | 1400 | 0.0035 | 0.8888 |
| 0.0032 | 1.93 | 1600 | 0.0035 | 0.8840 |
| 0.0025 | 2.18 | 1800 | 0.0029 | 0.8943 |
| 0.0023 | 2.42 | 2000 | 0.0024 | 0.9104 |
| 0.0023 | 2.66 | 2200 | 0.0032 | 0.8808 |
| 0.0021 | 2.9 | 2400 | 0.0022 | 0.9338 |
| 0.0018 | 3.14 | 2600 | 0.0020 | 0.9315 |
| 0.0015 | 3.39 | 2800 | 0.0026 | 0.9297 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3 |
soikit/chinese-bert-wwm-chinese_bert_wwm1 | b84cc2694eb2f5de14eba2c59435736e59e77397 | 2021-10-20T12:51:19.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | soikit | null | soikit/chinese-bert-wwm-chinese_bert_wwm1 | 25 | null | transformers | 7,682 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: chinese-bert-wwm-chinese_bert_wwm1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bert-wwm-chinese_bert_wwm1
This model is a fine-tuned version of [hfl/chinese-bert-wwm](https://huggingface.co/hfl/chinese-bert-wwm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 71 | 0.5750 |
| No log | 2.0 | 142 | 0.0617 |
| No log | 3.0 | 213 | 0.0109 |
| No log | 4.0 | 284 | 0.0042 |
| No log | 5.0 | 355 | 0.0024 |
| No log | 6.0 | 426 | 0.0017 |
| No log | 7.0 | 497 | 0.0012 |
| 0.5341 | 8.0 | 568 | 0.0009 |
| 0.5341 | 9.0 | 639 | 0.0009 |
| 0.5341 | 10.0 | 710 | 0.0011 |
| 0.5341 | 11.0 | 781 | 0.0013 |
| 0.5341 | 12.0 | 852 | 0.0012 |
| 0.5341 | 13.0 | 923 | 0.0010 |
| 0.5341 | 14.0 | 994 | 0.0010 |
| 0.0041 | 15.0 | 1065 | 0.0011 |
| 0.0041 | 16.0 | 1136 | 0.0009 |
| 0.0041 | 17.0 | 1207 | 0.0008 |
| 0.0041 | 18.0 | 1278 | 0.0009 |
| 0.0041 | 19.0 | 1349 | 0.0008 |
| 0.0041 | 20.0 | 1420 | 0.0008 |
| 0.0041 | 21.0 | 1491 | 0.0009 |
| 0.0019 | 22.0 | 1562 | 0.0009 |
| 0.0019 | 23.0 | 1633 | 0.0010 |
| 0.0019 | 24.0 | 1704 | 0.0009 |
| 0.0019 | 25.0 | 1775 | 0.0009 |
| 0.0019 | 26.0 | 1846 | 0.0008 |
| 0.0019 | 27.0 | 1917 | 0.0009 |
| 0.0019 | 28.0 | 1988 | 0.0009 |
| 0.0013 | 29.0 | 2059 | 0.0009 |
| 0.0013 | 30.0 | 2130 | 0.0009 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
sultan/BioM-ELECTRA-Base-SQuAD2 | d94a78b570bf961f9500d7d8785689544fa6cfa7 | 2021-08-06T22:31:58.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | sultan | null | sultan/BioM-ELECTRA-Base-SQuAD2 | 25 | null | transformers | 7,683 | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.
# Model Description
We fine-tuned BioM-ELECTRA-Base, which was pre-trained on PubMed Abstracts, on the SQuAD2.0 dataset. Fine-tuning the biomedical language model on the SQuAD dataset helps improve the score on the BioASQ challenge. If you plan to work with BioASQ or biomedical QA tasks, it's better to use this model over BioM-ELECTRA-Base.
The Hugging Face library doesn't implement the layer-wise learning-rate decay feature, which affects performance on the SQuAD task. The reported result of BioM-ELECTRA-Base-SQuAD in our paper is 84.4 (F1), since we use the ELECTRA open-source code with a TF checkpoint, which uses layer-wise decay. You can download our TensorFlow checkpoint that was fine-tuned on SQuAD2.0 and achieved an 84.4 F1 score from https://github.com/salrowili/BioM-Transformers.
Evaluation results on SQuAD2.0 Dev Dataset
```
eval_HasAns_exact = 79.2679
eval_HasAns_f1 = 86.5416
eval_HasAns_total = 5928
eval_NoAns_exact = 75.8789
eval_NoAns_f1 = 75.8789
eval_NoAns_total = 5945
eval_best_exact = 77.571
eval_best_exact_thresh = 0.0
eval_best_f1 = 81.2026
eval_best_f1_thresh = 0.0
eval_exact = 77.571
eval_f1 = 81.2026
eval_samples = 11979
eval_total = 11873
```
- First make sure to install all libraries on Google Colab and make sure GPU is enabled
```python
!git clone https://github.com/huggingface/transformers
!pip3 install -e transformers
!pip3 install sentencepiece
!pip3 install -r /content/transformers/examples/pytorch/question-answering/requirements.txt
```
- Training script
```python
python3 transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path sultan/BioM-ELECTRA-Base-Discriminator \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--dataloader_num_workers 20 \
--preprocessing_num_workers 20 \
--version_2_with_negative \
--num_train_epochs 3 \
--learning_rate 4e-5 \
--max_seq_length 512 \
--doc_stride 128 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 3 \
--per_device_eval_batch_size 128 \
--fp16 \
--fp16_opt_level O1 \
--logging_steps 50 \
--save_steps 5000 \
--overwrite_output_dir \
--output_dir out
```
- Reproduce results without training (eval only):
```python
python transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path sultan/BioM-ELECTRA-Base-SQuAD2 \
--do_eval \
--version_2_with_negative \
--per_device_eval_batch_size 8 \
--dataset_name squad_v2 \
--overwrite_output_dir \
--fp16 \
--output_dir out
```
- You don't need to download the SQuAD2 dataset. The code will download it from the HuggingFace datasets hub.
- Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints.
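- For quick inference without the scripts above, a minimal sketch using the standard Transformers question-answering pipeline (the question/context pair below is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="sultan/BioM-ELECTRA-Base-SQuAD2")
result = qa(
    question="What disease is caused by SARS-CoV-2?",
    context="SARS-CoV-2 is the coronavirus responsible for COVID-19.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': 'COVID-19'}
```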
# Acknowledgment
We would like to acknowledge the support we have from Tensorflow Research Cloud (TFRC) team to grant us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
textattack/bert-base-cased-STS-B | 775657b25867bee0a475785f99005b71a2ad2246 | 2021-05-20T07:30:08.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/bert-base-cased-STS-B | 25 | null | transformers | 7,684 | ## TextAttack Model Card
This `bert-base-cased` model was fine-tuned for sequence classification using TextAttack
and the glue dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 128, a learning
rate of 1e-05, and a maximum sequence length of 128.
Since this was a regression task, the model was trained with a mean squared error loss function.
The best score the model achieved on this task was 0.8244429996636282, as measured by the
eval-set Pearson correlation, reached after 2 epochs.
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
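A minimal inference sketch (assuming a single-logit regression head, as is standard for STS-B fine-tuning):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-cased-STS-B")
model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-cased-STS-B")

# STS-B scores a sentence pair for semantic similarity (roughly on a 0-5 scale).
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```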
|
thefryingpan/gpt-neo-125M-splishy | 046e30492a05679a149bfc796ead28c6eddc6cce | 2021-12-15T03:39:26.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"conversational"
] | conversational | false | thefryingpan | null | thefryingpan/gpt-neo-125M-splishy | 25 | null | transformers | 7,685 | ---
tags:
- conversational
---
# Chat Boi |
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small-v3 | c079cae8ffa225f9c5ef1af17edba4324420c239 | 2022-02-03T15:57:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"es",
"robust-speech-event",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | tomascufaro | null | tomascufaro/wav2vec2-large-xls-r-300m-spanish-small-v3 | 25 | null | transformers | 7,686 | ---
tags:
- "es"
- "robust-speech-event"
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-small-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-small-v3
This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Wer: 0.1980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
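For illustration, these settings map roughly to the following `TrainingArguments` sketch (the output directory is an assumption, not taken from the original run):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-spanish-small-v3",  # assumed path
    learning_rate=4e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 16
    warmup_steps=500,
    num_train_epochs=25,
    fp16=True,  # Native AMP mixed precision
    seed=42,
)
```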
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2372 | 0.26 | 400 | 0.3011 | 0.2660 |
| 0.3413 | 0.53 | 800 | 0.3559 | 0.3228 |
| 0.3598 | 0.79 | 1200 | 0.3753 | 0.3400 |
| 0.3529 | 1.05 | 1600 | 0.3385 | 0.2979 |
| 0.3133 | 1.32 | 2000 | 0.3559 | 0.3056 |
| 0.3158 | 1.58 | 2400 | 0.3364 | 0.2994 |
| 0.3092 | 1.85 | 2800 | 0.3210 | 0.2876 |
| 0.2919 | 2.11 | 3200 | 0.3460 | 0.3010 |
| 0.2666 | 2.37 | 3600 | 0.3543 | 0.3036 |
| 0.2819 | 2.64 | 4000 | 0.3477 | 0.2959 |
| 0.283 | 2.9 | 4400 | 0.3492 | 0.2968 |
| 0.2484 | 3.16 | 4800 | 0.3647 | 0.2993 |
| 0.2371 | 3.43 | 5200 | 0.3601 | 0.2942 |
| 0.2382 | 3.69 | 5600 | 0.3656 | 0.3019 |
| 0.2425 | 3.96 | 6000 | 0.3379 | 0.2873 |
| 0.2092 | 4.22 | 6400 | 0.3385 | 0.2736 |
| 0.2171 | 4.48 | 6800 | 0.3503 | 0.2889 |
| 0.2185 | 4.75 | 7200 | 0.3289 | 0.2727 |
| 0.2236 | 5.01 | 7600 | 0.3447 | 0.2771 |
| 0.1882 | 5.27 | 8000 | 0.3586 | 0.2860 |
| 0.1986 | 5.54 | 8400 | 0.3404 | 0.2829 |
| 0.2055 | 5.8 | 8800 | 0.3561 | 0.2869 |
| 0.196 | 6.06 | 9200 | 0.3633 | 0.2811 |
| 0.1748 | 6.33 | 9600 | 0.3703 | 0.2818 |
| 0.1758 | 6.59 | 10000 | 0.3525 | 0.2816 |
| 0.1819 | 6.86 | 10400 | 0.3581 | 0.2765 |
| 0.1715 | 7.12 | 10800 | 0.3480 | 0.2628 |
| 0.1606 | 7.38 | 11200 | 0.3490 | 0.2703 |
| 0.1632 | 7.65 | 11600 | 0.3461 | 0.2706 |
| 0.1638 | 7.91 | 12000 | 0.3458 | 0.2673 |
| 0.1552 | 8.17 | 12400 | 0.3646 | 0.2732 |
| 0.154 | 8.44 | 12800 | 0.3706 | 0.2726 |
| 0.1512 | 8.7 | 13200 | 0.3609 | 0.2683 |
| 0.149 | 8.97 | 13600 | 0.3610 | 0.2668 |
| 0.1357 | 9.23 | 14000 | 0.3693 | 0.2740 |
| 0.1375 | 9.49 | 14400 | 0.3677 | 0.2625 |
| 0.1391 | 9.76 | 14800 | 0.3795 | 0.2762 |
| 0.1378 | 10.02 | 15200 | 0.3541 | 0.2592 |
| 0.1197 | 10.28 | 15600 | 0.3562 | 0.2507 |
| 0.1259 | 10.55 | 16000 | 0.3612 | 0.2584 |
| 0.1266 | 10.81 | 16400 | 0.3470 | 0.2527 |
| 0.1199 | 11.07 | 16800 | 0.3721 | 0.2571 |
| 0.1157 | 11.34 | 17200 | 0.3734 | 0.2571 |
| 0.1107 | 11.6 | 17600 | 0.3730 | 0.2589 |
| 0.1148 | 11.87 | 18000 | 0.3648 | 0.2536 |
| 0.1095 | 12.13 | 18400 | 0.3746 | 0.2521 |
| 0.1047 | 12.39 | 18800 | 0.3566 | 0.2530 |
| 0.1043 | 12.66 | 19200 | 0.3794 | 0.2545 |
| 0.1066 | 12.92 | 19600 | 0.3548 | 0.2439 |
| 0.0974 | 13.18 | 20000 | 0.3702 | 0.2461 |
| 0.0978 | 13.45 | 20400 | 0.3721 | 0.2492 |
| 0.095 | 13.71 | 20800 | 0.3599 | 0.2467 |
| 0.0963 | 13.97 | 21200 | 0.3650 | 0.2402 |
| 0.0902 | 14.24 | 21600 | 0.3689 | 0.2459 |
| 0.0898 | 14.5 | 22000 | 0.3832 | 0.2452 |
| 0.0865 | 14.77 | 22400 | 0.3982 | 0.2436 |
| 0.0911 | 15.03 | 22800 | 0.3785 | 0.2398 |
| 0.0793 | 15.29 | 23200 | 0.3731 | 0.2396 |
| 0.0806 | 15.56 | 23600 | 0.3626 | 0.2372 |
| 0.0789 | 15.82 | 24000 | 0.3707 | 0.2356 |
| 0.0779 | 16.08 | 24400 | 0.3850 | 0.2368 |
| 0.078 | 16.35 | 24800 | 0.3831 | 0.2363 |
| 0.0732 | 16.61 | 25200 | 0.3947 | 0.2287 |
| 0.0733 | 16.88 | 25600 | 0.3928 | 0.2374 |
| 0.0721 | 17.14 | 26000 | 0.3943 | 0.2324 |
| 0.0676 | 17.4 | 26400 | 0.3793 | 0.2311 |
| 0.0682 | 17.67 | 26800 | 0.3958 | 0.2257 |
| 0.0714 | 17.93 | 27200 | 0.3890 | 0.2322 |
| 0.0673 | 18.19 | 27600 | 0.3872 | 0.2229 |
| 0.0613 | 18.46 | 28000 | 0.3828 | 0.2226 |
| 0.0621 | 18.72 | 28400 | 0.3812 | 0.2214 |
| 0.0622 | 18.98 | 28800 | 0.3919 | 0.2212 |
| 0.0576 | 19.25 | 29200 | 0.4000 | 0.2205 |
| 0.0581 | 19.51 | 29600 | 0.3953 | 0.2203 |
| 0.0573 | 19.78 | 30000 | 0.3947 | 0.2190 |
| 0.0576 | 20.04 | 30400 | 0.3909 | 0.2156 |
| 0.0551 | 20.3 | 30800 | 0.4178 | 0.2153 |
| 0.0525 | 20.57 | 31200 | 0.3935 | 0.2152 |
| 0.0522 | 20.83 | 31600 | 0.4054 | 0.2151 |
| 0.0519 | 21.09 | 32000 | 0.3877 | 0.2135 |
| 0.0479 | 21.36 | 32400 | 0.4119 | 0.2107 |
| 0.0472 | 21.62 | 32800 | 0.3967 | 0.2091 |
| 0.048 | 21.89 | 33200 | 0.3812 | 0.2057 |
| 0.0458 | 22.15 | 33600 | 0.3931 | 0.2043 |
| 0.0459 | 22.41 | 34000 | 0.3937 | 0.2049 |
| 0.0448 | 22.68 | 34400 | 0.3900 | 0.2056 |
| 0.0432 | 22.94 | 34800 | 0.4050 | 0.2049 |
| 0.0425 | 23.2 | 35200 | 0.3985 | 0.2014 |
| 0.0415 | 23.47 | 35600 | 0.3976 | 0.2013 |
| 0.0403 | 23.73 | 36000 | 0.4031 | 0.2018 |
| 0.04 | 23.99 | 36400 | 0.3996 | 0.2000 |
| 0.039 | 24.26 | 36800 | 0.3977 | 0.1993 |
| 0.0406 | 24.52 | 37200 | 0.3967 | 0.2000 |
| 0.0391 | 24.79 | 37600 | 0.3986 | 0.1980 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
uclanlp/plbart-single_task-en_python | cc76056cf7f412000e4d5470baea8540f3c9e60c | 2022-03-02T07:07:37.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-en_python | 25 | null | transformers | 7,687 | Entry not found |
ufal/byt5-small-multilexnorm2021-es | 25efaf304522e126f10d521a13e333b78df063f1 | 2021-10-20T12:22:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"es",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"transformers",
"lexical normalization",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | ufal | null | ufal/byt5-small-multilexnorm2021-es | 25 | 1 | transformers | 7,688 | ---
language: es
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Spanish version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
Jorgeutd/bert-base-uncased-finetuned-surveyclassification | c00496861cfbbb36e2c418b395ad9de4ef8ac765 | 2022-02-24T16:34:18.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Jorgeutd | null | Jorgeutd/bert-base-uncased-finetuned-surveyclassification | 25 | null | transformers | 7,689 | ---
license: apache-2.0
tags:
- generated_from_trainer
language: en
widget:
- text: "The agent on the phone was very helpful and nice to me."
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-surveyclassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-surveyclassification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a custom survey dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2818
- Accuracy: 0.9097
- F1: 0.9097
## Model description
More information needed
#### Limitations and bias
This model is limited by its training dataset of survey results from a particular customer-service domain. It may not generalize well to use cases in other domains.
#### How to use
You can use this model with Transformers *pipeline* for Text Classification.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/bert-base-uncased-finetuned-surveyclassification")
model = AutoModelForSequenceClassification.from_pretrained("Jorgeutd/bert-base-uncased-finetuned-surveyclassification")
text_classifier = pipeline("text-classification", model=model,tokenizer=tokenizer, device=0)
example = "The agent on the phone was very helpful and nice to me."
results = text_classifier(example)
print(results)
```
## Training and evaluation data
Custom survey dataset.
## Training procedure
SageMaker notebook instance.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4136 | 1.0 | 902 | 0.2818 | 0.9097 | 0.9097 |
| 0.2213 | 2.0 | 1804 | 0.2990 | 0.9077 | 0.9077 |
| 0.1548 | 3.0 | 2706 | 0.3507 | 0.9026 | 0.9026 |
| 0.1034 | 4.0 | 3608 | 0.4692 | 0.9011 | 0.9011 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
niksmer/PolicyBERTa-7d | 68d7480405941217d63f6a4dce9b1ec4a3ca9889 | 2022-03-24T09:19:57.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"transformers",
"license:mit",
"model-index"
] | text-classification | false | niksmer | null | niksmer/PolicyBERTa-7d | 25 | null | transformers | 7,690 | ---
license: mit
language:
- en
metrics:
- accuracy
- precision
- recall
model-index:
- name: PolicyBERTa-7d
results: []
widget:
- text: "Russia must end the war."
- text: "Democratic institutions must be supported."
- text: "The state must fight political corruption."
- text: "Our energy economy must be nationalised."
- text: "We must increase social spending."
---
# PolicyBERTa-7d
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on data from the [Manifesto Project](https://manifesto-project.wzb.eu/). It was inspired by the model from [Laurer (2020)](https://huggingface.co/MoritzLaurer/policy-distilbert-7d).
It achieves the following results on the evaluation set:
- Loss: 0.8549
- Accuracy: 0.7059
- F1-micro: 0.7059
- F1-macro: 0.6683
- F1-weighted: 0.7033
- Precision: 0.7059
- Recall: 0.7059
## Model description
This model was trained on 115,943 manually annotated sentences to classify text into one of seven political categories: "external relations", "freedom and democracy", "political system", "economy", "welfare and quality of life", "fabric of society" and "social groups".
## Intended uses & limitations
The model output reproduces the limitations of the dataset in terms of country coverage, time span, domain definitions and potential biases of the annotators - as any supervised machine learning model would. Applying the model to other types of data (other types of texts, countries etc.) will reduce performance.
```python
from transformers import pipeline
import pandas as pd
classifier = pipeline(
task="text-classification",
model="niksmer/PolicyBERTa-7d")
# Load text data you want to classify
text = pd.read_csv("example.csv")["text_you_want_to_classify"].to_list()
# Inference
output = classifier(text)
# Print output
pd.DataFrame(output).head()
```
## Training and evaluation data
PolicyBERTa-7d was trained on the English-speaking subset of the [Manifesto Project Dataset (MPDS2021a)](https://manifesto-project.wzb.eu/datasets). The model was trained on 115,943 sentences from 163 political manifestos in 7 English-speaking countries (Australia, Canada, Ireland, New Zealand, South Africa, United Kingdom, United States). The manifestos were published between 1992 - 2020.
| Country | Count manifestos | Count sentences | Time span |
|----------------|------------------|-----------------|--------------------|
| Australia | 18 | 14,887 | 2010-2016 |
| Ireland | 23 | 24,966 | 2007-2016 |
| Canada | 14 | 12,344 | 2004-2008 & 2015 |
| New Zealand | 46 | 35,079 | 1993-2017 |
| South Africa | 29 | 13,334 | 1994-2019 |
| USA | 9 | 13,188 | 1992 & 2004-2020 |
| United Kingdom | 34 | 30,936 | 1997-2019 |
Canadian manifestos between 2004 and 2008 are used as test data.
The Manifesto Project manually annotates individual sentences from political party manifestos in 7 main political domains: 'Economy', 'External Relations', 'Fabric of Society', 'Freedom and Democracy', 'Political System', 'Welfare and Quality of Life' or 'Social Groups' - see the [codebook](https://manifesto-project.wzb.eu/down/papers/handbook_2021_version_5.pdf) for the exact definitions of each domain.
### Train data
The training data was highly imbalanced.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | external relations | 7,640 |
| 1 | freedom and democracy | 5,880 |
| 2 | political system | 11,234 |
| 3 | economy | 29,218 |
| 4 | welfare and quality of life | 37,200 |
| 5 | fabric of society | 13,594 |
| 6 | social groups | 11,177 |
Overall count: 115,943
### Validation data
The validation split was sampled at random.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | external relations | 1,345 |
| 1 | freedom and democracy | 1,043 |
| 2 | political system | 2,038 |
| 3 | economy | 5,140 |
| 4 | welfare and quality of life | 6,554 |
| 5 | fabric of society | 2,384 |
| 6 | social groups | 1,957 |
Overall count: 20,461
### Test data
The test dataset contains ten Canadian manifestos between 2004 and 2008.
| Label | Description | Count |
|------------|--------------|--------|
| 0 | external relations | 824 |
| 1 | freedom and democracy | 296 |
| 2 | political system | 1,041 |
| 3 | economy | 2,188 |
| 4 | welfare and quality of life | 2,654 |
| 5 | fabric of society | 940 |
| 6 | social groups | 387 |
Overall count: 8,330
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
```
training_args = TrainingArguments(
warmup_steps=0,
weight_decay=0.1,
learning_rate=1e-05,
fp16 = True,
evaluation_strategy="epoch",
num_train_epochs=5,
per_device_train_batch_size=16,
overwrite_output_dir=True,
per_device_eval_batch_size=16,
save_strategy="no",
logging_dir='logs',
logging_strategy= 'steps',
logging_steps=10,
push_to_hub=True,
hub_strategy="end")
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-micro | F1-macro | F1-weighted | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-----------:|:---------:|:------:|
| 0.9154 | 1.0 | 1812 | 0.8984 | 0.6785 | 0.6785 | 0.6383 | 0.6772 | 0.6785 | 0.6785 |
| 0.8374 | 2.0 | 3624 | 0.8569 | 0.6957 | 0.6957 | 0.6529 | 0.6914 | 0.6957 | 0.6957 |
| 0.7053 | 3.0 | 5436 | 0.8582 | 0.7019 | 0.7019 | 0.6594 | 0.6967 | 0.7019 | 0.7019 |
| 0.7178 | 4.0 | 7248 | 0.8488 | 0.7030 | 0.7030 | 0.6662 | 0.7011 | 0.7030 | 0.7030 |
| 0.6688 | 5.0 | 9060 | 0.8549 | 0.7059 | 0.7059 | 0.6683 | 0.7033 | 0.7059 | 0.7059 |
### Validation evaluation
| Model | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| PolicyBERTa-7d | 0.71 | 0.67 | 0.70 |
### Test evaluation
| Model | Micro F1-Score | Macro F1-Score | Weighted F1-Score |
|----------------|----------------|----------------|-------------------|
| PolicyBERTa-7d | 0.65 | 0.60 | 0.65 |
### Evaluation per category
| Label | Validation F1-Score | Test F1-Score |
|-----------------------------|---------------------|---------------|
| external relations | 0.76 | 0.70 |
| freedom and democracy | 0.61 | 0.55 |
| political system | 0.55 | 0.55 |
| economy | 0.74 | 0.67 |
| welfare and quality of life | 0.77 | 0.72 |
| fabric of society | 0.67 | 0.60 |
| social groups | 0.58 | 0.41 |
### Evaluation based on saliency theory
Saliency theory is a framework for analysing political text data. In short, parties tend to write about the policy areas in which they believe they are seen as competent.
Voters tend to attribute advantages in policy competence in line with a party's assumed ideology. You can therefore analyze the share of each policy domain in a party's manifesto to infer the party's ideology.
The Manifesto Project introduced the rile-index for such analyses. For a quick overview, check [this](https://manifesto-project.wzb.eu/down/tutorials/main-dataset.html#measuring-parties-left-right-positions). But PolicyBERTa isn't fine-tuned to predict the rile-index; if you're interested in that, check [ManiBERT](https://huggingface.co/niksmer/ManiBERT) or [RoBERTa-RILE](https://huggingface.co/niksmer/RoBERTa-RILE).
In the following table, the predicted and original shares of the individual policy domains are shown per manifesto in the test dataset; a code sketch of this share computation follows the table. Overall, the Pearson correlation between the predicted and original shares is 0.965.
| Party-ID | Year | Type | Share external relations | Share freedom and democracy | Share political system | Share economy | Share welfare and quality of life | Share fabric of society | Share social groups |
|--------------|-------------|---------------|--------------------------|-----------------------------|------------------------|----------------|-----------------------------------|-------------------------|---------------------|
| 62320 | 2004 | Predicted | 7.1% | 4.8% | 13.2% | 20.3% | 35.2% | 9.6% | 9.8% |
| | | Original | 10.2% | 2.5% | 13.7% | 23.8% | 31.7% | 11.6% | 6.4% |
| 62320 | 2006 | Predicted | 2.9% | 4.7% | 16.4% | 18.9% | 38.3% | 11.9% | 6.9% |
| | | Original | 5.6% | 5.0% | 15.8% | 20.7% | 38.7% | 9.3% | 4.9% |
| 62320 | 2008 | Predicted | 6.8% | 4.7% | 6.2% | 24.7% | 38.3% | 10.3% | 9.0% |
| | | Original | 5.6% | 3.7% | 8.2% | 33.1% | 29.5% | 11.7% | 4.3% |
| 62420 | 2004 | Predicted | 9.7% | 3.5% | 14.5% | 24.7% | 34.8% | 8.5% | 4.3% |
| | | Original | 12.6% | 1.3% | 18.8% | 23.0% | 33.2% | 9.0% | 2.0% |
| 62420 | 2006 | Predicted | 9.5% | 2.2% | 7.9% | 27.8% | 34.8% | 9.2% | 8.7% |
| | | Original | 10.6% | 2.5% | 9.6% | 29.7% | 33.1% | 8.3% | 6.2% |
| 62420 | 2008 | Predicted | 0.7% | 0.5% | 3.5% | 41.7% | 46.4% | 3.7% | 3.5% |
| | | Original | 2.0% | 0.2% | 4.4% | 33.3% | 45.9% | 7.7% | 6.4% |
| 62623 | 2004 | Predicted | 7.1% | 11.4% | 24.5% | 17.6% | 21.5% | 13.6% | 4.3% |
| | | Original | 8.4% | 6.7% | 28.8% | 17.4% | 18.7% | 15.5% | 4.5% |
| 62623 | 2006 | Predicted | 5.6% | 8.5% | 23.6% | 15.6% | 14.8% | 24.3% | 7.6% |
| | | Original | 5.0% | 8.9% | 22.2% | 17.4% | 17.2% | 25.7% | 3.6% |
| 62623 | 2008 | Predicted | 5.0% | 4.4% | 12.2% | 33.1% | 21.9% | 17.5% | 5.9% |
| | | Original | 5.6% | 2.2% | 11.6% | 37.8% | 17.8% | 20.9% | 4.1% |
| 62110 | 2008 | Predicted | 10.0% | 3.1% | 6.8% | 22.7% | 41.3% | 10.1% | 6.0% |
| | | Original | 13.4% | 3.3% | 7.7% | 26.9% | 35.6% | 8.9% | 4.3% |
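A minimal sketch of how such per-domain shares could be computed from the classifier's sentence-level predictions (the sentences below are illustrative, and the label strings depend on the model's config):

```python
from collections import Counter

from transformers import pipeline

classifier = pipeline("text-classification", model="niksmer/PolicyBERTa-7d")

# Sentences of one manifesto (illustrative examples).
sentences = [
    "Russia must end the war.",
    "We must increase social spending.",
    "The state must fight political corruption.",
]
labels = [pred["label"] for pred in classifier(sentences)]
shares = {label: count / len(sentences) for label, count in Counter(labels).items()}
print(shares)  # e.g. {'external relations': 0.33, ...}, depending on config label names
```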
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
|
lgris/base_10k_8khz_pt | 52b834283f4171b6f693ae0fd481b13298b883ce | 2022-02-07T11:53:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"transformers",
"audio",
"speech",
"portuguese-speech-corpus",
"PyTorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | lgris | null | lgris/base_10k_8khz_pt | 25 | null | transformers | 7,691 | ---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# Wav2vec 2.0 for Portuguese in 8kHz
This is a fine-tuned model from [facebook/wav2vec2-base-10k-voxpopuli](https://huggingface.co/facebook/wav2vec2-base-10k-voxpopuli)
Datasets used to fine-tune the model:
- **CETUC**: contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the CETEN-Folha corpus.
- **Common Voice 7.0**: a project proposed by the Mozilla Foundation with the goal of creating a broad open dataset in different languages. In this project, volunteers donate and validate speech using the official site.
- **Lapsbm**: "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded at 22.05 kHz without environment control.
- **Multilingual Librispeech (MLS)**: a massive dataset available in many languages. MLS is based on public-domain audiobook recordings, such as LibriVox. The dataset contains a total of 6k hours of transcribed data in many languages. The Portuguese set used in this work (mostly the Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- **Multilingual TEDx**: a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly the Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- **Sidney (SID)**: contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old, with metadata such as place of birth, age, gender, education, and occupation.
- **VoxForge**: a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16 kHz to 44.1 kHz.
- **VoxPopuli**
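No usage example is included in the original card; a minimal ASR sketch with the Transformers pipeline might look like this (the audio path is a placeholder, and the input should be Portuguese speech):

```python
from transformers import pipeline

# The pipeline decodes and resamples file input to the model's expected sampling rate.
asr = pipeline("automatic-speech-recognition", model="lgris/base_10k_8khz_pt")
print(asr("audio_8khz.wav"))  # placeholder path to a Portuguese speech file
```
|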
DMetaSoul/sbert-chinese-qmc-finance-v1 | b1d140d61af1efc18f23fe4266b5bd74042a3df3 | 2022-04-04T07:21:28.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese"
] | sentence-similarity | false | DMetaSoul | null | DMetaSoul/sbert-chinese-qmc-finance-v1 | 25 | null | sentence-transformers | 7,692 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-qmc-finance-v1
This model is based on the [bert-base-chinese](https://huggingface.co/bert-base-chinese) BERT model, fine-tuned on the large-scale bank question-matching dataset [BQCorpus](http://icrc.hitsz.edu.cn/info/1037/1162.htm). It is intended for **question matching in the financial domain**, for example:
- 8千日利息400元? VS 10000元日利息多少钱 (400 yuan daily interest on 8,000? VS How much is the daily interest on 10,000 yuan?)
- 提前还款是按全额计息 VS 还款扣款不成功怎么还款? (Is early repayment charged interest on the full amount VS How do I repay if the auto-debit fails?)
- 为什么我借钱交易失败 VS 刚申请的借款为什么会失败 (Why did my loan transaction fail VS Why did the loan I just applied for fail?)
Note: a [lightweight, distilled version](https://huggingface.co/DMetaSoul/sbert-chinese-qmc-finance-v1-distill) of this model is also open source!
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install the package:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embeddings as follows:
```python
from sentence_transformers import SentenceTransformer
sentences = ["到期不能按时还款怎么办", "剩余欠款还有多少?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-qmc-finance-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you prefer not to use [sentence-transformers](https://www.SBERT.net), you can also load the model with Hugging Face Transformers and extract text embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["到期不能按时还款怎么办", "剩余欠款还有多少?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-qmc-finance-v1')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-qmc-finance-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
The model was evaluated on several public semantic-matching datasets, computing the correlation coefficient between embedding similarity and the gold labels (a code sketch follows the table):
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** |
| -------------------------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- |
| **sbert-chinese-qmc-finance-v1** | 77.40% | 74.55% | 36.01% | 75.75% | 73.25% | 11.58% | 54.76% |
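A sketch of this evaluation protocol, assuming sentence pairs with gold similarity labels (Spearman is used here for illustration; the card does not specify the exact coefficient, and the gold labels below are made up):

```python
import numpy as np
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("DMetaSoul/sbert-chinese-qmc-finance-v1")

# Illustrative pairs (from the examples above) with made-up gold labels.
pairs = [
    ("8千日利息400元?", "10000元日利息多少钱"),
    ("提前还款是按全额计息", "还款扣款不成功怎么还款?"),
    ("为什么我借钱交易失败", "刚申请的借款为什么会失败"),
]
gold = [1.0, 0.0, 1.0]

emb_a = model.encode([a for a, _ in pairs])
emb_b = model.encode([b for _, b in pairs])
cos = np.sum(emb_a * emb_b, axis=1) / (
    np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1)
)
corr, _ = spearmanr(cos, gold)
print(corr)
```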
## Citing & Authors
E-mail: [email protected] |
everdoubling/byt5-Korean-base | f1038830499bfc8c87bb4da48e4f0c85f715b87c | 2022-05-29T08:35:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:mc4",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | everdoubling | null | everdoubling/byt5-Korean-base | 25 | 1 | transformers | 7,693 | ---
datasets:
- mc4
license: apache-2.0
---
# ByT5-Korean - base
ByT5-Korean is a Korean specific extension of Google's [ByT5](https://github.com/google-research/byt5).
A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they are like individual characters of an alphabet.
While ByT5's utf-8 encoding allows generic encoding for multiple languages, it is unnatural for Korean because it splits the bit representation of each Jamo in the middle.
ByT5-Korean extends ByT5's utf-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token.
ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.
## Encoding Scheme
```text
id: token
0: <pad>
1: <eos>
2: <unk>
3~258: utf-8 encoding
259~277: beginning consonants(초성), 19개(ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ)
278~298: middle vowel(중성), 21개(ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ)
299~326: final consonant(종성), 무종성+27개(ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ)
327~384: from <extra_id_0> to <extra_id_57>
```
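For illustration, a precomposed Hangul syllable can be decomposed arithmetically from its Unicode code point and mapped onto the ids above; this small sketch mirrors the table (the actual tokenizer is the `ByT5KoreanTokenizer` referenced in the example below):

```python
def syllable_to_ids(ch):
    """Map one precomposed Hangul syllable to (beginning, middle, final) token ids."""
    code = ord(ch) - 0xAC00  # precomposed Hangul syllables start at U+AC00
    cho, jung, jong = code // (21 * 28), (code // 28) % 21, code % 28
    return 259 + cho, 278 + jung, 299 + jong  # offsets from the table above

print(syllable_to_ids("한"))  # (277, 278, 303) -> ㅎ + ㅏ + ㄴ
```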
## Example Inference
```python
import torch
from tokenizer import ByT5KoreanTokenizer # https://huggingface.co/everdoubling/byt5-Korean-base/blob/main/tokenizer.py
from transformers import T5ForConditionalGeneration
tokenizer_jamo = ByT5KoreanTokenizer()
model = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-base')
input_sentence = '한국어 위키백과(영어: Korean Wikipedia)는 한국어로 운영되는 위키백과의 다언어판 가운데 하나로서, 2002년 10월 11일에 <extra_id_0>. 또한 현재 한국어 위키백과에는 넘겨주기, 토론, 그림 등 페이지로 불리는 모든 문서를 포함하면 총 2,629,860개가 <extra_id_1>되어 있으며, 넘겨주기를 포함한 일반 문서 수는 1,278,560개,[1] 그중 넘겨주기, 막다른 문서를 제외한 일반 문서 수는 573,149개이다.'
input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
outputs_jamo = model.generate(torch.tensor([input_ids_jamo]))
print(tokenizer_jamo.decode(outputs_jamo[0]))
# <pad><extra_id_0>설립되었다<extra_id_1>đě
```
Additional information coming soon...
|
Yoonseong/climatebert_trained | dc96ec8587338909ec23cd4db828b448f23220b6 | 2022-05-19T00:53:38.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | Yoonseong | null | Yoonseong/climatebert_trained | 25 | null | transformers | 7,694 | ---
license: mit
---
|
hackathon-pln-es/t5-small-spanish-nahuatl | 814181b8d71b304f7e64a2d5dce1fac5c663b94f | 2022-07-28T03:09:47.000Z | [
"pytorch",
"t5",
"text2text-generation",
"es",
"nah",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | hackathon-pln-es | null | hackathon-pln-es/t5-small-spanish-nahuatl | 25 | 7 | transformers | 7,695 | ---
license: apache-2.0
language:
- es
- nah
tags:
- translation
widget:
- text: "translate Spanish to Nahuatl: Mi hermano es un ajolote"
---
# t5-small-spanish-nahuatl
Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the neural machine translation task is challenging due to the lack of structured data. The most popular datasets, such as the Axolotl and bible-corpus, only consist of ~16,000 and ~7,000 samples, respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, it is possible to find a single word from the Axolotl dataset written in more than three different ways. Therefore, we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first train the multilingual model to learn Spanish and then adapt it to Nahuatl. The resulting T5 Transformer successfully translates short sentences. Finally, we report Chrf and BLEU results.
## Model description
This model is a T5 Transformer ([t5-small](https://huggingface.co/t5-small)) fine-tuned on Spanish and Nahuatl sentences collected from the web. The dataset is normalized using 'sep' normalization from [py-elotl](https://github.com/ElotlMX/py-elotl).
## Usage
```python
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
model.eval()
sentence = 'muchas flores son blancas'
input_ids = tokenizer('translate Spanish to Nahuatl: ' + sentence, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
# outputs = miak xochitl istak
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
```
## Approach
### Dataset
Since the Axolotl corpus contains misalignments, we select the best samples (12,207). We also use the [bible-corpus](https://github.com/christos-c/bible-corpus) (7,821).
| Axolotl best aligned books |
|:-----------------------------------------------------:|
| Anales de Tlatelolco |
| Diario |
| Documentos nauas de la Ciudad de México del siglo XVI |
| Historia de México narrada en náhuatl y español |
| La tinta negra y roja (antología de poesía náhuatl) |
| Memorial Breve (Libro las ocho relaciones) |
| Método auto-didáctico náhuatl-español |
| Nican Mopohua |
| Quinta Relación (Libro las ocho relaciones) |
| Recetario Nahua de Milpa Alta D.F |
| Testimonios de la antigua palabra |
| Trece Poetas del Mundo Azteca |
| Una tortillita nomás - Se taxkaltsin saj |
| Vida económica de Tenochtitlan |
Also, we collected 3,000 extra samples from the web to increase the amount of data.
### Model and training
We employ two training stages using a multilingual T5-small. The advantage of this model is that it can handle different vocabularies and prefixes. T5-small is pre-trained on different tasks and languages (French, Romanian, English, German).
### Training-stage 1 (learning Spanish)
In training stage 1, we first introduce Spanish to the model. The goal is to learn a new language rich in data (Spanish) without losing the previous knowledge. We use the English-Spanish [Anki](https://www.manythings.org/anki/) dataset, which consists of 118,964 text pairs. The model is trained until convergence, adding the prefix "Translate Spanish to English: ".
### Training-stage 2 (learning Nahuatl)
We use the pre-trained Spanish-English model to learn Spanish-Nahuatl. Since the amount of Nahuatl pairs is limited, we also add 20,000 samples from the English-Spanish Anki dataset. This two-task training avoids overfitting and makes the model more robust.
### Training setup
We train the models on the same datasets for 660k steps using batch size = 16 and a learning rate of 2e-5.
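For illustration, this prefix-based, two-task setup amounts to building text pairs like the following before tokenization (the English target here is an assumption added for illustration; the Nahuatl pair comes from the usage example above):

```python
# Stage 2 mixes both tasks so the model retains its Spanish-English knowledge.
examples = [
    ("translate Spanish to Nahuatl: muchas flores son blancas", "miak xochitl istak"),
    ("Translate Spanish to English: muchas flores son blancas", "Many flowers are white."),
]
```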
## Evaluation results
We evaluate the models on the same 505 validation Nahuatl sentences for a fair comparison. Finally, we report the results using chrf and sacrebleu hugging face metrics:
| English-Spanish pretraining | Validation loss | BLEU | Chrf |
|:----------------------------:|:---------------:|:-----|-------:|
| False | 1.34 | 6.17 | 26.96 |
| True | 1.31 | 6.18 | 28.21 |
The English-Spanish pretraining improves BLEU and Chrf and leads to faster convergence. The evaluation is available on the [eval.ipynb](https://github.com/milmor/spanish-nahuatl-translation/blob/main/eval.ipynb) notebook.
## References
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits
of transfer learning with a unified Text-to-Text transformer.
- Ximena Gutierrez-Vasques, Gerardo Sierra, and Hernandez Isaac. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In International Conference on Language Resources and Evaluation (LREC).
- https://github.com/christos-c/bible-corpus
- https://github.com/ElotlMX/py-elotl
## Team members
- Emilio Alejandro Morales [(milmor)](https://huggingface.co/milmor)
- Rodrigo Martínez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv) |
xaqren/sentiment_analysis | b539ca0c9dd51e962fabde4f7c4093ff0f185466 | 2022-04-08T14:59:55.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Confidential",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0"
] | text-classification | false | xaqren | null | xaqren/sentiment_analysis | 25 | 1 | transformers | 7,696 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- Confidential
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and must predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model description [xaqren/sentiment_analysis]
This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis; it is not intended for
further downstream fine-tuning on other tasks. The model was trained on a confidential dataset for text classification.
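A minimal usage sketch with the Transformers pipeline (the label names in the output depend on the model's config and are illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="xaqren/sentiment_analysis")
print(classifier("I really enjoyed this movie!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]; label names depend on the model config
```
|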
nielsr/sidewalk-semantic-demo | 2ef7e0a35b87979f4e72ca1bf8e46f5410edbeb9 | 2022-04-06T15:53:42.000Z | [
"pytorch",
"tensorboard",
"segformer",
"dataset:segments/sidewalk-semantic",
"transformers",
"vision",
"generated_from_trainer",
"image-segmentation",
"license:apache-2.0",
"model-index"
] | image-segmentation | false | nielsr | null | nielsr/sidewalk-semantic-demo | 25 | null | transformers | 7,697 | ---
license: apache-2.0
tags:
- vision
- generated_from_trainer
- image-segmentation
datasets:
- segments/sidewalk-semantic
model-index:
- name: sidewalk-semantic-demo
results: []
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sidewalk-semantic-demo
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the [segments/sidewalk-semantic](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7591
- Mean Iou: 0.1135
- Mean Accuracy: 0.1608
- Overall Accuracy: 0.6553
- Per Category Iou: [nan, 0.38512238586129177, 0.723869670479682, 3.007496184239216e-05, 0.04329871029371091, 0.0006725029325634934, nan, 0.0, 0.0, 0.0, 0.5420712902837528, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4939727049879936, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5630706428968278, 0.2911849732223226, 0.5899473333836793, 0.0, 0.0, 1.723395088323998e-05, 0.0]
- Per Category Accuracy: [nan, 0.6995968221991989, 0.8870903675336742, 3.007496184239216e-05, 0.043772127605383085, 0.0006731284624713075, nan, 0.0, 0.0, 0.0, 0.8074880705716012, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8257698903048035, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9746918606102934, 0.3057553223999185, 0.6001142624744604, 0.0, 0.0, 1.7275073149137866e-05, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
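A minimal inference sketch (reusing the widget image URL from this card; the argmax post-processing is a generic SegFormer pattern, not taken from the original card):

```python
import requests
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

extractor = SegformerFeatureExtractor.from_pretrained("nielsr/sidewalk-semantic-demo")
model = SegformerForSemanticSegmentation.from_pretrained("nielsr/sidewalk-semantic-demo")

url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, height/4, width/4)
pred = logits.argmax(dim=1)[0]       # per-pixel class indices
```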
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 2.3589 | 1.0 | 53 | 1.9020 | 0.1014 | 0.1491 | 0.6442 | [0.0, 0.3612513514640175, 0.6751826209974531, 0.0, 0.030376890155720412, 0.0008039971158010613, nan, 2.235273737210043e-05, 0.0, 0.0, 0.5369771616036864, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4924640887729494, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5705205266526164, 0.07944837262494953, 0.5986634961452602, 0.0, 0.0, 0.00011218284533795612, 0.0] | [nan, 0.523053840654786, 0.9469253318772407, 0.0, 0.030589314463641413, 0.0008054985216698098, nan, 2.2371239534454507e-05, 0.0, 0.0, 0.8528562962514211, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7547252442297603, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9698553453075568, 0.08054302832748386, 0.6107703679316233, 0.0, 0.0, 0.00011444735961303836, 0.0] |
| 2.1214 | 2.0 | 106 | 1.7800 | 0.1158 | 0.1627 | 0.6622 | [nan, 0.3912271306195065, 0.7114203717790301, 0.0001503748092119608, 0.04491329385698775, 0.0008871978593462472, nan, 1.3975654410017748e-06, 0.0, 0.0, 0.5167420849064452, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.49676247687874375, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5965069148571663, 0.3115535309159788, 0.636016670211685, 0.0, 0.0, 0.0, 0.0] | [nan, 0.6306423988442347, 0.9198450793635351, 0.0001503748092119608, 0.045391490029595895, 0.0008886008009872551, nan, 1.3982024709034067e-06, 0.0, 0.0, 0.8587918189550764, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8103648148965297, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9600035488335386, 0.3307256120335472, 0.6505175702762634, 0.0, 0.0, 0.0, 0.0] |
| 1.9022 | 3.0 | 159 | 1.7591 | 0.1135 | 0.1608 | 0.6553 | [nan, 0.38512238586129177, 0.723869670479682, 3.007496184239216e-05, 0.04329871029371091, 0.0006725029325634934, nan, 0.0, 0.0, 0.0, 0.5420712902837528, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4939727049879936, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.5630706428968278, 0.2911849732223226, 0.5899473333836793, 0.0, 0.0, 1.723395088323998e-05, 0.0] | [nan, 0.6995968221991989, 0.8870903675336742, 3.007496184239216e-05, 0.043772127605383085, 0.0006731284624713075, nan, 0.0, 0.0, 0.0, 0.8074880705716012, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8257698903048035, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.9746918606102934, 0.3057553223999185, 0.6001142624744604, 0.0, 0.0, 1.7275073149137866e-05, 0.0] |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Helsinki-NLP/opus-mt-tc-big-et-en | 294ac8515bf556def3b8ee4a0c5927bef475b726 | 2022-06-01T12:59:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"et",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-et-en | 25 | null | transformers | 7,698 | ---
language:
- en
- et
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-et-en
results:
- task:
name: Translation est-eng
type: translation
args: est-eng
dataset:
name: flores101-devtest
type: flores_101
args: est eng devtest
metrics:
- name: BLEU
type: bleu
value: 38.6
- task:
name: Translation est-eng
type: translation
args: est-eng
dataset:
name: newsdev2018
type: newsdev2018
args: est-eng
metrics:
- name: BLEU
type: bleu
value: 33.8
- task:
name: Translation est-eng
type: translation
args: est-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: est-eng
metrics:
- name: BLEU
type: bleu
value: 59.7
- task:
name: Translation est-eng
type: translation
args: est-eng
dataset:
name: newstest2018
type: wmt-2018-news
args: est-eng
metrics:
- name: BLEU
type: bleu
value: 34.3
---
# opus-mt-tc-big-et-en
Neural machine translation model for translating from Estonian (et) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): est
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/est-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT est-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/est-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Takso ootab.",
"Kon sa elät?"
]
model_name = "pytorch-models/opus-mt-tc-big-et-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Taxi's waiting.
# Kon you elät?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-et-en")
print(pipe("Takso ootab."))
# expected output: Taxi's waiting.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/est-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/est-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| est-eng | tatoeba-test-v2021-08-07 | 0.73707 | 59.7 | 1359 | 8811 |
| est-eng | flores101-devtest | 0.64463 | 38.6 | 1012 | 24721 |
| est-eng | newsdev2018 | 0.59899 | 33.8 | 2000 | 43068 |
| est-eng | newstest2018 | 0.60708 | 34.3 | 2000 | 45405 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:54:11 EEST 2022
* port machine: LM0-400-22516.local
|
Barkavi/totto-t5-base-bleu-121K | 7b7c6dba3435dc61537b2bd1a422b65d4c517c1d | 2022-04-30T14:50:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Barkavi | null | Barkavi/totto-t5-base-bleu-121K | 25 | null | transformers | 7,699 | Entry not found |