modelId (string) | sha (string) | lastModified (string) | tags (list) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
miazhao/deberta_base_model_s3_ccnet_airbnb_dat_continue3 | 283b0a2b23e5926f615b9eb058a7a177668e9adc | 2022-06-04T14:24:23.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | miazhao | null | miazhao/deberta_base_model_s3_ccnet_airbnb_dat_continue3 | 17 | null | transformers | 9,100 | Entry not found |
facebook/levit-192 | b92b6fda489d116d82b1ae9d4e3b19620b319685 | 2022-06-01T13:20:39.000Z | [
"pytorch",
"levit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.01136",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/levit-192 | 17 | null | transformers | 9,101 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# LeViT
LeViT-192 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).
Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-192')
model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-192')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
``` |
kktoto/tiny_bb_wd | 6dbbfb6437c501ab5a7fba5d2798dc5ff4093156 | 2022-06-02T08:06:44.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | kktoto | null | kktoto/tiny_bb_wd | 17 | null | transformers | 9,102 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_bb_wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_bb_wd
This model is a fine-tuned version of [kktoto/tiny_bb_wd](https://huggingface.co/kktoto/tiny_bb_wd) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1331
- Precision: 0.6566
- Recall: 0.6502
- F1: 0.6533
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1193 | 1.0 | 5561 | 0.1398 | 0.6406 | 0.6264 | 0.6335 | 0.9501 |
| 0.1259 | 2.0 | 11122 | 0.1343 | 0.6476 | 0.6300 | 0.6387 | 0.9509 |
| 0.1283 | 3.0 | 16683 | 0.1333 | 0.6484 | 0.6367 | 0.6425 | 0.9512 |
| 0.1217 | 4.0 | 22244 | 0.1325 | 0.6524 | 0.6380 | 0.6451 | 0.9516 |
| 0.12 | 5.0 | 27805 | 0.1337 | 0.6571 | 0.6377 | 0.6472 | 0.9522 |
| 0.1187 | 6.0 | 33366 | 0.1319 | 0.6630 | 0.6297 | 0.6459 | 0.9525 |
| 0.116 | 7.0 | 38927 | 0.1318 | 0.6600 | 0.6421 | 0.6509 | 0.9525 |
| 0.1125 | 8.0 | 44488 | 0.1337 | 0.6563 | 0.6481 | 0.6522 | 0.9523 |
| 0.1118 | 9.0 | 50049 | 0.1329 | 0.6575 | 0.6477 | 0.6526 | 0.9524 |
| 0.1103 | 10.0 | 55610 | 0.1331 | 0.6566 | 0.6502 | 0.6533 | 0.9524 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RUCAIBox/mvp-question-answering | 4c2aabefbd0aa809745251eb51e6d7ddbc4efa40 | 2022-06-27T02:28:05.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"license:apache-2.0"
] | text2text-generation | false | RUCAIBox | null | RUCAIBox/mvp-question-answering | 17 | 1 | transformers | 9,103 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Answer the following question: From which country did Angola achieve independence in 1975?"
example_title: "Example1"
- text: "Answer the following question: what is ce certified [X_SEP] The CE marking is the manufacturer's declaration that the product meets the requirements of the applicable EC directives. Officially, CE is an abbreviation of Conformite Conformité, europeenne Européenne Meaning. european conformity"
example_title: "Example2"
---
# MVP-question-answering
The MVP-question-answering model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-question-answering is a prompt-based model in which MVP is further equipped with prompts pre-trained on labeled question-answering datasets. It is a variant (MVP+S) of our [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-question-answering is specially designed for question-answering tasks, such as reading comprehension (SQuAD), conversational question answering (CoQA), and closed-book question answering (Natural Questions).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-answering")
>>> inputs = tokenizer(
... "Answer the following question: From which country did Angola achieve independence in 1975?",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Portugal']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
Massinissa/Jeux2BERT | 2299c0c1231c56390358f5f52325af785982fc5e | 2022-06-22T12:45:12.000Z | [
"pytorch",
"flaubert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Massinissa | null | Massinissa/Jeux2BERT | 17 | null | transformers | 9,104 | # Jeux2BERT
Jeux2BERT is a Flaubert language model augmented by the lexico-semantic network JeuxDeMots.
This model thus tries to capture the distributional and relational properties of words, and also to discriminate between the different relation types that hold between words or syntagms.
The Web application includes three tasks: Link Prediction (Classification de triplets), Relation Prediction (Prédiction de relation), and Triple Ranking (Classement de triplets).
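The checkpoint can be loaded with the standard `transformers` auto classes; the sketch below only extracts contextual embeddings (the French input sentence is illustrative) and makes no assumption about the task-specific heads used in the Web application:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Massinissa/Jeux2BERT")
model = AutoModel.from_pretrained("Massinissa/Jeux2BERT")

# Encode a French sentence and take the last hidden states as features
inputs = tokenizer("Le chat est un animal domestique.", return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state
```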
# Web App
https://github.com/atmani-massinissa/Jeux2BERT_APP/tree/main
# Demo
https://share.streamlit.io/atmani-massinissa/jeux2bert_app/main/app.py?page=Classement+de+triplets
The Triple Ranking task (Classement de triplets) does not run smoothly on the Streamlit server because of the inference time, so it is better to run it locally rather than on the demo server. |
Yip/bert-finetuned-ner-chinese | 51b99198350d84f099ef8962306269a0bc4f8bf6 | 2022-06-09T07:01:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Yip | null | Yip/bert-finetuned-ner-chinese | 17 | null | transformers | 9,105 | Entry not found |
ghadeermobasher/Original-SciBERT-BC5CDR-Disease | 3886c9bf3c376e41df397f76d281c98fb54b1e77 | 2022-06-09T11:32:31.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-BC5CDR-Disease | 17 | null | transformers | 9,106 | Entry not found |
ghadeermobasher/Original-SciBERT-BC4CHEMD | 6b851f7c3b26b9f9a05a2748a205652a29d291da | 2022-06-09T19:35:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-BC4CHEMD | 17 | null | transformers | 9,107 | Entry not found |
ghadeermobasher/Original-SciBERT-BC5CDR-Chemical-T2 | 816382ae0281a692800e63a49a3df99fe5b5e042 | 2022-06-09T18:25:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/Original-SciBERT-BC5CDR-Chemical-T2 | 17 | null | transformers | 9,108 | Entry not found |
course5i/SEAD-L-6_H-384_A-12-mnli | 9052bac862ae36c60025a64a7c888730d603376a | 2022-06-12T22:59:51.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:mnli",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
] | text-classification | false | course5i | null | course5i/SEAD-L-6_H-384_A-12-mnli | 17 | null | transformers | 9,109 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- mnli
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Authors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
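SEAD's multi-teacher ensemble and its exact distillation objective are described in the paper; as a rough illustration of the single-teacher knowledge-distillation idea it builds on, here is a minimal PyTorch sketch (the temperature, weighting, and loss form are generic placeholders, not the paper's settings):
```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target distillation with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```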
## SEAD-L-6_H-384_A-12-mnli
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as the teacher by using the SEAD framework on the **mnli** task. For weight initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased).
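A minimal inference sketch, assuming the standard `transformers` sequence-classification API (the premise/hypothesis pair is illustrative; read the label order from the checkpoint's `config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("course5i/SEAD-L-6_H-384_A-12-mnli")
model = AutoModelForSequenceClassification.from_pretrained("course5i/SEAD-L-6_H-384_A-12-mnli")

inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```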
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import torch
# TrainingArguments object saved by the Trainer
hyperparameters = torch.load("training_args.bin")
```
### Evaluation results
| eval_m-accuracy | eval_m-runtime | eval_m-samples_per_second | eval_m-steps_per_second | eval_m-loss | eval_m-samples | eval_mm-accuracy | eval_mm-runtime | eval_mm-samples_per_second | eval_mm-steps_per_second | eval_mm-loss | eval_mm-samples |
|:---------------:|:--------------:|:-------------------------:|:-----------------------:|:-----------:|:--------------:|:----------------:|:---------------:|:--------------------------:|:------------------------:|:------------:|:---------------:|
| 0.8495 | 6.5443 | 1499.776 | 46.911 | 0.4366 | 9815 | 0.8508 | 5.6975 | 1725.678 | 54.059 | 0.4252 | 9832 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
EddieChen372/opt-350m-finetuned-jest | 3dc4755045e4542f160b047d993af4c98430199b | 2022-06-17T15:42:01.000Z | [
"pytorch",
"opt",
"text-generation",
"transformers"
] | text-generation | false | EddieChen372 | null | EddieChen372/opt-350m-finetuned-jest | 17 | null | transformers | 9,110 | Entry not found |
ghadeermobasher/CRAFT-Original-SciBERT-384 | b27b834167027381681297aca957e9e02d89c14a | 2022-06-13T23:01:55.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/CRAFT-Original-SciBERT-384 | 17 | null | transformers | 9,111 | Entry not found |
armandnlp/distilbert-base-uncased-finetuned-emotion | a8ceec6943d0eecbb83dcae6d184416a1d27147d | 2022-07-13T15:42:39.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | armandnlp | null | armandnlp/distilbert-base-uncased-finetuned-emotion | 17 | null | transformers | 9,112 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9273822408882375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9275
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a minimal `TrainingArguments` sketch reproducing them follows the list:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
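A sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is a placeholder; dataset preparation and the `Trainer` call are omitted, and the Adam betas/epsilon and linear scheduler listed above are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
)
```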
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8643 | 1.0 | 250 | 0.3324 | 0.9065 | 0.9025 |
| 0.2589 | 2.0 | 500 | 0.2237 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mikegarts/distilgpt2-erichmariaremarque | e7f46cf69554ef4611d8198ba906ca96fe103958 | 2022-06-23T11:58:00.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | mikegarts | null | mikegarts/distilgpt2-erichmariaremarque | 17 | null | transformers | 9,113 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-erichmariaremarque
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-erichmariaremarque
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nvidia/stt_en_citrinet_1024_gamma_0_25 | 8248f8cf6658cc888da478950f885617469935cb | 2022-06-27T23:10:16.000Z | [
"nemo",
"en",
"dataset:librispeech_asr",
"dataset:fisher_corpus",
"dataset:Switchboard-1",
"dataset:WSJ-0",
"dataset:WSJ-1",
"dataset:National Singapore Corpus Part 1",
"dataset:National Singapore Corpus Part 6",
"arxiv:2104.01721",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Citrinet",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"Riva",
"license:cc-by-4.0",
"model-index"
] | automatic-speech-recognition | false | nvidia | null | nvidia/stt_en_citrinet_1024_gamma_0_25 | 17 | 1 | nemo | 9,114 | ---
language:
- en
library_name: nemo
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National Singapore Corpus Part 1
- National Singapore Corpus Part 6
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Citrinet
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
- Riva
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: stt_en_citrinet_1024_gamma_0_25
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.4
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.6
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Wall Street Journal 92
type: wsj_0
args:
language: en
metrics:
- name: Test WER
type: wer
value: 2.5
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Wall Street Journal 93
type: wsj_1
args:
language: en
metrics:
- name: Test WER
type: wer
value: 4.0
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: National Singapore Corpus
type: nsc_part_1
args:
language: en
metrics:
- name: Test WER
type: wer
value: 6.2
---
# NVIDIA Streaming Citrinet 1024 (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
This model transcribes speech into the lowercase English alphabet, including spaces and apostrophes, and is trained on several thousand hours of English speech data.
It is a non-autoregressive "large" variant of Streaming Citrinet, with around 140 million parameters.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-ctc) for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## Usage
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/stt_en_citrinet_1024_gamma_0_25")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_en_citrinet_1024_gamma_0_25" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
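If your recordings are not already 16 kHz mono WAV, a minimal resampling sketch (assuming `librosa` and `soundfile` are installed; the file names are placeholders):
```python
import librosa
import soundfile as sf

# Load any common audio format as 16 kHz mono, then write it back out as WAV
audio, sr = librosa.load("recording.mp3", sr=16000, mono=True)
sf.write("recording_16k.wav", audio, sr)
```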
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
The Streaming Citrinet-1024 model is a non-autoregressive, streaming variant of the Citrinet model [1] for Automatic Speech Recognition, which uses CTC loss/decoding instead of a Transducer. You may find more information on this model here: [Citrinet Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#citrinet).
## Training
The NeMo toolkit [3] was used to train the model for several hundred epochs. This model was trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_ctc/speech_to_text_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/citrinet/citrinet_1024.yaml).
The tokenizer for this model was built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English speech:
- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
Note: older versions of the model may have been trained on a smaller set of datasets.
## Performance
The list of the available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 |Train Dataset |
|---------|-----------------------|-----------------|---------------|---------------|------------|-----------|-----|---------|
| 1.0.0 | SentencePiece Unigram | 1024 | 7.6 | 3.4 | 2.5 | 4.0 | 6.2 | NeMo ASRSET 1.0 |
While deploying with [NVIDIA Riva](https://developer.nvidia.com/riva), you can combine this model with external language models to further improve WER. The WER (%) of the latest model with different language modeling techniques is reported in the following table.
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech that includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition](https://arxiv.org/abs/2104.01721)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) |
KoichiYasuoka/deberta-base-japanese-wikipedia | e6917abca54a16ae4173b0869137384b298ba633 | 2022-07-23T14:43:45.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"wikipedia",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-japanese-wikipedia | 17 | null | transformers | 9,115 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
- "wikipedia"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# deberta-base-japanese-wikipedia
## Model Description
This is a DeBERTa(V2) model pre-trained on Japanese Wikipedia and 青空文庫 texts. You can fine-tune `deberta-base-japanese-wikipedia` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-wikipedia-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia")
```
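A minimal fill-mask sketch using the widget example above, assuming the standard `transformers` pipeline API:
```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/deberta-base-japanese-wikipedia")
print(fill_mask("日本に着いたら[MASK]を訪ねなさい。"))
```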
## Reference
安岡孝一 (Koichi Yasuoka): [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409) [in Japanese], 東洋学へのコンピュータ利用, 35th research seminar (July 2022), pp. 29-43.
|
cambridgeltl/mle_wikitext103 | 2359c00354df30e6290722e8746401bf1b957375 | 2022-06-26T13:47:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/mle_wikitext103 | 17 | null | transformers | 9,116 | Entry not found |
sohomghosh/LIPI_FinSim3_Hypernym | 8b77f2907c442cf7f29b374d00c7762ddbbde6d6 | 2022-06-28T06:54:42.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | sohomghosh | null | sohomghosh/LIPI_FinSim3_Hypernym | 17 | null | transformers | 9,117 | ---
license: mit
---
Note: this model may not perfectly replicate the numbers reported in the paper (Chopra and Ghosh, 2021), since, unlike the original, it was trained with smaller batch sizes for fewer epochs.
The source code for training will soon be made available at https://github.com/sohomghosh/FinSim_Financial_Hypernym_detection.
```bibtex
@inproceedings{chopra-ghosh-2021-term,
title = "Term Expansion and {F}in{BERT} fine-tuning for Hypernym and Synonym Ranking of Financial Terms",
author = "Chopra, Ankush and
Ghosh, Sohom",
booktitle = "Proceedings of the Third Workshop on Financial Technology and Natural Language Processing (FinNLP@IJCAI 2021)",
month = "Aug ",
year = "2021",
address = "Online",
publisher = "-",
url = "https://aclanthology.org/2021.finnlp-1.8",
pages = "46--51",
}
```
Use the following code to import this in Transformers:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sohomghosh/LIPI_FinSim3_Hypernym")
model = AutoModel.from_pretrained("sohomghosh/LIPI_FinSim3_Hypernym")
#Using SentenceTransformers
from sentence_transformers import SentenceTransformer
model_finlipi = SentenceTransformer('sohomghosh/LIPI_FinSim3_Hypernym')
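
# A hedged sketch of hypernym ranking via cosine similarity; the term and the
# candidate hypernyms below are illustrative placeholders, not FinSim-3 data.
from sklearn.metrics.pairwise import cosine_similarity
term_embedding = model_finlipi.encode(["debenture"])
hypernym_embeddings = model_finlipi.encode(["Bonds", "Equity Index", "Credit Index"])
print(cosine_similarity(term_embedding, hypernym_embeddings))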
``` |
Chemsseddine/bert2gpt2_med_v4 | 767e8ea8e1d087a3295fd61700bb9843b3c11b4e | 2022-06-30T19:49:14.000Z | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Chemsseddine | null | Chemsseddine/bert2gpt2_med_v4 | 17 | null | transformers | 9,118 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert2gpt2_med_v4
results: []
---
<img src="https://huggingface.co/Chemsseddine/bert2gpt2_med_ml_orange_summ-finetuned_med_sum_new-finetuned_med_sum_new/resolve/main/logobert2gpt2.png" alt="bert2gpt2_med logo" width="200"/>
# bert2gpt2_med_v4
This model is a fine-tuned version of [Chemsseddine/bert2gpt2_med_v3](https://huggingface.co/Chemsseddine/bert2gpt2_med_v3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4780
- Rouge1: 36.7502
- Rouge2: 18.5992
- Rougel: 36.2566
- Rougelsum: 36.161
- Gen Len: 22.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 169 | 1.4796 | 33.9893 | 16.2462 | 33.5685 | 33.4738 | 22.42 |
| No log | 2.0 | 338 | 1.4404 | 34.0811 | 16.219 | 34.0206 | 33.9139 | 22.76 |
| 1.0815 | 3.0 | 507 | 1.4078 | 35.2755 | 18.2266 | 34.9186 | 34.9052 | 22.63 |
| 1.0815 | 4.0 | 676 | 1.4207 | 34.0146 | 17.4167 | 33.9904 | 33.9735 | 22.92 |
| 1.0815 | 5.0 | 845 | 1.4285 | 35.2093 | 17.3269 | 35.1023 | 35.222 | 22.75 |
| 0.4699 | 6.0 | 1014 | 1.4607 | 34.5503 | 16.9067 | 34.6404 | 34.5957 | 22.8 |
| 0.4699 | 7.0 | 1183 | 1.4469 | 35.0539 | 17.0677 | 34.7607 | 34.8734 | 22.73 |
| 0.4699 | 8.0 | 1352 | 1.4632 | 35.2308 | 17.9663 | 35.1657 | 35.1012 | 22.9 |
| 0.2522 | 9.0 | 1521 | 1.4734 | 35.5699 | 18.53 | 35.4927 | 35.3747 | 22.84 |
| 0.2522 | 10.0 | 1690 | 1.4780 | 36.7502 | 18.5992 | 36.2566 | 36.161 | 22.96 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dwing/distilbert-base-uncased-finetuned-emotion | d242cc02714f34d36b2e7394bfab24e2cd3c1933 | 2022-07-01T13:38:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dwing | null | dwing/distilbert-base-uncased-finetuned-emotion | 17 | null | transformers | 9,119 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9335
- name: F1
type: f1
value: 0.9336729469235073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1616
- Accuracy: 0.9335
- F1: 0.9337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1003 | 1.0 | 250 | 0.1854 | 0.931 | 0.9311 |
| 0.0891 | 2.0 | 500 | 0.1616 | 0.9335 | 0.9337 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nvidia/tts_en_fastpitch | ae570c549ccd53d2f4ac72c472517c242e99e11f | 2022-06-29T21:23:52.000Z | [
"nemo",
"en",
"dataset:ljspeech",
"arxiv:2006.06873",
"arxiv:2108.10447",
"text-to-speech",
"speech",
"audio",
"Transformer",
"pytorch",
"NeMo",
"Riva",
"license:cc-by-4.0"
] | text-to-speech | false | nvidia | null | nvidia/tts_en_fastpitch | 17 | 3 | nemo | 9,120 | ---
language:
- en
library_name: nemo
datasets:
- ljspeech
thumbnail: null
tags:
- text-to-speech
- speech
- audio
- Transformer
- pytorch
- NeMo
- Riva
license: cc-by-4.0
---
# NVIDIA FastPitch (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
FastPitch [1] is a fully-parallel transformer architecture with prosody control over pitch and individual phoneme duration. Additionally, it uses an unsupervised speech-text aligner [2]. See the [model architecture](#model-architecture) section for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## Usage
The model is available for use in the NeMo toolkit [3] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
Note: This model generates only spectrograms and a vocoder is needed to convert the spectrograms to waveforms.
In this example HiFiGAN is used.
```python
# Load FastPitch
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("nvidia/tts_en_fastpitch")
# Load vocoder
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="nvidia/tts_hifigan")
```
### Generate audio
```python
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)
```
### Save the generated audio file
```python
# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').numpy(), 22050)
```
### Input
This model accepts batches of text.
### Output
This model generates mel spectrograms.
## Model Architecture
FastPitch is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be more expressive, better match the semantics of the utterance, and ultimately be more engaging to the listener. FastPitch is based on a fully-parallel Transformer architecture, with a much higher real-time factor than Tacotron2 for the mel-spectrogram synthesis of a typical utterance. It uses an unsupervised speech-text aligner.
## Training
The NeMo toolkit [3] was used to train the models for 1000 epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/fastpitch.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/fastpitch_align_v1.05.yaml).
### Datasets
This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent.
## Performance
No performance information is available at this time.
## Limitations
This checkpoint only works well with vocoders that were trained on 22050Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [FastPitch: Parallel Text-to-speech with Pitch Prediction](https://arxiv.org/abs/2006.06873)
- [2] [One TTS Alignment To Rule Them All](https://arxiv.org/abs/2108.10447)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) |
dminiotas05/distilbert-base-uncased-finetuned-emotion | e3db9113caa03c7b7e8042d19774a16a9ecd903c | 2022-06-30T15:49:20.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dminiotas05 | null | dminiotas05/distilbert-base-uncased-finetuned-emotion | 17 | null | transformers | 9,121 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1027
- Accuracy: 0.5447
- F1: 0.4832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1848 | 1.0 | 188 | 1.1199 | 0.538 | 0.4607 |
| 1.0459 | 2.0 | 376 | 1.1027 | 0.5447 | 0.4832 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cookpad/mt5-base-indonesia-recipe-query-generation_v4 | 72515367d6ecb4986bed7d97b693e653058767ef | 2022-06-29T21:09:18.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cookpad | null | cookpad/mt5-base-indonesia-recipe-query-generation_v4 | 17 | null | transformers | 9,122 | Entry not found |
FabianWillner/bert-base-uncased-finetuned-triviaqa | a5c80af433cc43ecad5f203b15d01b34cbaf2bbe | 2022-06-30T16:21:05.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | FabianWillner | null | FabianWillner/bert-base-uncased-finetuned-triviaqa | 17 | null | transformers | 9,123 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-triviaqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-triviaqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9297 | 1.0 | 11195 | 0.9093 |
| 0.6872 | 2.0 | 22390 | 0.9252 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mmdjiji/gpt2-chinese-idioms | 7d0f3ae4ee241582a2a762dd69fcd337e0c64f2d | 2022-07-01T01:48:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:gpl-3.0"
] | text-generation | false | mmdjiji | null | mmdjiji/gpt2-chinese-idioms | 17 | null | transformers | 9,124 | ---
license: gpl-3.0
---
For details, see [github:mmdjiji/bert-chinese-idioms](https://github.com/mmdjiji/bert-chinese-idioms).
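A minimal text-generation sketch, assuming the standard `transformers` pipeline API (the prompt is an illustrative idiom prefix):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mmdjiji/gpt2-chinese-idioms")
print(generator("一心一", max_length=20))
```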
|
Abdelmageed95/opt-350m-economy-data | 9322da16d4bbf8e256c204e614ae34bc3c1d28bf | 2022-07-02T15:12:37.000Z | [
"pytorch",
"tensorboard",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-generation | false | Abdelmageed95 | null | Abdelmageed95/opt-350m-economy-data | 17 | null | transformers | 9,125 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-350m-economy-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m-economy-data
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
srcocotero/RoBERTa-es-qa | 5dfb2e5a9f76d30152fcfd8db056ccea05bff8da | 2022-07-03T12:27:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad_es",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | srcocotero | null | srcocotero/RoBERTa-es-qa | 17 | null | transformers | 9,126 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_es
model-index:
- name: RoBERTa-es-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-es-qa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad_es dataset.
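A minimal extractive question-answering sketch, assuming the standard `transformers` pipeline API (the Spanish question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="srcocotero/RoBERTa-es-qa")
result = qa(
    question="¿Dónde vivo?",
    context="Me llamo Ana y vivo en Madrid.",
)
print(result["answer"])
```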
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ajtamayoh/NLP-CIC-WFU_SocialDisNER_fine_tuned_NER_EHR_Spanish_model_Mulitlingual_BERT_v2 | ae41afea4dc25c93dde11b602c85edc206045f84 | 2022-07-04T22:11:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ajtamayoh | null | ajtamayoh/NLP-CIC-WFU_SocialDisNER_fine_tuned_NER_EHR_Spanish_model_Mulitlingual_BERT_v2 | 17 | null | transformers | 9,127 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_SocialDisNER_fine_tuned_NER_EHR_Spanish_model_Mulitlingual_BERT_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_SocialDisNER_fine_tuned_NER_EHR_Spanish_model_Mulitlingual_BERT_v2
This model is a fine-tuned version of [ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT](https://huggingface.co/ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1483
- Precision: 0.8699
- Recall: 0.8722
- F1: 0.8711
- Accuracy: 0.9771
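A token-classification sketch for this checkpoint, assuming the standard `transformers` pipeline API (the Spanish example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ajtamayoh/NLP-CIC-WFU_SocialDisNER_fine_tuned_NER_EHR_Spanish_model_Mulitlingual_BERT_v2",
    aggregation_strategy="simple",
)
print(ner("Mi abuela fue diagnosticada con diabetes tipo 2."))
```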
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 467 | 0.0851 | 0.8415 | 0.8209 | 0.8310 | 0.9720 |
| 0.1011 | 2.0 | 934 | 0.1034 | 0.8681 | 0.8464 | 0.8571 | 0.9744 |
| 0.0537 | 3.0 | 1401 | 0.1094 | 0.8527 | 0.8608 | 0.8568 | 0.9753 |
| 0.0335 | 4.0 | 1868 | 0.1239 | 0.8617 | 0.8603 | 0.8610 | 0.9751 |
| 0.0185 | 5.0 | 2335 | 0.1192 | 0.8689 | 0.8627 | 0.8658 | 0.9756 |
| 0.0112 | 6.0 | 2802 | 0.1426 | 0.8672 | 0.8663 | 0.8667 | 0.9765 |
| 0.0067 | 7.0 | 3269 | 0.1483 | 0.8699 | 0.8722 | 0.8711 | 0.9771 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Bismi/t5_squad | 32877bb4367a0224e84f1978e12bd87c0ccd6016 | 2022-07-05T05:11:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bismi | null | Bismi/t5_squad | 17 | null | transformers | 9,128 | Entry not found |
shubhamitra/distilbert-base-uncased-finetuned-toxic-classification | 16d08958bda0f80f0111461fe8b47c778369ac92 | 2022-07-05T05:57:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | shubhamitra | null | shubhamitra/distilbert-base-uncased-finetuned-toxic-classification | 17 | null | transformers | 9,129 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-toxic-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-toxic-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 123
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 498 | 0.0419 | 0.7754 | 0.8736 | 0.9235 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
leminhds/distilbert-base-uncased-finetuned-emotion | c74b91253e5acdf472e4bf39f1f95720e696c8d9 | 2022-07-26T20:50:41.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | leminhds | null | leminhds/distilbert-base-uncased-finetuned-emotion | 17 | null | transformers | 9,130 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1677
- eval_accuracy: 0.924
- eval_f1: 0.9238
- eval_runtime: 2.5188
- eval_samples_per_second: 794.026
- eval_steps_per_second: 12.704
- epoch: 1.0
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
upsalite/xlm-roberta-base-finetuned-emotion-2-labels | 9b0e1989f0cff43dd8628fc538577fd7d217f143 | 2022-07-13T09:56:05.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | upsalite | null | upsalite/xlm-roberta-base-finetuned-emotion-2-labels | 17 | null | transformers | 9,131 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-emotion-2-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-emotion-2-labels
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1200
- Accuracy: 0.835
- F1: 0.8335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6973 | 1.0 | 25 | 0.6917 | 0.5 | 0.3333 |
| 0.6626 | 2.0 | 50 | 0.5690 | 0.745 | 0.7431 |
| 0.5392 | 3.0 | 75 | 0.4598 | 0.76 | 0.7591 |
| 0.4253 | 4.0 | 100 | 0.4313 | 0.8 | 0.7993 |
| 0.2973 | 5.0 | 125 | 0.5872 | 0.795 | 0.7906 |
| 0.2327 | 6.0 | 150 | 0.4951 | 0.805 | 0.8049 |
| 0.173 | 7.0 | 175 | 0.6095 | 0.815 | 0.8142 |
| 0.1159 | 8.0 | 200 | 0.6523 | 0.825 | 0.8246 |
| 0.0791 | 9.0 | 225 | 0.6651 | 0.825 | 0.8243 |
| 0.0557 | 10.0 | 250 | 0.8242 | 0.83 | 0.8286 |
| 0.0643 | 11.0 | 275 | 0.6710 | 0.825 | 0.8243 |
| 0.0507 | 12.0 | 300 | 0.7729 | 0.83 | 0.8294 |
| 0.0239 | 13.0 | 325 | 0.8618 | 0.83 | 0.8283 |
| 0.0107 | 14.0 | 350 | 0.9683 | 0.835 | 0.8335 |
| 0.0233 | 15.0 | 375 | 1.0850 | 0.825 | 0.8227 |
| 0.0134 | 16.0 | 400 | 0.9801 | 0.835 | 0.8343 |
| 0.0122 | 17.0 | 425 | 1.0427 | 0.845 | 0.8439 |
| 0.0046 | 18.0 | 450 | 1.0867 | 0.84 | 0.8387 |
| 0.0038 | 19.0 | 475 | 1.0950 | 0.83 | 0.8289 |
| 0.002 | 20.0 | 500 | 1.1200 | 0.835 | 0.8335 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
michauhl/distilbert-base-uncased-finetuned-emotion | 060607007290789050cb1047e6d0b707de1e28ad | 2022-07-13T12:57:33.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | michauhl | null | michauhl/distilbert-base-uncased-finetuned-emotion | 17 | null | transformers | 9,132 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9405
- name: F1
type: f1
value: 0.9404976918144629
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1891
- Accuracy: 0.9405
- F1: 0.9405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1344 | 1.0 | 1000 | 0.1760 | 0.933 | 0.9331 |
| 0.0823 | 2.0 | 2000 | 0.1891 | 0.9405 | 0.9405 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0.post202
- Datasets 2.3.2
- Tokenizers 0.11.0
|
SkolkovoInstitute/GenChal_2022_nigula | abb178538d7faf355bedb1fc69bddabb1ce5bbdb | 2022-07-11T11:22:22.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"transformers",
"feedback comment generation for writing learning",
"autotrain_compatible"
] | text2text-generation | false | SkolkovoInstitute | null | SkolkovoInstitute/GenChal_2022_nigula | 17 | null | transformers | 9,133 | ---
language:
- en
tags:
- feedback comment generation for writing learning
licenses:
- cc-by-nc-sa
---
## Model overview
This model was trained for the [GenChal 2022: Feedback Comment Generation for Writing Learning](https://fcg.sharedtask.org/) shared task.
In this task, the model receives a string containing a text with an error together with the exact span of that error, and should return a natural-language comment explaining the nature of the error.
## Model training details
#### Data
The data was provided in the following way
```
input sentence [\t] offset range [\t] feedback comment
```
Here are some examples
```
The smoke flow my face . 10:17 When the <verb> <<flow>> is used as an <intransitive verb> to express ''to move in a stream'', a <preposition> needs to be placed to indicate the direction. 'To' and 'towards' are <prepositions> that indicate direction.
I want to stop smoking during driving bicycle . 23:29 A <gerund> does not normally follow the <preposition> <<during>>. Think of an expression using the <conjunction> 'while' instead of a <preposition>.
```
Grammar terms are highlighted with '< ... >' marks and word examples with '<< ... >>'.
#### Data preprocessing
We lowercased the text, split punctuation (including the task-specific marks << >>) into separate tokens, and explicitly pointed out the error in the original text using << >>.
```
the smoke < < flow > > < < my > > face . 10:17 When the < verb > < < flow > > is used as an < intransitive verb > to express '' to move in a stream '', a < preposition > needs to be placed to indicate the direction. ' to ' and ' towards ' are < prepositions > that indicate direction .
i want to stop smoking < < during > > driving bicycle . 23:29 a < gerund > does not normally follow the < preposition > < < during > > . think of an expression using the < conjunction > ' while ' instead of a < preposition > .
```
#### Data augmentation
The main feature of our training pipeline was data augmentation. The idea of the augmentation is as follows: we cut the existing text with an error after the last word that was syntactically connected to the words inside the error span (syntactic dependencies were parsed automatically with spaCy), and this truncated version of the text was used as a prompt for a language model (we used [GPT-Neo 1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B)).
Using both the initial and the augmented data, we fine-tuned [t5-large](https://huggingface.co/t5-large).
## How to use
```python
import re

from transformers import T5ForConditionalGeneration, AutoTokenizer

text_with_error = 'I want to stop smoking during driving bicycle .'
error_span = '23:29'

# Mark the error span with the task-specific < < ... > > tokens,
# then normalize whitespace and lowercase, as in the training preprocessing.
off1, off2 = list(map(int, error_span.split(":")))
text_with_error_pointed = (
    text_with_error[:off1]
    + "< < "
    + re.sub(r"\s+", " > > < < ", text_with_error[off1:off2].strip())
    + " > > "
    + text_with_error[off2:]
)
text_with_error_pointed = re.sub(r"\s+", " ", text_with_error_pointed.strip()).lower()

tokenizer = AutoTokenizer.from_pretrained("SkolkovoInstitute/GenChal_2022_nigula")
# Requires a GPU; drop .cuda() to run on CPU.
model = T5ForConditionalGeneration.from_pretrained("SkolkovoInstitute/GenChal_2022_nigula").cuda()
model.eval()

def paraphrase(text, model, temperature=1.0, beams=3):
    # Accept either a single string or a list of strings.
    texts = [text] if isinstance(text, str) else text
    inputs = tokenizer(texts, return_tensors='pt', padding=True)['input_ids'].to(model.device)
    result = model.generate(
        inputs,
        do_sample=False,
        temperature=temperature,
        repetition_penalty=1.1,
        max_length=int(inputs.shape[1] * 3),
        num_beams=beams,
    )
    texts = [tokenizer.decode(r, skip_special_tokens=True) for r in result]
    if isinstance(text, str):
        return texts[0]
    return texts
paraphrase([text_with_error_pointed], model)
# expected output: ["a < gerund > does not normally follow the < preposition > < < during > > . think of an expression using the < conjunction > ' while ' instead of a < preposition > ."]
```
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png |
kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner | 13e9c2a21b0d892309d5187cfc4622c391a7795b | 2022-07-11T17:29:06.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:toydata",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | kinanmartin | null | kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner | 17 | null | transformers | 9,134 | ---
tags:
- generated_from_trainer
datasets:
- toydata
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large-ner-hrl-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: toydata
type: toydata
args: SDN
metrics:
- name: Precision
type: precision
value: 0.9132452695465905
- name: Recall
type: recall
value: 0.9205854126679462
- name: F1
type: f1
value: 0.9169006511739053
- name: Accuracy
type: accuracy
value: 0.9784804945824268
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-ner-hrl-finetuned-ner
This model is a fine-tuned version of [Davlan/xlm-roberta-large-ner-hrl](https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl) on the toydata dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0944
- Precision: 0.9132
- Recall: 0.9206
- F1: 0.9169
- Accuracy: 0.9785
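As a usage sketch (ours, not part of the original card; the example sentence is invented and is not from the toydata dataset), the fine-tuned tagger can be run with the Transformers token-classification pipeline:
```python
from transformers import pipeline

# Group word pieces into whole entity spans with aggregation_strategy="simple".
ner = pipeline(
    "token-classification",
    model="kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Angela Merkel visited Khartoum last week."))
```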
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 408 | 0.0900 | 0.8508 | 0.9303 | 0.8888 | 0.9719 |
| 0.1087 | 2.0 | 816 | 0.0827 | 0.9043 | 0.9230 | 0.9136 | 0.9783 |
| 0.0503 | 3.0 | 1224 | 0.0944 | 0.9132 | 0.9206 | 0.9169 | 0.9785 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nloc2578/new_ans1 | 72067b19f52d42ca8a919243d1c39a4158d4fb33 | 2022-07-11T21:36:56.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | nloc2578 | null | nloc2578/new_ans1 | 17 | null | transformers | 9,135 | ---
tags:
- generated_from_trainer
model-index:
- name: new_ans1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_ans1
This model is a fine-tuned version of [nloc2578/new1](https://huggingface.co/nloc2578/new1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
murtaza-jafri/DialoGPT-medium-Joshua | 36d8ecd55e2a9bd376db423e6f2298da34b644a6 | 2022-07-12T10:52:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | murtaza-jafri | null | murtaza-jafri/DialoGPT-medium-Joshua | 17 | null | transformers | 9,136 | Entry not found |
Evelyn18/distilbert-base-uncased-becasv3-1 | 745312889cac361ddd706beeeb93d3c01a6986fa | 2022-07-13T18:13:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv3",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-becasv3-1 | 17 | null | transformers | 9,137 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv3
model-index:
- name: distilbert-base-uncased-becasv3-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv3-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 5.1063 |
| No log | 2.0 | 16 | 4.4615 |
| No log | 3.0 | 24 | 3.9351 |
| No log | 4.0 | 32 | 3.5490 |
| No log | 5.0 | 40 | 3.3299 |
| No log | 6.0 | 48 | 3.2148 |
| No log | 7.0 | 56 | 3.1292 |
| No log | 8.0 | 64 | 3.1086 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
abdulmatinomotoso/headline_generator | f1befd008c4dd15630d779fab7bcee04c6350c9d | 2022-07-13T10:57:59.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | abdulmatinomotoso | null | abdulmatinomotoso/headline_generator | 17 | null | transformers | 9,138 | ---
tags:
- generated_from_trainer
model-index:
- name: headline_generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# headline_generator
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3298
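As a usage sketch (the article text below is a placeholder of ours, not from the training data), headlines can be generated through the generic summarization pipeline, since this is a fine-tuned Pegasus checkpoint:
```python
from transformers import pipeline

headline_generator = pipeline("summarization", model="abdulmatinomotoso/headline_generator")

article = (
    "The city council approved a new budget on Tuesday that increases funding "
    "for public transport and road maintenance over the next five years."
)
# max_length keeps the generated text short, as expected for a headline.
print(headline_generator(article, max_length=20))
```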
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5368 | 0.57 | 500 | 0.3298 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jhonparra18/bert-base-cased-fine-tuning-cvs-hf-studio-name | a9c8183b916c7d9fe6b37eb42365705d6bd2f8ab | 2022-07-16T02:44:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jhonparra18 | null | jhonparra18/bert-base-cased-fine-tuning-cvs-hf-studio-name | 17 | null | transformers | 9,139 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-base-cased-fine-tuning-cvs-hf-studio-name
results: []
widget:
- text: "Egresado de la carrera Ingeniería en Computación Conocimientos de lenguajes HTML, CSS, Javascript y MySQL. Experiencia trabajando en ámbitos de redes de pequeña y mediana escala. Inglés Hablado nivel básico, escrito nivel intermedio.HTML, CSS y JavaScript. Realidad aumentada. Lenguaje R. HTML5, JavaScript y Nodejs"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-fine-tuning-cvs-hf-studio-name
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2601
- Accuracy: 0.6500
- F1: 0.6500
- Precision: 0.6500
- Recall: 0.6500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.4407 | 0.24 | 500 | 1.5664 | 0.5528 | 0.5528 | 0.5528 | 0.5528 |
| 1.3055 | 0.49 | 1000 | 1.4891 | 0.5745 | 0.5745 | 0.5745 | 0.5745 |
| 1.373 | 0.73 | 1500 | 1.3634 | 0.6180 | 0.6180 | 0.6180 | 0.6180 |
| 1.3621 | 0.98 | 2000 | 1.3768 | 0.6139 | 0.6139 | 0.6139 | 0.6139 |
| 1.1677 | 1.22 | 2500 | 1.3330 | 0.6395 | 0.6395 | 0.6395 | 0.6395 |
| 1.0826 | 1.47 | 3000 | 1.4003 | 0.6146 | 0.6146 | 0.6146 | 0.6146 |
| 1.0968 | 1.71 | 3500 | 1.2601 | 0.6500 | 0.6500 | 0.6500 | 0.6500 |
| 1.0896 | 1.96 | 4000 | 1.2826 | 0.6564 | 0.6564 | 0.6564 | 0.6564 |
| 0.8572 | 2.2 | 4500 | 1.3254 | 0.6569 | 0.6569 | 0.6569 | 0.6569 |
| 0.822 | 2.44 | 5000 | 1.3024 | 0.6571 | 0.6571 | 0.6571 | 0.6571 |
| 0.8022 | 2.69 | 5500 | 1.2971 | 0.6608 | 0.6608 | 0.6608 | 0.6608 |
| 0.834 | 2.93 | 6000 | 1.2900 | 0.6630 | 0.6630 | 0.6630 | 0.6630 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.8.2+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
juliensimon/distilbert-imdb-mlflow | e8b279db024af702e76721345effa060378ba71d | 2022-07-15T13:04:46.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | juliensimon | null | juliensimon/distilbert-imdb-mlflow | 17 | null | transformers | 9,140 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-imdb-mlflow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb-mlflow
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the imdb dataset.
MLflow logs are included. To visualize them, clone the repo and run:
```
mlflow ui
```
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jhonparra18/xlm-roberta-base-cv-studio_name-pooler | 484d34a6c7ebde2251c2406030f0345562244744 | 2022-07-26T19:15:36.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | jhonparra18 | null | jhonparra18/xlm-roberta-base-cv-studio_name-pooler | 17 | null | transformers | 9,141 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-cv-studio_name-pooler
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-cv-studio_name-pooler
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1268
- Accuracy: 0.6758
- F1 Micro: 0.6758
- F1 Macro: 0.3836
- Precision Micro: 0.6758
- Recall Micro: 0.6758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | F1 Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:---------------:|:------------:|
| 2.0159 | 1.59 | 1000 | 1.6867 | 0.4801 | 0.4801 | 0.1389 | 0.4801 | 0.4801 |
| 1.5921 | 3.18 | 2000 | 1.4606 | 0.5342 | 0.5342 | 0.2238 | 0.5342 | 0.5342 |
| 1.447 | 4.77 | 3000 | 1.3170 | 0.6114 | 0.6114 | 0.3021 | 0.6114 | 0.6114 |
| 1.3441 | 6.36 | 4000 | 1.2777 | 0.6181 | 0.6181 | 0.3264 | 0.6181 | 0.6181 |
| 1.2847 | 7.95 | 5000 | 1.1902 | 0.6555 | 0.6555 | 0.3495 | 0.6555 | 0.6555 |
| 1.2296 | 9.54 | 6000 | 1.1867 | 0.6635 | 0.6635 | 0.3608 | 0.6635 | 0.6635 |
| 1.1965 | 11.13 | 7000 | 1.1426 | 0.6683 | 0.6683 | 0.3728 | 0.6683 | 0.6683 |
| 1.1547 | 12.72 | 8000 | 1.1419 | 0.6687 | 0.6687 | 0.3743 | 0.6687 | 0.6687 |
| 1.1677 | 14.31 | 9000 | 1.1268 | 0.6758 | 0.6758 | 0.3836 | 0.6758 | 0.6758 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.1+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
annahaz/distilbert-base-multilingual-cased-finetuned-misogyny-sexism-out-of-sample-test-opt-EN | 23e08ae47a484b4c6a64df913e9eedf62f5a4952 | 2022-07-26T14:17:40.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | annahaz | null | annahaz/distilbert-base-multilingual-cased-finetuned-misogyny-sexism-out-of-sample-test-opt-EN | 17 | null | transformers | 9,142 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-multilingual-cased-finetuned-misogyny-sexism-out-of-sample-test-opt-EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-misogyny-sexism-out-of-sample-test-opt-EN
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0225
- Accuracy: 0.8312
- F1: 0.3169
- Precision: 0.2393
- Recall: 0.4689
- Mae: 0.1688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3654 | 1.0 | 2395 | 0.3117 | 0.8590 | 0.3599 | 0.2898 | 0.4747 | 0.1410 |
| 0.3289 | 2.0 | 4790 | 0.4348 | 0.7582 | 0.3375 | 0.2188 | 0.7374 | 0.2418 |
| 0.2615 | 3.0 | 7185 | 0.3927 | 0.8425 | 0.3448 | 0.2642 | 0.4961 | 0.1575 |
| 0.1982 | 4.0 | 9580 | 0.8717 | 0.7641 | 0.3170 | 0.2091 | 0.6556 | 0.2359 |
| 0.176 | 5.0 | 11975 | 0.5349 | 0.8573 | 0.3328 | 0.2731 | 0.4261 | 0.1427 |
| 0.1422 | 6.0 | 14370 | 0.8742 | 0.8130 | 0.3062 | 0.2218 | 0.4942 | 0.1870 |
| 0.1161 | 7.0 | 16765 | 0.7036 | 0.8700 | 0.2870 | 0.2648 | 0.3132 | 0.1300 |
| 0.1218 | 8.0 | 19160 | 0.7793 | 0.8512 | 0.3113 | 0.2537 | 0.4027 | 0.1488 |
| 0.0991 | 9.0 | 21555 | 0.8698 | 0.8518 | 0.3153 | 0.2567 | 0.4086 | 0.1482 |
| 0.092 | 10.0 | 23950 | 1.0225 | 0.8312 | 0.3169 | 0.2393 | 0.4689 | 0.1688 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
YumeAyasaki/DialoGPT-small-rubybot | 8dce32ec7182c95cc7b2470d4ba9f5aebe330f2e | 2022-07-16T05:47:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | YumeAyasaki | null | YumeAyasaki/DialoGPT-small-rubybot | 17 | null | transformers | 9,143 | ---
tags:
- conversational
---
# Nothing nothing... |
fadhilarkn/distilbert-base-uncased-finetuned-ner | 61fd0cac02cd581d88982d76eb7572a11bc40f84 | 2022-07-17T09:45:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | fadhilarkn | null | fadhilarkn/distilbert-base-uncased-finetuned-ner | 17 | null | transformers | 9,144 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9276948590381426
- name: Recall
type: recall
value: 0.9386956035350711
- name: F1
type: f1
value: 0.9331628113879005
- name: Accuracy
type: accuracy
value: 0.9842883695807584
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0574
- Precision: 0.9277
- Recall: 0.9387
- F1: 0.9332
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2384 | 1.0 | 878 | 0.0701 | 0.9130 | 0.9220 | 0.9175 | 0.9803 |
| 0.0494 | 2.0 | 1756 | 0.0593 | 0.9222 | 0.9314 | 0.9268 | 0.9829 |
| 0.0301 | 3.0 | 2634 | 0.0574 | 0.9277 | 0.9387 | 0.9332 | 0.9843 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
khosseini/bert_1875_1890 | 0b0af9f9ac16470ccbcb030a3e25ffdce54d63e2 | 2022-07-18T09:37:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | khosseini | null | khosseini/bert_1875_1890 | 17 | null | transformers | 9,145 | # Neural Language Models for Nineteenth-Century English: bert_1875_1890
## Introduction
BERT model trained on a large historical dataset of books in English, published between 1875-1890 and comprised of ~1.3 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- Github repository: https://github.com/Living-with-machines/histLM
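## Example usage (sketch)
A minimal illustration, not part of the original release; the example sentence is ours. The checkpoint can be queried through the Transformers fill-mask pipeline:
```python
from transformers import pipeline

# Load the historical-English BERT checkpoint as a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="khosseini/bert_1875_1890")

# BERT-style models use the [MASK] token; the pipeline returns the top candidate fills.
print(fill_mask("The [MASK] left the station at noon."))
```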
## License
The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
|
khosseini/bert_1890_1900 | e077ccac28833a7356a55c468fee93d2f3f454dd | 2022-07-18T09:41:11.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | khosseini | null | khosseini/bert_1890_1900 | 17 | 0 | transformers | 9,146 | # Neural Language Models for Nineteenth-Century English: bert_1890_1900
## Introduction
BERT model trained on a large historical dataset of books in English, published between 1890-1900 and comprised of ~1.1 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- Github repository: https://github.com/Living-with-machines/histLM
## License
The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
|
Evelyn18/roberta-base-spanish-squades-modelo-robertav0 | 270cfad02779c27eb491232757663e7e2e6bbab2 | 2022-07-18T16:01:20.000Z | [
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/roberta-base-spanish-squades-modelo-robertav0 | 17 | null | transformers | 9,147 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-modelo-robertav0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-modelo-robertav0
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7628
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 2.1175 |
| No log | 2.0 | 12 | 1.7427 |
| No log | 3.0 | 18 | 2.0810 |
| No log | 4.0 | 24 | 2.3820 |
| No log | 5.0 | 30 | 2.5007 |
| No log | 6.0 | 36 | 2.6782 |
| No log | 7.0 | 42 | 2.7578 |
| No log | 8.0 | 48 | 2.7703 |
| No log | 9.0 | 54 | 2.7654 |
| No log | 10.0 | 60 | 2.7628 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jordyvl/bert-base-portuguese-cased_harem-selective-lowC-CRF-first-ner | 6f93ea9c57bc55dc32cca2f40e6a5405b098a4af | 2022-07-19T15:32:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"dataset:harem",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | null | false | jordyvl | null | jordyvl/bert-base-portuguese-cased_harem-selective-lowC-CRF-first-ner | 17 | null | transformers | 9,148 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- harem
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-portuguese-cased_harem-selective-lowC-CRF-first-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased_harem-selective-lowC-CRF-first-ner
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the harem dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0687
- Precision: 0.8030
- Recall: 0.8933
- F1: 0.8457
- Accuracy: 0.9748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0646 | 1.0 | 2517 | 0.0924 | 0.7822 | 0.8876 | 0.8316 | 0.9670 |
| 0.0263 | 2.0 | 5034 | 0.0644 | 0.7598 | 0.8708 | 0.8115 | 0.9685 |
| 0.0234 | 3.0 | 7551 | 0.0687 | 0.8030 | 0.8933 | 0.8457 | 0.9748 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kabelomalapane/En-Nso_update2 | 59c5b5e6361b1ff87b0f99e2d5e8e6aeeb55cd0f | 2022-07-20T18:37:27.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/En-Nso_update2 | 17 | null | transformers | 9,149 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Nso_update2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Nso_update2
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-nso](https://huggingface.co/Helsinki-NLP/opus-mt-en-nso) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4199
- Bleu: 24.4776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 3.6661 | 1.0 | 865 | 3.0081 | 17.6871 |
| 2.7495 | 2.0 | 1730 | 2.7725 | 20.1475 |
| 2.4533 | 3.0 | 2595 | 2.6433 | 22.5433 |
| 2.3203 | 4.0 | 3460 | 2.5625 | 22.9963 |
| 2.1356 | 5.0 | 4325 | 2.5190 | 23.5696 |
| 2.0258 | 6.0 | 5190 | 2.4881 | 23.8367 |
| 1.9481 | 7.0 | 6055 | 2.4641 | 24.0611 |
| 1.8769 | 8.0 | 6920 | 2.4526 | 24.3214 |
| 1.8211 | 9.0 | 7785 | 2.4392 | 24.5300 |
| 1.7689 | 10.0 | 8650 | 2.4307 | 24.4627 |
| 1.7314 | 11.0 | 9515 | 2.4254 | 24.4936 |
| 1.7 | 12.0 | 10380 | 2.4243 | 24.4673 |
| 1.6695 | 13.0 | 11245 | 2.4202 | 24.5613 |
| 1.6562 | 14.0 | 12110 | 2.4200 | 24.4886 |
| 1.6446 | 15.0 | 12975 | 2.4199 | 24.4711 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
acho0057/sentiment_analysis_custom | f981b2566fd6b44d66be482d48a4ba8c7f2b4c93 | 2022-07-22T08:51:18.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | acho0057 | null | acho0057/sentiment_analysis_custom | 17 | null | transformers | 9,150 | |
huggingtweets/luciengreaves-pontifex | 7d3ac0d21352d63d2ec67a6f3200997833633792 | 2022-07-23T00:00:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/luciengreaves-pontifex | 17 | 1 | transformers | 9,151 | ---
language: en
thumbnail: http://www.huggingtweets.com/luciengreaves-pontifex/1658534403996/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/666311094256971779/rhb7qkCD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/507818066814590976/KNG-IkT9_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lucien Greaves & Pope Francis</div>
<div style="text-align: center; font-size: 14px;">@luciengreaves-pontifex</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lucien Greaves & Pope Francis.
| Data | Lucien Greaves | Pope Francis |
| --- | --- | --- |
| Tweets downloaded | 3197 | 3250 |
| Retweets | 536 | 0 |
| Short tweets | 379 | 103 |
| Tweets kept | 2282 | 3147 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/q0nkdf60/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @luciengreaves-pontifex's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2y98dgmx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2y98dgmx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/luciengreaves-pontifex')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Intel/bert-base-uncased-squad-int8-static | de75d12ae3543c0564565b38ec9f4163b13b7417 | 2022-07-25T02:42:58.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | Intel | null | Intel/bert-base-uncased-squad-int8-static | 17 | null | transformers | 9,152 | ---
license: apache-2.0
tags:
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- squad
metrics:
- f1
---
# INT8 BERT base uncased finetuned on Squad
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [jimypbr/bert-base-uncased-squad](https://huggingface.co/jimypbr/bert-base-uncased-squad).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304.
The linear modules **bert.encoder.layer.2.intermediate.dense**, **bert.encoder.layer.4.intermediate.dense**, **bert.encoder.layer.9.output.dense**, and **bert.encoder.layer.10.output.dense** fall back to fp32 to keep the relative accuracy loss within 1%.
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |87.3006|88.1030|
| **Model size (MB)** |139|436|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/bert-base-uncased-squad-int8-static',
)
```
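### Example question answering (sketch)
The following is only a sketch of how inference could look. It assumes the tokenizer of the original fp32 checkpoint (jimypbr/bert-base-uncased-squad) can be reused and that the quantized model returns the usual start/end logits; neither assumption comes from this card, and the question/context pair is ours.
```python
import torch
from transformers import AutoTokenizer
from neural_compressor.utils.load_huggingface import OptimizedModel

# Tokenizer from the original fp32 model; int8 weights from this repository.
tokenizer = AutoTokenizer.from_pretrained("jimypbr/bert-base-uncased-squad")
int8_model = OptimizedModel.from_pretrained("Intel/bert-base-uncased-squad-int8-static")

question = "Who wrote the novel?"
context = "The novel was written by Jane Austen and published in 1813."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = int8_model(**inputs)

# Decode the highest-scoring answer span (assumes start_logits/end_logits outputs).
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```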
|
doya/klue-sentiment-nsmc | e78d67b0dd3b4260a74806e934cc0adde05ef3ef | 2022-07-25T07:30:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | doya | null | doya/klue-sentiment-nsmc | 17 | null | transformers | 9,153 | Entry not found |
BramVanroy/robbert-v2-dutch-base-hebban-reviews | f28d32f1c459cb7879df74ef865ce98970384a24 | 2022-07-29T09:39:53.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"nl",
"dataset:BramVanroy/hebban-reviews",
"transformers",
"sentiment-analysis",
"dutch",
"text",
"license:mit",
"model-index"
] | text-classification | false | BramVanroy | null | BramVanroy/robbert-v2-dutch-base-hebban-reviews | 17 | null | transformers | 9,154 | ---
datasets:
- BramVanroy/hebban-reviews
language:
- nl
license: mit
metrics:
- accuracy
- f1
- precision
- qwk
- recall
model-index:
- name: robbert-v2-dutch-base-hebban-reviews
results:
- dataset:
config: filtered_sentiment
name: BramVanroy/hebban-reviews - filtered_sentiment - 2.0.0
revision: 2.0.0
split: test
type: BramVanroy/hebban-reviews
metrics:
- name: Test accuracy
type: accuracy
value: 0.8070512820512821
- name: Test f1
type: f1
value: 0.8144966061997005
- name: Test precision
type: precision
value: 0.8275999429062602
- name: Test qwk
type: qwk
value: 0.7336245557372719
- name: Test recall
type: recall
value: 0.8070512820512821
task:
name: sentiment analysis
type: text-classification
tags:
- sentiment-analysis
- dutch
- text
widget:
- text: Wauw, wat een leuk boek! Ik heb me er er goed mee vermaakt.
- text: Nee, deze vond ik niet goed. De auteur doet zijn best om je als lezer mee
te trekken in het verhaal maar mij overtuigt het alleszins niet.
- text: Ik vind het niet slecht maar de schrijfstijl trekt me ook niet echt aan. Het
wordt een beetje saai vanaf het vijfde hoofdstuk
---
# robbert-v2-dutch-base-hebban-reviews
# Dataset
- dataset_name: BramVanroy/hebban-reviews
- dataset_config: filtered_sentiment
- dataset_revision: 2.0.0
- labelcolumn: review_sentiment
- textcolumn: review_text_without_quotes
# Training
- optim: adamw_hf
- learning_rate: 5e-05
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- gradient_accumulation_steps: 1
- max_steps: 5001
- save_steps: 500
- metric_for_best_model: qwk
# Best checkpoint based on validation
- best_metric: 0.7412639349881154
- best_model_checkpoint: trained/hebban-reviews/robbert-v2-dutch-base/checkpoint-3500
# Test results of best checkpoint
- accuracy: 0.8070512820512821
- f1: 0.8144966061997005
- precision: 0.8275999429062602
- qwk: 0.7336245557372719
- recall: 0.8070512820512821
## Confusion matrix

## Normalized confusion matrix

# Environment
- cuda_capabilities: 8.0; 8.0
- cuda_device_count: 2
- cuda_devices: NVIDIA A100-SXM4-80GB; NVIDIA A100-SXM4-80GB
- finetuner_commit: 66294c815326c93682003119534cb72009f558c2
- platform: Linux-4.18.0-305.49.1.el8_4.x86_64-x86_64-with-glibc2.28
- python_version: 3.9.5
- toch_version: 1.10.0
- transformers_version: 4.21.0
|
huggingtweets/acrasials_art | f1d7e189e7b4631ce3788585a1666a45f46edd8b | 2022-07-26T14:30:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/acrasials_art | 17 | null | transformers | 9,155 | ---
language: en
thumbnail: http://www.huggingtweets.com/acrasials_art/1658845828038/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1459339266060918789/mjxa2TwP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Acrasial! 🫡</div>
<div style="text-align: center; font-size: 14px;">@acrasials_art</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Acrasial! 🫡.
| Data | Acrasial! 🫡 |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 1321 |
| Short tweets | 492 |
| Tweets kept | 1422 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3imbmus0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @acrasials_art's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/asit6thi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/asit6thi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/acrasials_art')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AbidHasan95/movieHunt4-ner | 65203f8a3c7701f8ca1dac9a9bfcad9cadec36e6 | 2022-07-29T09:53:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | AbidHasan95 | null | AbidHasan95/movieHunt4-ner | 17 | null | transformers | 9,156 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: movieHunt4-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movieHunt4-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.0284 | 0.9959 | 0.9959 | 0.9959 | 0.9974 |
| No log | 2.0 | 96 | 0.0060 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 144 | 0.0034 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 4.0 | 192 | 0.0025 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 5.0 | 240 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 6.0 | 288 | 0.0016 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 7.0 | 336 | 0.0014 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 8.0 | 384 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 9.0 | 432 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 10.0 | 480 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 11.0 | 528 | 0.0009 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 12.0 | 576 | 0.0009 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 13.0 | 624 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 14.0 | 672 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 15.0 | 720 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 16.0 | 768 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 17.0 | 816 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 18.0 | 864 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 19.0 | 912 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0168 | 20.0 | 960 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 21.0 | 1008 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 22.0 | 1056 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 23.0 | 1104 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 24.0 | 1152 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 25.0 | 1200 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 26.0 | 1248 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 27.0 | 1296 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 28.0 | 1344 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 29.0 | 1392 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 30.0 | 1440 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/Flowberta | de72e75266e7ef4159d6332980a3cae772942c92 | 2021-06-28T19:13:14.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/Flowberta | 16 | null | transformers | 9,157 | Entry not found |
BigSalmon/MrLincolnBerta | 3e92eea4fdc7a93fbd5cf93ee75643ffa6bd7b17 | 2021-12-24T21:54:31.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/MrLincolnBerta | 16 | null | transformers | 9,158 | Example Prompt:
```
informal english: things are better when they are open source, because they are constantly being updated to enhance experience.
Translated into the Style of Abraham Lincoln: in the open-source paradigm, code is ( ceaselessly / perpetually ) being ( reengineered / revamped / polished ), thereby ( advancing / enhancing / optimizing / <mask> ) the user experience.
```
Demo: https://huggingface.co/spaces/BigSalmon/MASK2 |
Brian-M-Collins/article-twitter-summarisation-1 | 641a3204fdc60f4eb92215514445f8a85f2443c2 | 2022-02-22T16:37:46.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:Brian-M-Collins/autonlp-data-abstract-twitter-summarisation",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | false | Brian-M-Collins | null | Brian-M-Collins/article-twitter-summarisation-1 | 16 | 1 | transformers | 9,159 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Brian-M-Collins/autonlp-data-abstract-twitter-summarisation
co2_eq_emissions: 122.0276344811897
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 588216503
- CO2 Emissions (in grams): 122.0276344811897
## Validation Metrics
- Loss: 1.2534778118133545
- Rouge1: 67.1992
- Rouge2: 58.3369
- RougeL: 64.0987
- RougeLsum: 65.2118
- Gen Len: 37.4152
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Brian-M-Collins/autonlp-abstract-twitter-summarisation-588216503
``` |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | c97c040fdfe8c39453d6425d0dbe7c675161de1e | 2021-10-18T09:58:40.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | 16 | null | transformers | 9,160 | ---
language:
- ar
license: apache-2.0
widget:
- text: 'شلونك ؟ شخبارك ؟'
---
# CAMeLBERT-DA POS-GLF Model
## Model description
**CAMeLBERT-DA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA POS-GLF model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf')
>>> text = 'شلونك ؟ شخبارك ؟'
>>> pos(text)
[{'entity': 'noun', 'score': 0.84596395, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.7230489, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.99996364, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9990874, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.99985224, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'noun', 'score': 0.9988868, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999683, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
CZWin32768/xlm-align | 6b7a0812d30b3fe2c78fc49d91aa234d969b7bc7 | 2021-07-21T07:53:29.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:2106.06381",
"transformers",
"autotrain_compatible"
] | fill-mask | false | CZWin32768 | null | CZWin32768/xlm-align | 16 | null | transformers | 9,161 | # XLM-Align
**Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment** (ACL-2021, [paper](https://arxiv.org/pdf/2106.06381.pdf), [github](https://github.com/CZWin32768/XLM-Align))
XLM-Align is a pretrained cross-lingual language model that supports 94 languages. See details in our [paper](https://arxiv.org/pdf/2106.06381.pdf).
## Example
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("CZWin32768/xlm-align")
model = AutoModel.from_pretrained("CZWin32768/xlm-align")
```
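Since XLM-Align is a masked language model, it can also be queried through the `fill-mask` pipeline — a minimal sketch with an illustrative English sentence (the model uses the XLM-R-style `<mask>` token):
```python
from transformers import pipeline

# Illustrative masked-token prediction; any of the 94 supported languages can be used.
unmasker = pipeline("fill-mask", model="CZWin32768/xlm-align")
print(unmasker("Paris is the <mask> of France."))
```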
## Evaluation Results
XTREME cross-lingual understanding tasks:
| Model | POS | NER | XQuAD | MLQA | TyDiQA | XNLI | PAWS-X | Avg |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| XLM-R_base | 75.6 | 61.8 | 71.9 / 56.4 | 65.1 / 47.2 | 55.4 / 38.3 | 75.0 | 84.9 | 66.4 |
| XLM-Align | **76.0** | **63.7** | **74.7 / 59.0** | **68.1 / 49.8** | **62.1 / 44.8** | **76.2** | **86.8** | **68.9** |
## MD5
```
b9d214025837250ede2f69c9385f812c config.json
6005db708eb4bab5b85fa3976b9db85b pytorch_model.bin
bf25eb5120ad92ef5c7d8596b5dc4046 sentencepiece.bpe.model
eedbd60a7268b9fc45981b849664f747 tokenizer.json
```
## About
Contact: chizewen\@outlook.com
BibTeX:
```
@article{xlmalign,
title={Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment},
author={Zewen Chi and Li Dong and Bo Zheng and Shaohan Huang and Xian-Ling Mao and Heyan Huang and Furu Wei},
journal={arXiv preprint arXiv:2106.06381},
year={2021}
}
``` |
DataikuNLP/paraphrase-albert-small-v2 | 94e7d607209b5e81aedd815066e084d7b1e227cd | 2021-09-01T13:30:27.000Z | [
"pytorch",
"albert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | DataikuNLP | null | DataikuNLP/paraphrase-albert-small-v2 | 16 | null | sentence-transformers | 9,162 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# DataikuNLP/paraphrase-albert-small-v2
**This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-albert-small-v2/) from sentence-transformers at the specific commit `1eb1996223dd90a4c25be2fc52f6f336419a0d52`.**
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
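Because the model targets sentence similarity, embeddings are typically compared with cosine similarity — a minimal sketch using `sentence_transformers.util` and two illustrative sentences:
```python
from sentence_transformers import SentenceTransformer, util

# Embed two sentences and score them with cosine similarity.
model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
embeddings = model.encode(["A man is eating food.", "A man is eating a piece of bread."], convert_to_tensor=True)
print(util.pytorch_cos_sim(embeddings[0], embeddings[1]))
```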
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-small-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
Davlan/xlm-roberta-base-finetuned-igbo | 0c10763e2cf85e0e393ea7a9ccf1365ff9dcc0c9 | 2021-06-06T20:13:58.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"ig",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-igbo | 16 | null | transformers | 9,163 | ---
language: ig
datasets:
---
# xlm-roberta-base-finetuned-igbo
## Model description
**xlm-roberta-base-finetuned-igbo** is an **Igbo RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Igbo language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Igbo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-igbo')
>>> unmasker("Reno Omokri na Gọọmentị <mask> enweghị ihe ha ga-eji hiwe ya bụ mmachi.")
```
#### Limitations and bias
This model is limited by its training corpus, which covers a specific span of time and a limited set of domains. It may not generalize well to all use cases in other domains.
## Training data
This model was fine-tuned on JW300 + OPUS CC-Align + [IGBO NLP Corpus](https://github.com/IgnatiusEzeani/IGBONLP) +[Igbo CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | ig_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 84.51 | 87.74
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-kinyarwanda | eb8a4a3573047b37b6be993c7b04c6374ca30fd6 | 2021-06-15T20:24:02.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"rw",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/xlm-roberta-base-finetuned-kinyarwanda | 16 | null | transformers | 9,164 | ---
language: rw
datasets:
---
# xlm-roberta-base-finetuned-kinyarwanda
## Model description
**xlm-roberta-base-finetuned-kinyarwanda** is a **Kinyarwanda RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Kinyarwanda language texts. It provides **better performance** than XLM-RoBERTa on named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Kinyarwanda corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-kinyarwanda')
>>> unmasker("Twabonye ko igihe mu <mask> hazaba hari ikirango abantu bakunze")
```
#### Limitations and bias
This model is limited by its training corpus, which covers a specific span of time and a limited set of domains. It may not generalize well to all use cases in other domains.
## Training data
This model was fine-tuned on JW300 + [KIRNEWS](https://github.com/Andrews2017/KINNEWS-and-KIRNEWS-Corpus) + [BBC Gahuza](https://www.bbc.com/gahuza)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | rw_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 73.22 | 77.76
### BibTeX entry and citation info
By David Adelani
```
```
|
EhsanAghazadeh/xlnet-large-cased-CoLA_B | a5d146f1c4c291db0758e1c287ff7242c3dff7cc | 2021-04-19T10:59:46.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/xlnet-large-cased-CoLA_B | 16 | null | transformers | 9,165 | Entry not found |
FFZG-cleopatra/bert-emoji-latvian-twitter | 42b35cea36e13ed9662840dcd9ecaa8db7595cc0 | 2021-05-18T18:33:26.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | FFZG-cleopatra | null | FFZG-cleopatra/bert-emoji-latvian-twitter | 16 | null | transformers | 9,166 | Entry not found |
FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell | 27c2d35249d3976ee0b4d15b0eda65025dd18b70 | 2022-03-23T18:28:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"nl_BE",
"nl_NL",
"robust-speech-event",
"model-index"
] | automatic-speech-recognition | false | FremyCompany | null | FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell | 16 | 1 | transformers | 9,167 | ---
language:
- nl
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- nl
- nl_BE
- nl_NL
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-nl-v1-cv8-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 3.93
- name: Test CER
type: cer
value: 1.22
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 16.35
- name: Test CER
type: cer
value: 9.64
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 15.81
---
# XLS-R-based CTC model with 5-gram language model from Open Subtitles
This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the [CGN dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/), as well as the [MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL](https://commonvoice.mozilla.org) dataset (see details below), with a large 5-gram language model trained on the Open Subtitles Dutch corpus added on top. The model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.03931
- Cer: 0.01224
> **IMPORTANT NOTE**: The `hunspell` typo fixer is **not enabled** on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the `eval.py` decoding script. For best results, please use the code in that file while using the model locally for inference.
> **IMPORTANT NOTE**: Evaluating this model requires `apt install libhunspell-dev` and a pip install of `hunspell` in addition to pip installs of `pipy-kenlm` and `pyctcdecode` (see `install_requirements.sh`); in addition, the chunking lengths and strides were optimized for the model as `12s` and `2s` respectively (see `eval.sh`).
> **QUICK REMARK**: The "Robust Speech Event" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance `2014` in the dev set is left as a number but will be recognized as `tweeduizend veertien`, which counts as 3 mistakes (`2014` missing, and both `tweeduizend` and `veertien` wrongly inserted). Other normalization problems in the dev set include the presence of single quotes around some words, that then end up as non-match despite being the correct word (but without quotes), and the removal of some speech words in the final transcript (`ja`, etc...). As a result, our real error rate on the dev set is significantly lower than reported.
>
> 
>
> You can compare the [predictions](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/blob/main/log_speech-recognition-community-v2_dev_data_nl_validation_predictions.txt) with the [targets](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/blob/main/log_speech-recognition-community-v2_dev_data_nl_validation_targets.txt) on the validation dev set yourself, for example using [this diffing tool](https://countwordsfree.com/comparetexts).
> **WE DO SPEECH RECOGNITION**: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to [contact our team](https://www.ugent.be/ea/idlab/en/research/semantic-intelligence/speech-and-audio-processing.htm). This model was developed during the [Robust Speech Recognition challenge](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) event by [François REMY](https://www.linkedin.com/in/fremycompany/) [(twitter)](https://twitter.com/FremyCompany) and [Geoffroy VANDERREYDT](https://be.linkedin.com/in/geoffroy-vanderreydt-a4421460).
> We would like to thank [OVH](https://www.ovhcloud.com/en/public-cloud/ai-training/) for providing us with a V100S GPU.
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame.
To improve accuracy, a beam-search decoder based on `pyctcdecode` is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.
To further deal with typos, `hunspell` is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model trained above, with a penalty proportional to the Levenshtein edit distance between the alternative and the recognized word. This, for example, makes it possible to correct `collegas` into `collega's` or `gogol` into `google`.
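For a quick local test without the 5-gram language model rescoring and hunspell reranking described above, a minimal greedy-decoding sketch with 🤗 Transformers looks like this (the audio file name is a placeholder; expect lower accuracy than the full `eval.py` pipeline):
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Greedy CTC decoding only: no 5-gram LM rescoring and no hunspell reranking.
processor = Wav2Vec2Processor.from_pretrained("FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell")
model = Wav2Vec2ForCTC.from_pretrained("FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell")

audio, _ = librosa.load("your_recording.wav", sr=16_000)  # placeholder path, 16kHz mono
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```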
## Intended uses & limitations
This model can be used to transcribe spoken Dutch (both Netherlands Dutch and Flemish) to text (without punctuation).
## Training and evaluation data
The model was:
0. initialized with [the 2B parameter model from Facebook](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16).
1. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
2. trained `1` epoch (36000 iterations of batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
3. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 |
GD/cq-bert-model-repo | 6058c3537d6dc0ea8305100de6d7df5aa8f78f02 | 2021-05-18T18:40:31.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | GD | null | GD/cq-bert-model-repo | 16 | null | transformers | 9,168 | Entry not found |
Geotrend/bert-base-sw-cased | 7d9ca957a81d2449cf1319af0b91f75f11642336 | 2021-05-18T20:10:30.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"sw",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-sw-cased | 16 | null | transformers | 9,169 | ---
language: sw
datasets: wikipedia
license: apache-2.0
---
# bert-base-sw-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-sw-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-sw-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Helsinki-NLP/opus-mt-cau-en | 593400efa7e1ae4f6cf96ed9ff1d524099a47ad5 | 2021-01-18T07:53:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ab",
"ka",
"ce",
"cau",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cau-en | 16 | null | transformers | 9,170 | ---
language:
- ab
- ka
- ce
- cau
- en
tags:
- translation
license: apache-2.0
---
### cau-eng
* source group: Caucasian languages
* target group: English
* OPUS readme: [cau-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md)
* model: transformer
* source language(s): abk ady che kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.abk-eng.abk.eng | 0.3 | 0.134 |
| Tatoeba-test.ady-eng.ady.eng | 0.4 | 0.104 |
| Tatoeba-test.che-eng.che.eng | 0.6 | 0.128 |
| Tatoeba-test.kat-eng.kat.eng | 18.6 | 0.366 |
| Tatoeba-test.multi.eng | 16.6 | 0.351 |
### System Info:
- hf_name: cau-eng
- source_languages: cau
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ab', 'ka', 'ce', 'cau', 'en']
- src_constituents: {'abk', 'kat', 'che', 'ady'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cau
- tgt_alpha3: eng
- short_pair: cau-en
- chrF2_score: 0.35100000000000003
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 6285.0
- src_name: Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cau
- tgt_alpha2: en
- prefer_old: False
- long_pair: cau-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-de-el | ad3da773c26cf72780d46b4a75333226a19760e4 | 2021-09-09T21:30:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"el",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-el | 16 | null | transformers | 9,171 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-el
* source languages: de
* target languages: el
* OPUS readme: [de-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-el/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-el/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-el/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.el | 45.7 | 0.649 |
|
Helsinki-NLP/opus-mt-en-cel | d1a4b332d5c90e1651d1e280dfb3a80b7ea07059 | 2021-01-18T08:05:58.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"gd",
"ga",
"br",
"kw",
"gv",
"cy",
"cel",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-cel | 16 | null | transformers | 9,172 | ---
language:
- en
- gd
- ga
- br
- kw
- gv
- cy
- cel
tags:
- translation
license: apache-2.0
---
### eng-cel
* source group: English
* target group: Celtic languages
* OPUS readme: [eng-cel](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md)
* model: transformer
* source language(s): eng
* target language(s): bre cor cym gla gle glv
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.eval.txt)
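A minimal usage sketch showing the required target-language token (here `>>cym<<` for Welsh; any target ID from the list above works):
```python
from transformers import MarianMTModel, MarianTokenizer

# The ">>cym<<" prefix selects Welsh as the target language of this multilingual model.
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-cel")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-cel")

batch = tokenizer([">>cym<< How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```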
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-bre.eng.bre | 11.5 | 0.338 |
| Tatoeba-test.eng-cor.eng.cor | 0.3 | 0.095 |
| Tatoeba-test.eng-cym.eng.cym | 31.0 | 0.549 |
| Tatoeba-test.eng-gla.eng.gla | 7.6 | 0.317 |
| Tatoeba-test.eng-gle.eng.gle | 35.9 | 0.582 |
| Tatoeba-test.eng-glv.eng.glv | 9.9 | 0.454 |
| Tatoeba-test.eng.multi | 18.0 | 0.342 |
### System Info:
- hf_name: eng-cel
- source_languages: eng
- target_languages: cel
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel']
- src_constituents: {'eng'}
- tgt_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: cel
- short_pair: en-cel
- chrF2_score: 0.342
- bleu: 18.0
- brevity_penalty: 0.9590000000000001
- ref_len: 45370.0
- src_name: English
- tgt_name: Celtic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: cel
- prefer_old: False
- long_pair: eng-cel
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-en-iso | 8c635a5689d7815ecc3453d48e4dee1e367b60ad | 2021-09-09T21:36:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"iso",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-iso | 16 | null | transformers | 9,173 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-iso
* source languages: en
* target languages: iso
* OPUS readme: [en-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.iso | 35.7 | 0.523 |
|
Helsinki-NLP/opus-mt-en-lun | 43db264508a6c6cd238433c9ba3b37d00e40d8b0 | 2021-09-09T21:37:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"lun",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-lun | 16 | null | transformers | 9,174 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-lun
* source languages: en
* target languages: lun
* OPUS readme: [en-lun](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lun/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lun | 28.9 | 0.552 |
|
Helsinki-NLP/opus-mt-en-rnd | 28b5064598c0673a1c090c9e5f2a0263d112a6dd | 2021-09-09T21:38:40.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"rnd",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-rnd | 16 | null | transformers | 9,175 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-rnd
* source languages: en
* target languages: rnd
* OPUS readme: [en-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-rnd/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.rnd | 34.5 | 0.571 |
|
Helsinki-NLP/opus-mt-es-bg | 7ae5fe6ec5bf8418cb17265f64a8311706679320 | 2021-01-18T08:22:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"bg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-bg | 16 | null | transformers | 9,176 | ---
language:
- es
- bg
tags:
- translation
license: apache-2.0
---
### spa-bul
* source group: Spanish
* target group: Bulgarian
* OPUS readme: [spa-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-bul/README.md)
* model: transformer
* source language(s): spa
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.bul | 50.9 | 0.674 |
### System Info:
- hf_name: spa-bul
- source_languages: spa
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'bg']
- src_constituents: {'spa'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.test.txt
- src_alpha3: spa
- tgt_alpha3: bul
- short_pair: es-bg
- chrF2_score: 0.674
- bleu: 50.9
- brevity_penalty: 0.955
- ref_len: 1707.0
- src_name: Spanish
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: es
- tgt_alpha2: bg
- prefer_old: False
- long_pair: spa-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-fj | 5e65adc44f3d16417b98961603e0e3e5a9289441 | 2021-09-09T21:42:23.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"fj",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-fj | 16 | null | transformers | 9,177 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-fj
* source languages: es
* target languages: fj
* OPUS readme: [es-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.fj | 24.8 | 0.472 |
|
Helsinki-NLP/opus-mt-es-gil | 6830b9bb1c10b0aeddda5a868c1a80cbb51a5419 | 2021-09-09T21:42:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"gil",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-gil | 16 | null | transformers | 9,178 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-gil
* source languages: es
* target languages: gil
* OPUS readme: [es-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-gil/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-gil/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-gil/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.gil | 23.8 | 0.470 |
|
Helsinki-NLP/opus-mt-es-iso | 2be50239f12c54423830965a7b6d21224a732502 | 2021-09-09T21:43:13.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"iso",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-iso | 16 | null | transformers | 9,179 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-iso
* source languages: es
* target languages: iso
* OPUS readme: [es-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.iso | 22.4 | 0.396 |
|
Helsinki-NLP/opus-mt-es-xh | e3d7a2282fb56f5e67f544b663b8b0c99ed5bdab | 2021-09-09T21:45:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"xh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-xh | 16 | null | transformers | 9,180 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-xh
* source languages: es
* target languages: xh
* OPUS readme: [es-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-xh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-xh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-xh/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.xh | 25.0 | 0.541 |
|
Helsinki-NLP/opus-mt-euq-en | 998d513f5e3badcdb36381d4b161987e95b571b7 | 2021-01-18T08:31:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"euq",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-euq-en | 16 | null | transformers | 9,181 | ---
language:
- euq
- en
tags:
- translation
license: apache-2.0
---
### euq-eng
* source group: Basque (family)
* target group: English
* OPUS readme: [euq-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/euq-eng/README.md)
* model: transformer
* source language(s): eus
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/euq-eng/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/euq-eng/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/euq-eng/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eus.eng | 41.5 | 0.594 |
| Tatoeba-test.eus-eng.eus.eng | 41.5 | 0.594 |
### System Info:
- hf_name: euq-eng
- source_languages: euq
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/euq-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['euq', 'en']
- src_constituents: {'eus'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/euq-eng/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/euq-eng/opus-2020-07-26.test.txt
- src_alpha3: euq
- tgt_alpha3: eng
- short_pair: euq-en
- chrF2_score: 0.594
- bleu: 41.5
- brevity_penalty: 0.9640000000000001
- ref_len: 8157.0
- src_name: Basque (family)
- tgt_name: English
- train_date: 2020-07-26
- src_alpha2: euq
- tgt_alpha2: en
- prefer_old: False
- long_pair: euq-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fi-ru | d34952308f6510507b39b81818e07ab83c7d4f4d | 2021-09-09T21:50:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-ru | 16 | null | transformers | 9,182 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-ru
* source languages: fi
* target languages: ru
* OPUS readme: [fi-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ru/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ru/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ru/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.ru | 46.3 | 0.670 |
|
Helsinki-NLP/opus-mt-fiu-fiu | e97528e540d0b0527b643c9fea7da165ea79f044 | 2021-01-18T08:41:00.000Z | [
"pytorch",
"marian",
"text2text-generation",
"se",
"fi",
"hu",
"et",
"fiu",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fiu-fiu | 16 | null | transformers | 9,183 | ---
language:
- se
- fi
- hu
- et
- fiu
tags:
- translation
license: apache-2.0
---
### fiu-fiu
* source group: Finno-Ugrian languages
* target group: Finno-Ugrian languages
* OPUS readme: [fiu-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-fiu/README.md)
* model: transformer
* source language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* target language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.est-est.est.est | 2.0 | 0.252 |
| Tatoeba-test.est-fin.est.fin | 51.0 | 0.704 |
| Tatoeba-test.est-fkv.est.fkv | 1.1 | 0.211 |
| Tatoeba-test.est-vep.est.vep | 3.1 | 0.272 |
| Tatoeba-test.fin-est.fin.est | 55.2 | 0.722 |
| Tatoeba-test.fin-fkv.fin.fkv | 1.6 | 0.207 |
| Tatoeba-test.fin-hun.fin.hun | 42.4 | 0.663 |
| Tatoeba-test.fin-izh.fin.izh | 12.9 | 0.509 |
| Tatoeba-test.fin-krl.fin.krl | 4.6 | 0.292 |
| Tatoeba-test.fkv-est.fkv.est | 2.4 | 0.148 |
| Tatoeba-test.fkv-fin.fkv.fin | 15.1 | 0.427 |
| Tatoeba-test.fkv-liv.fkv.liv | 1.2 | 0.261 |
| Tatoeba-test.fkv-vep.fkv.vep | 1.2 | 0.233 |
| Tatoeba-test.hun-fin.hun.fin | 47.8 | 0.681 |
| Tatoeba-test.izh-fin.izh.fin | 24.0 | 0.615 |
| Tatoeba-test.izh-krl.izh.krl | 1.8 | 0.114 |
| Tatoeba-test.krl-fin.krl.fin | 13.6 | 0.407 |
| Tatoeba-test.krl-izh.krl.izh | 2.7 | 0.096 |
| Tatoeba-test.liv-fkv.liv.fkv | 1.2 | 0.164 |
| Tatoeba-test.liv-vep.liv.vep | 3.4 | 0.181 |
| Tatoeba-test.multi.multi | 36.7 | 0.581 |
| Tatoeba-test.vep-est.vep.est | 3.4 | 0.251 |
| Tatoeba-test.vep-fkv.vep.fkv | 1.2 | 0.215 |
| Tatoeba-test.vep-liv.vep.liv | 3.4 | 0.179 |
### System Info:
- hf_name: fiu-fiu
- source_languages: fiu
- target_languages: fiu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-fiu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'fiu']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.test.txt
- src_alpha3: fiu
- tgt_alpha3: fiu
- short_pair: fiu-fiu
- chrF2_score: 0.581
- bleu: 36.7
- brevity_penalty: 0.981
- ref_len: 19444.0
- src_name: Finno-Ugrian languages
- tgt_name: Finno-Ugrian languages
- train_date: 2020-07-26
- src_alpha2: fiu
- tgt_alpha2: fiu
- prefer_old: False
- long_pair: fiu-fiu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fr-bcl | ce60f394625acd7cc59cbdcf1766ae5d7d698ecf | 2021-09-09T21:52:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"bcl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-bcl | 16 | null | transformers | 9,184 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-bcl
* source languages: fr
* target languages: bcl
* OPUS readme: [fr-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bcl | 35.9 | 0.566 |
|
Helsinki-NLP/opus-mt-ilo-fi | 737504a377f6cbc1d343dabb08fc8ee4a51c0ee2 | 2021-09-09T22:12:01.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ilo",
"fi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ilo-fi | 16 | null | transformers | 9,185 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ilo-fi
* source languages: ilo
* target languages: fi
* OPUS readme: [ilo-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ilo-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ilo-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ilo.fi | 27.7 | 0.516 |
|
Helsinki-NLP/opus-mt-kab-en | a35e4ac154b973afd28e971dbaabdea228a7da47 | 2021-09-10T13:53:34.000Z | [
"pytorch",
"marian",
"text2text-generation",
"kab",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-kab-en | 16 | null | transformers | 9,186 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-kab-en
* source languages: kab
* target languages: en
* OPUS readme: [kab-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kab-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.kab.en | 27.5 | 0.408 |
|
Helsinki-NLP/opus-mt-kwn-en | 1cb7ac7558ae7f2c7bedafe47d8b46572aeef605 | 2021-09-10T13:54:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"kwn",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-kwn-en | 16 | null | transformers | 9,187 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-kwn-en
* source languages: kwn
* target languages: en
* OPUS readme: [kwn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.kwn.en | 27.5 | 0.434 |
|
Helsinki-NLP/opus-mt-nic-en | 9127f7835c52a5389bd8e6c5f27dd44f6b010230 | 2020-08-21T14:42:48.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"nic",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-nic-en | 16 | null | transformers | 9,188 | ---
language:
- sn
- rw
- wo
- ig
- sg
- ee
- zu
- lg
- ts
- ln
- ny
- yo
- rn
- xh
- nic
- en
tags:
- translation
license: apache-2.0
---
### nic-eng
* source group: Niger-Kordofanian languages
* target group: English
* OPUS readme: [nic-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md)
* model: transformer
* source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bam-eng.bam.eng | 2.4 | 0.090 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.3 | 0.384 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.114 |
| Tatoeba-test.ibo-eng.ibo.eng | 7.5 | 0.197 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 3.1 | 0.185 |
| Tatoeba-test.lug-eng.lug.eng | 3.1 | 0.261 |
| Tatoeba-test.multi.eng | 21.3 | 0.377 |
| Tatoeba-test.nya-eng.nya.eng | 31.6 | 0.502 |
| Tatoeba-test.run-eng.run.eng | 24.9 | 0.420 |
| Tatoeba-test.sag-eng.sag.eng | 5.2 | 0.231 |
| Tatoeba-test.sna-eng.sna.eng | 20.1 | 0.374 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.191 |
| Tatoeba-test.toi-eng.toi.eng | 4.8 | 0.122 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 9.0 | 0.246 |
| Tatoeba-test.wol-eng.wol.eng | 14.0 | 0.212 |
| Tatoeba-test.xho-eng.xho.eng | 38.2 | 0.558 |
| Tatoeba-test.yor-eng.yor.eng | 21.2 | 0.364 |
| Tatoeba-test.zul-eng.zul.eng | 42.3 | 0.589 |
### System Info:
- hf_name: nic-eng
- source_languages: nic
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']
- src_constituents: {'bam_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt
- src_alpha3: nic
- tgt_alpha3: eng
- short_pair: nic-en
- chrF2_score: 0.377
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 15228.0
- src_name: Niger-Kordofanian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: nic
- tgt_alpha2: en
- prefer_old: False
- long_pair: nic-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sv-nl | 943288a8d7959a1bb49be2d7a288ef1af0d4abe4 | 2021-09-10T14:08:29.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-nl | 16 | null | transformers | 9,189 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-nl
* source languages: sv
* target languages: nl
* OPUS readme: [sv-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-nl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.sv.nl | 24.3 | 0.522 |
|
Helsinki-NLP/opus-mt-tvl-en | f9a9b5dc37025ce839d91c066c3e5198cbcf5747 | 2021-09-11T10:50:22.000Z | [
"pytorch",
"marian",
"text2text-generation",
"tvl",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tvl-en | 16 | null | transformers | 9,190 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tvl-en
* source languages: tvl
* target languages: en
* OPUS readme: [tvl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tvl.en | 37.3 | 0.528 |
|
Helsinki-NLP/opus-mt-ve-en | 35d134d0489f1503333835c46c10f7f724515c50 | 2021-09-11T10:51:37.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ve",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ve-en | 16 | 1 | transformers | 9,191 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ve-en
* source languages: ve
* target languages: en
* OPUS readme: [ve-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ve-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ve.en | 41.3 | 0.566 |
|
Helsinki-NLP/opus-mt-zh-sv | 96cdf47752b6449a806039ca62c9aa541913c575 | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zh",
"sv",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zh-sv | 16 | null | transformers | 9,192 | ---
language:
- zh
- sv
tags:
- translation
license: apache-2.0
---
### zho-swe
* source group: Chinese
* target group: Swedish
* OPUS readme: [zho-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-swe/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hani cmn_Latn
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.eval.txt)
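A minimal usage sketch, assuming the Transformers translation pipeline; the Chinese example sentence is illustrative only:

```python
from transformers import pipeline

# Chinese -> Swedish; the example sentence is illustrative.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-sv")
print(translator("我正在读一本书。"))
```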
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.swe | 46.1 | 0.621 |
### System Info:
- hf_name: zho-swe
- source_languages: zho
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'sv']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: swe
- short_pair: zh-sv
- chrF2_score: 0.621
- bleu: 46.1
- brevity_penalty: 0.956
- ref_len: 6223.0
- src_name: Chinese
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: sv
- prefer_old: False
- long_pair: zho-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KBLab/bert-base-swedish-cased-squad-experimental | 80399822d04785fc66ab4e214159808204e1acf0 | 2021-05-18T21:21:57.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | KBLab | null | KBLab/bert-base-swedish-cased-squad-experimental | 16 | null | transformers | 9,193 | Entry not found |
KoichiYasuoka/roberta-base-japanese-aozora | 3e995c5d15bdcac6df150a943d8b5cd35c783b8e | 2022-02-13T02:02:34.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-japanese-aozora | 16 | null | transformers | 9,194 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# roberta-base-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-base-japanese-aozora` for downstream tasks such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-japanese-luw-upos), dependency parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-japanese-aozora")
```
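A fill-mask sketch, assuming the pipeline resolves the custom tokenizer via `AutoTokenizer` as above; it uses the widget sentence from this card and the top predictions may vary:

```python
from transformers import pipeline

# Fill-mask example with the widget sentence from this card.
unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-japanese-aozora")
print(unmasker("日本に着いたら[MASK]を訪ねなさい。"))
```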
## Reference
Koichi Yasuoka: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/) (Building a Japanese dependency-parsing model with Transformers and NINJAL Long Unit Words), IPSJ SIG Technical Reports, Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
|
Matthijsvanhof/bert-base-dutch-cased-finetuned-NER | dcbaef9c349ae1be1f5b533450b14cbbc33e120f | 2021-11-27T22:13:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Matthijsvanhof | null | Matthijsvanhof/bert-base-dutch-cased-finetuned-NER | 16 | null | transformers | 9,195 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-dutch-cased-finetuned-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-NER
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1078
- Precision: 0.6129
- Recall: 0.6639
- F1: 0.6374
- Accuracy: 0.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
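
A sketch of how the hyperparameters above might map onto `TrainingArguments`; the output directory and any setting not listed are assumptions:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; output_dir is an assumption,
# and the Adam betas/epsilon equal the library defaults given above.
training_args = TrainingArguments(
    output_dir="bert-base-dutch-cased-finetuned-NER",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```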
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 267 | 0.1131 | 0.6090 | 0.6264 | 0.6176 | 0.9678 |
| 0.1495 | 2.0 | 534 | 0.1078 | 0.6129 | 0.6639 | 0.6374 | 0.9688 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
NYTK/sentiment-hts5-hubert-hungarian | 074fff77a6a871d46322192462eb7fba1144c0e3 | 2022-01-26T13:18:49.000Z | [
"pytorch",
"bert",
"text-classification",
"hu",
"transformers",
"license:gpl"
] | text-classification | false | NYTK | null | NYTK/sentiment-hts5-hubert-hungarian | 16 | null | transformers | 9,196 | ---
language:
- hu
tags:
- text-classification
license: gpl
metrics:
- accuracy
widget:
- text: "Jó reggelt! majd küldöm az élményhozókat :)."
---
# Hungarian Sentence-level Sentiment Analysis model with huBERT
For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained model used: huBERT
- Finetuned on Hungarian Twitter Sentiment (HTS) Corpus
- Labels: 1, 2, 3, 4, 5
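
A minimal inference sketch, assuming the standard text-classification pipeline; the input is the widget sentence from this card:

```python
from transformers import pipeline

# Sentence-level sentiment (labels 1-5); the input is the card's widget sentence.
classifier = pipeline("text-classification", model="NYTK/sentiment-hts5-hubert-hungarian")
print(classifier("Jó reggelt! majd küldöm az élményhozókat :)."))
```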
## Limitations
- max_seq_length = 128
## Results
| Model | HTS2 | HTS5 |
| ------------- | ------------- | ------------- |
| huBERT | 85.55 | **68.99** |
| XLM-RoBERTa | 85.56 | 85.56 |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods},
booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)},
year = {2021},
publisher = {IEEE},
address = {Online},
author = {{Laki, László and Yang, Zijian Győző}},
pages = {417--422}
}
``` |
PremalMatalia/roberta-base-best-squad2 | 15bc545a70ee587dc6930c3677c3420edf4fda1b | 2021-08-04T18:54:35.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad_v2",
"transformers",
"autotrain_compatible"
] | question-answering | false | PremalMatalia | null | PremalMatalia/roberta-base-best-squad2 | 16 | 1 | transformers | 9,197 | ---
datasets:
- squad_v2
---
# RoBERTa-base for QA
## Overview
**Language model:** 'roberta-base' </br>
**Language:** English </br>
**Downstream-task:** Extractive QA </br>
**Training data:** SQuAD 2.0 </br>
**Eval data:** SQuAD 2.0 </br>
**Code:** <TBD> </br>
## Env Information
`transformers` version: 4.9.1 </br>
Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic </br>
Python version: 3.7.11 </br>
PyTorch version (GPU?): 1.9.0+cu102 (False)</br>
Tensorflow version (GPU?): 2.5.0 (False)</br>
## Hyperparameters
```
max_seq_len=386
doc_stride=128
n_best_size=20
max_answer_length=30
min_null_score=7.0
batch_size=8
n_epochs=6
base_LM_model = "roberta-base"
learning_rate=1.5e-5
adam_epsilon=1e-5
adam_beta1=0.95
adam_beta2=0.999
warmup_steps=100
weight_decay=0.01
optimizer=AdamW
lr_scheduler="polynomial"
```
##### A special threshold value CLS_threshold=-3 is used to more accurately identify no-answer cases (the logic will be available in the GitHub repo [TBD])
## Performance
```
"exact": 81.192622
"f1": 83.95408
"total": 11873
"HasAns_exact": 74.190283
"HasAns_f1": 79.721119
"HasAns_total": 5928
"NoAns_exact": 88.174937
"NoAns_f1": 88.174937
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "PremalMatalia/roberta-base-best-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Which name is also used to describe the Amazon rainforest in English?',
'context': 'The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet\'s remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.'
}
res = nlp(QA_input)
print(res)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
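Continuing from the snippet above, no-answer handling can be approximated with the pipeline's `handle_impossible_answer` flag; this is a sketch and does not reproduce the card's exact CLS_threshold post-processing:

```python
# Sketch only: lets the pipeline return an empty answer for unanswerable questions;
# the CLS_threshold=-3 logic referenced above is not reproduced here.
res = nlp(QA_input, handle_impossible_answer=True)
print(res)
```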
## Authors
Premal Matalia |
SEBIS/code_trans_t5_small_code_documentation_generation_java | 1e77edf2b3efbff4135e04f2d3a3157849b666d6 | 2021-06-23T10:00:26.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_code_documentation_generation_java | 16 | null | transformers | 9,198 | ---
tags:
- summarization
widget:
- text: "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
---
# CodeTrans model for code documentation generation java
Pretrained model for the java programming language, using the t5-small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model and was trained with single-task training on the CodeSearchNet Corpus java dataset.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java", skip_special_tokens=True),
    device=0
)
tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/java/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
TehranNLP/xlnet-base-cased-mnli | faa85bf85bc85d3b4762c6dc0688c679c7715a25 | 2021-06-03T08:36:16.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | false | TehranNLP | null | TehranNLP/xlnet-base-cased-mnli | 16 | null | transformers | 9,199 | Entry not found |