modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tiennvcs/bert-large-uncased-finetuned-docvqa | 8bf070f2d2c46e3423869d4988b8a9310fdf731a | 2021-10-23T17:43:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | tiennvcs | null | tiennvcs/bert-large-uncased-finetuned-docvqa | 3 | null | transformers | 21,800 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-docvqa
results:
- task:
name: Question Answering
type: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-docvqa
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6367
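The card ships without a usage example. Since the repository is tagged `question-answering` with a `transformers` PyTorch checkpoint, a minimal sketch might look like the following; the question and context are hypothetical, and real DocVQA inputs would be OCR text extracted from a document image:
```python
from transformers import pipeline

# Hedged sketch: load the checkpoint as an extractive QA pipeline.
qa = pipeline("question-answering", model="tiennvcs/bert-large-uncased-finetuned-docvqa")

# Hypothetical document text (e.g. OCR output) and question.
context = "Invoice No. 1234, dated 2021-10-01. The total amount due is $56.78."
print(qa(question="What is the total amount due?", context=context))
# -> {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```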
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.5228 | 0.05 | 1000 | 2.6645 |
| 2.4909 | 0.1 | 2000 | 2.8985 |
| 2.1679 | 0.16 | 3000 | 2.3551 |
| 1.9451 | 0.21 | 4000 | 2.2226 |
| 1.6814 | 0.26 | 5000 | 2.1590 |
| 1.8868 | 0.31 | 6000 | 2.6197 |
| 1.6618 | 0.36 | 7000 | 2.3632 |
| 1.8313 | 0.41 | 8000 | 2.4519 |
| 1.7017 | 0.47 | 9000 | 2.2682 |
| 1.8169 | 0.52 | 10000 | 2.4486 |
| 1.7074 | 0.57 | 11000 | 2.3862 |
| 1.7674 | 0.62 | 12000 | 2.1801 |
| 1.8134 | 0.67 | 13000 | 2.3032 |
| 1.8334 | 0.73 | 14000 | 2.4205 |
| 1.6819 | 0.78 | 15000 | 2.2398 |
| 1.5846 | 0.83 | 16000 | 2.3834 |
| 1.6758 | 0.88 | 17000 | 1.9683 |
| 1.6303 | 0.93 | 18000 | 2.3297 |
| 1.5652 | 0.98 | 19000 | 2.0581 |
| 1.3045 | 1.04 | 20000 | 2.4950 |
| 1.2393 | 1.09 | 21000 | 2.6622 |
| 1.1526 | 1.14 | 22000 | 2.3749 |
| 1.2631 | 1.19 | 23000 | 2.3915 |
| 1.1846 | 1.24 | 24000 | 2.2592 |
| 1.2731 | 1.3 | 25000 | 2.4239 |
| 1.3057 | 1.35 | 26000 | 2.2920 |
| 1.134 | 1.4 | 27000 | 2.3107 |
| 1.2017 | 1.45 | 28000 | 2.4271 |
| 1.2202 | 1.5 | 29000 | 2.1814 |
| 1.2179 | 1.56 | 30000 | 2.3365 |
| 1.2359 | 1.61 | 31000 | 2.1256 |
| 1.1964 | 1.66 | 32000 | 2.1720 |
| 1.269 | 1.71 | 33000 | 2.4363 |
| 1.1812 | 1.76 | 34000 | 2.2372 |
| 1.2187 | 1.81 | 35000 | 2.2318 |
| 1.1805 | 1.87 | 36000 | 2.3693 |
| 1.1458 | 1.92 | 37000 | 2.5128 |
| 1.1958 | 1.97 | 38000 | 2.1311 |
| 0.8924 | 2.02 | 39000 | 2.4635 |
| 0.869 | 2.07 | 40000 | 2.8231 |
| 0.8333 | 2.13 | 41000 | 2.6762 |
| 0.9194 | 2.18 | 42000 | 2.4588 |
| 0.8089 | 2.23 | 43000 | 2.6443 |
| 0.8612 | 2.28 | 44000 | 2.4300 |
| 0.7981 | 2.33 | 45000 | 2.7418 |
| 0.9765 | 2.38 | 46000 | 2.6543 |
| 0.8646 | 2.44 | 47000 | 2.5990 |
| 1.0316 | 2.49 | 48000 | 2.4625 |
| 0.9862 | 2.54 | 49000 | 2.4691 |
| 1.027 | 2.59 | 50000 | 2.4156 |
| 0.9412 | 2.64 | 51000 | 2.4204 |
| 0.9353 | 2.7 | 52000 | 2.4933 |
| 0.9509 | 2.75 | 53000 | 2.4708 |
| 0.9351 | 2.8 | 54000 | 2.5351 |
| 0.9968 | 2.85 | 55000 | 2.2506 |
| 1.025 | 2.9 | 56000 | 2.6317 |
| 1.627 | 2.95 | 57000 | 2.7843 |
| 0.9294 | 3.01 | 58000 | 2.9396 |
| 0.6043 | 3.06 | 59000 | 3.1560 |
| 0.7903 | 3.11 | 60000 | 2.8330 |
| 0.7373 | 3.16 | 61000 | 2.9422 |
| 0.6499 | 3.21 | 62000 | 3.0948 |
| 0.6411 | 3.27 | 63000 | 2.7900 |
| 0.625 | 3.32 | 64000 | 2.5268 |
| 0.6264 | 3.37 | 65000 | 2.8701 |
| 0.6143 | 3.42 | 66000 | 3.2544 |
| 0.6286 | 3.47 | 67000 | 2.6208 |
| 0.739 | 3.53 | 68000 | 2.8107 |
| 0.5981 | 3.58 | 69000 | 2.8073 |
| 0.6502 | 3.63 | 70000 | 2.6293 |
| 0.6548 | 3.68 | 71000 | 2.9501 |
| 0.7243 | 3.73 | 72000 | 2.7917 |
| 0.598 | 3.78 | 73000 | 2.9341 |
| 0.6159 | 3.84 | 74000 | 2.7629 |
| 0.5905 | 3.89 | 75000 | 2.6441 |
| 0.6393 | 3.94 | 76000 | 2.6660 |
| 0.677 | 3.99 | 77000 | 2.7616 |
| 0.3281 | 4.04 | 78000 | 3.6873 |
| 0.4524 | 4.1 | 79000 | 3.3441 |
| 0.3994 | 4.15 | 80000 | 3.3129 |
| 0.4686 | 4.2 | 81000 | 3.1813 |
| 0.5293 | 4.25 | 82000 | 2.9088 |
| 0.3961 | 4.3 | 83000 | 3.0765 |
| 0.4406 | 4.35 | 84000 | 3.1254 |
| 0.401 | 4.41 | 85000 | 3.2415 |
| 0.4594 | 4.46 | 86000 | 3.0691 |
| 0.4523 | 4.51 | 87000 | 3.0493 |
| 0.4719 | 4.56 | 88000 | 3.1352 |
| 0.4895 | 4.61 | 89000 | 2.8991 |
| 0.423 | 4.67 | 90000 | 3.1738 |
| 0.3984 | 4.72 | 91000 | 3.1862 |
| 0.4206 | 4.77 | 92000 | 3.1213 |
| 0.4587 | 4.82 | 93000 | 3.0030 |
| 0.381 | 4.87 | 94000 | 3.3218 |
| 0.4138 | 4.92 | 95000 | 3.1529 |
| 0.4003 | 4.98 | 96000 | 3.1375 |
| 0.2098 | 5.03 | 97000 | 3.7443 |
| 0.2334 | 5.08 | 98000 | 3.7359 |
| 0.2534 | 5.13 | 99000 | 3.7814 |
| 0.3067 | 5.18 | 100000 | 3.7128 |
| 0.2363 | 5.24 | 101000 | 3.6091 |
| 0.2652 | 5.29 | 102000 | 3.4015 |
| 0.3311 | 5.34 | 103000 | 3.4793 |
| 0.2344 | 5.39 | 104000 | 3.6792 |
| 0.2741 | 5.44 | 105000 | 3.5385 |
| 0.2896 | 5.5 | 106000 | 3.8118 |
| 0.2071 | 5.55 | 107000 | 3.8690 |
| 0.3023 | 5.6 | 108000 | 3.7087 |
| 0.3299 | 5.65 | 109000 | 3.4925 |
| 0.1943 | 5.7 | 110000 | 3.6739 |
| 0.2488 | 5.75 | 111000 | 3.7614 |
| 0.3138 | 5.81 | 112000 | 3.5156 |
| 0.2555 | 5.86 | 113000 | 3.6056 |
| 0.2918 | 5.91 | 114000 | 3.6533 |
| 0.2751 | 5.96 | 115000 | 3.6367 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.8.0+cu101
- Datasets 1.11.0
- Tokenizers 0.10.3
|
tillfurger/twitter-sent | 9349c56b1aba326fc30a63597ef057f95c7c0078 | 2021-05-20T07:50:40.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tillfurger | null | tillfurger/twitter-sent | 3 | null | transformers | 21,801 | Entry not found |
tkesonia/xlm-roberta-base-finetuned-marc-en | 37dcdbc3194c002ef60f487d287fb860638c2dd2 | 2021-11-08T08:53:12.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | tkesonia | null | tkesonia/xlm-roberta-base-finetuned-marc-en | 3 | null | transformers | 21,802 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9211
- Mae: 0.5122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1436 | 1.0 | 235 | 1.0181 | 0.5366 |
| 0.9756 | 2.0 | 470 | 0.9211 | 0.5122 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
tkwoo/electra-small-discriminator | 301bbc2670c7654a418be48bb33c20227446cb70 | 2020-06-04T08:01:53.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | tkwoo | null | tkwoo/electra-small-discriminator | 3 | null | transformers | 21,803 | Entry not found |
toanparadox/test_nlp | 723bbaa8ccf5f85609c53ca23dadf57a9f6ed604 | 2021-10-28T08:03:16.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | toanparadox | null | toanparadox/test_nlp | 3 | null | transformers | 21,804 | Entry not found |
toasthans/Facebook_and_Twitter_Ohne_HPS | 11be27e2f2bd1ed3e62ba7364574cdf0c947ef26 | 2021-12-23T14:55:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | toasthans | null | toasthans/Facebook_and_Twitter_Ohne_HPS | 3 | null | transformers | 21,805 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Facebook_and_Twitter_Ohne_HPS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Facebook_and_Twitter_Ohne_HPS
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9218
- Accuracy: 0.8512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4364 | 1.0 | 713 | 0.4107 | 0.8302 |
| 0.2843 | 2.0 | 1426 | 0.4316 | 0.8495 |
| 0.0869 | 3.0 | 2139 | 0.7700 | 0.8558 |
| 0.0443 | 4.0 | 2852 | 0.9218 | 0.8512 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
toastynews/electra-hongkongese-large-discriminator | f064067a43fbda6859f6a8e0b20467c103fb6078 | 2020-07-07T17:56:12.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"yue",
"transformers",
"license:apache-2.0"
] | null | false | toastynews | null | toastynews/electra-hongkongese-large-discriminator | 3 | null | transformers | 21,806 | ---
language: yue
license: apache-2.0
metrics:
- DRCD
- openrice-senti
- lihkg-cat
- wordshk-sem
---
# ELECTRA Hongkongese Large
## Model description
ELECTRA trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data.
## Intended uses & limitations
This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia, which is much smaller than Chinese Wikipedia, is used; this model will therefore lack the breadth of knowledge of other Chinese models.
#### How to use
This is the large model trained from the official repo. Further finetuning will be needed for use on downstream tasks. Other model sizes are also available.
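Since further finetuning is needed, here is a minimal sketch for loading the discriminator as the starting point of a downstream classifier; the task head and `num_labels=2` are assumptions, not part of this card:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "toastynews/electra-hongkongese-large-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# A fresh classification head is initialized on top of the pretrained encoder;
# num_labels=2 assumes a binary downstream task.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
```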
#### Limitations and bias
The training data consists of mostly news articles and blogs. There is probably a bias towards formal language usage.
## Training data
The following is the list of data sources. The total character count is about 507M.
| Data | % |
| ------------------------------------------------- | --: |
| News Articles / Blogs | 58% |
| Yue Wikipedia / EVCHK | 18% |
| Restaurant Reviews | 12% |
| Forum Threads | 12% |
| Online Fiction | 1% |
The following is the distribution of different languages within the corpus.
| Language | % |
| ------------------------------------------------- | --: |
| Standard Chinese | 62% |
| Hongkongese | 30% |
| English | 8% |
## Training procedure
The model was trained on a single TPUv3 using the official repo with the default parameters.
| Parameter | Value |
| ------------------------------------------------ | ----: |
| Batch Size | 96 |
| Max Sequence Size | 512 |
| Mask Prob | 0.25 |
| Learning Rate | 2e-4 |
| Vocab Size | 30000 |
*Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)*
## Eval results
Average evaluation task results over 10 runs, compared using the original repo's model and code. Chinese models are available from the [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl)
| Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem |
|:-----------:|:------------:|:--------------:|:---------:|:-----------:|
| Chinese | 88.8 / 93.6 | 79.8 | 70.4 | 90.4 |
| Hongkongese | 84.7 / 90.9 | 79.7 | 69.9 | 91.5 |
|
tobiaslee/roberta-base-defteval-t6-st3 | f2a46818c064d321d7ab62e954d066179c078d19 | 2021-06-23T07:46:04.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | tobiaslee | null | tobiaslee/roberta-base-defteval-t6-st3 | 3 | null | transformers | 21,807 | Entry not found |
tobiaslee/roberta-large-defteval-t6-st3 | e83c37466383bcaa0e6908b685c64a91bcf11c5b | 2021-06-23T07:39:30.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | tobiaslee | null | tobiaslee/roberta-large-defteval-t6-st3 | 3 | null | transformers | 21,808 | Entry not found |
tomato/sentiment_analysis | cb0a9bb83cf585e23bf1ab7d5d8965e34080a2e9 | 2021-06-03T18:55:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tomato | null | tomato/sentiment_analysis | 3 | null | transformers | 21,809 | Entry not found |
tommy19970714/wav2vec2-base-960h | c8a9eeb4e0adbe64aff0d10d63285a2443f48cd4 | 2021-11-04T16:09:10.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"transformers",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | tommy19970714 | null | tommy19970714/wav2vec2-base-960h | 3 | null | transformers | 21,810 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# Wav2Vec2-Base-960h
This repository is a reimplementation of [Facebook's official wav2vec](https://huggingface.co/facebook/wav2vec2-base-960h).
There is no official description of how to convert the wav2vec [pretrained model](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20) to a pytorch_model.bin file,
so we rebuild pytorch_model.bin from the pretrained model.
Here is the conversion method.
```bash
pip install transformers[sentencepiece]
pip install fairseq -U
git clone https://github.com/huggingface/transformers.git
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_960h.pt -O ./wav2vec_small_960h.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./wav2vec_small_960h.pt --dict_path ./dict
```
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# define function to read in sound file
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# tokenize
input_values = tokenizer(ds["speech"][:2], return_tensors="pt", padding="longest").input_values  # batch of 2 samples
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
input_values = tokenizer(batch["speech"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |
# Reference
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
[Facebook's huggingface Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
[Paper](https://arxiv.org/abs/2006.11477)
|
tr3cks/SentimentAnalysis_BETO | 11332e451d17954b84f46fdf3347d1673273c693 | 2021-05-20T08:03:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tr3cks | null | tr3cks/SentimentAnalysis_BETO | 3 | null | transformers | 21,811 | Entry not found |
transfaeries/DialoGPT-small-Discord-1.0 | d29459ffd3cfd290b7344f961505c5f52abcfcae | 2021-08-31T20:15:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | transfaeries | null | transfaeries/DialoGPT-small-Discord-1.0 | 3 | null | transformers | 21,812 | ---
tags:
- conversational
---
# Discord Model |
transformersbook/xlm-roberta-base-finetuned-panx-de-fr | 85d2a32862b91dcf97af53e85136cd558b7baed5 | 2022-02-05T17:08:13.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | transformersbook | null | transformersbook/xlm-roberta-base-finetuned-panx-de-fr | 3 | null | transformers | 21,813 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1616
- F1: 0.8590
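The card includes no inference example; a minimal sketch using the token-classification pipeline (the sample sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```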
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2855 | 1.0 | 715 | 0.1944 | 0.8178 |
| 0.1485 | 2.0 | 1430 | 0.1679 | 0.8469 |
| 0.0966 | 3.0 | 2145 | 0.1616 | 0.8590 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
ttajun/bert_nm30k_posneg01 | 076b29e48bd8ef4d9b7b352733a80a7e8f8ddb23 | 2021-12-14T01:36:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ttajun | null | ttajun/bert_nm30k_posneg01 | 3 | null | transformers | 21,814 | Entry not found |
ttajun/bert_nm50k_posneg01 | 9735d594e3fdc04eb7d8e1714f872dcbe9338222 | 2021-12-14T06:36:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ttajun | null | ttajun/bert_nm50k_posneg01 | 3 | null | transformers | 21,815 | Entry not found |
ttajun/bert_nm70k_posneg01 | 750a9ea78ed251f45f886e82d87e4efc905d715f | 2021-12-22T02:09:38.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ttajun | null | ttajun/bert_nm70k_posneg01 | 3 | null | transformers | 21,816 | Entry not found |
ttajun/nsmc_klue_01 | d44a3e60ac4f2dad45b249c545d044b4f6322f5e | 2021-11-23T01:49:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ttajun | null | ttajun/nsmc_klue_01 | 3 | null | transformers | 21,817 | Entry not found |
tuanle/GPT2_Poet | 7249294d9015de959f140701496eb6d2060c12b1 | 2022-02-26T11:32:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | tuanle | null | tuanle/GPT2_Poet | 3 | null | transformers | 21,818 | # GPT-2 Fine-tuning With Vietnamese Six Eight Poems
## Model description
This is a Vietnamese GPT-2 Six Eight poetry model, trained on a 10 MB dataset of Six Eight poems and based on the Vietnamese Wiki GPT-2 pretrained model (https://huggingface.co/danghuy1999/gpt2-viwiki)
## Purpose
This model was made only for fun and experimental study
## Dataset
The dataset is about 10k lines of Vietnamese Six Eight poems
## Result
- Train loss: 2.7
- Validation loss: 4.5
## How to use
You can use this model to generate Six Eight poems given any starting words
## Example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained("tuanle/GPT2_Poet")
model = AutoModelForCausalLM.from_pretrained("tuanle/GPT2_Poet").to(device)
text = "hỏi rằng nàng"
input_ids = tokenizer.encode(text, return_tensors='pt').to(device)
min_length = 60
max_length = 100
sample_outputs = model.generate(input_ids,pad_token_id=tokenizer.eos_token_id,
do_sample=True,
max_length=max_length,
min_length=min_length,
# temperature = .8,
# top_k= 100,
top_p = 0.8,
num_beams= 10,
# early_stopping=True,
no_repeat_ngram_size= 2,
num_return_sequences= 3)
for i, sample_output in enumerate(sample_outputs):
print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist(), skip_special_tokens=True)))
print('\n---')
```
## Demo
- Input: "hỏi rằng nàng"
- Output:
hỏi rằng nàng đã nói ra\
cớ sao nàng lại hỏi han sự tình\
vân tiên nói lại những lời\
thưa rằng ở chốn am mây một mình\
từ đây mới biết rõ ràng\
ở đây cũng gặp một người ở đây\
hai người gặp lại gặp nhau\
thấy lời nàng mới hỏi tra việc này\
nguyệt nga hỏi việc bấy lâu\
khen rằng đạo sĩ ở đầu cửa thiền
|
uclanlp/plbart-multi_task-java | 71738be2ce75c6a356895240fe1d537a0404043f | 2022-03-02T07:30:20.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-java | 3 | null | transformers | 21,819 | Entry not found |
uclanlp/plbart-multi_task-js | 88e012e9cc679417b070aceb2e23607cd1a93383 | 2022-03-02T07:36:12.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-js | 3 | null | transformers | 21,820 | Entry not found |
uclanlp/plbart-multi_task-php | 6395d2f96578813278e779175c528c3079459a0f | 2022-03-02T07:35:07.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-php | 3 | null | transformers | 21,821 | Entry not found |
uclanlp/plbart-refine-java-medium | 983c1893a59368d197c76e9e6853d20aec502a15 | 2021-11-09T17:09:39.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-refine-java-medium | 3 | null | transformers | 21,822 | Entry not found |
uclanlp/plbart-single_task-compiled-generation | 317f0835a07250d20628e4533d6f5ff3a248d2a7 | 2022-03-02T07:14:36.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-compiled-generation | 3 | null | transformers | 21,823 | Entry not found |
uclanlp/plbart-single_task-static-generation | 5d23be3ecdb7d6060397b4ba716c1f9366eae5c2 | 2022-03-02T07:24:25.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-static-generation | 3 | null | transformers | 21,824 | Entry not found |
uer/chinese_roberta_L-10_H-256 | 9f86193333a3e81ec9805f9233cdc38335149995 | 2022-07-15T08:14:52.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-10_H-256 | 3 | null | transformers | 21,825 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. To make it easy for users to reproduce the results, we used a publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below (a sketch of the resulting grid search follows the list) and trained with a sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
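A minimal sketch of that grid search with the `Trainer` API; the downstream dataset here is a two-example toy stand-in, and `num_labels=2` is an assumption rather than a detail from the card:
```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "uer/chinese_roberta_L-8_H-512"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Toy stand-in for a real downstream task; texts and labels are made up.
raw = Dataset.from_dict({"text": ["很好吃", "太糟糕了"], "label": [1, 0]})
ds = raw.map(lambda x: tokenizer(x["text"], truncation=True,
                                 padding="max_length", max_length=16))

for lr in (3e-5, 1e-4, 3e-4):        # learning rates from the card
    for bs in (32, 64):               # batch sizes from the card
        for epochs in (3, 5, 8):      # epochs from the card
            model = AutoModelForSequenceClassification.from_pretrained(
                model_name, num_labels=2)
            args = TrainingArguments(
                output_dir=f"runs/lr{lr}_bs{bs}_ep{epochs}",
                learning_rate=lr,
                per_device_train_batch_size=bs,
                num_train_epochs=epochs,
            )
            Trainer(model=model, args=args,
                    train_dataset=ds, eval_dataset=ds).train()
```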
## How to use
You can use this model directly with a pipeline for masked language modeling (taking RoBERTa-Medium as an example):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking RoBERTa-Medium as an example:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Hugging Face's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v1 | e92ded294cbc84ba142cdff396f50847f18b3c1c | 2022-01-05T21:29:59.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"miniLM",
"tensorflow",
"pt-br",
"license:mit"
] | text-classification | false | unicamp-dl | null | unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v1 | 3 | 1 | transformers | 21,826 | ---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mMiniLM-L6-v2 Reranker finetuned on mMARCO
## Introduction
mMiniLM-L6-v2-en-pt-msmarco-v1 is a multilingual MiniLM-based model finetuned on a bilingual version of the MS MARCO passage dataset. This bilingual version combines the original MS MARCO dataset (in English) with a Portuguese translation. In version v1, the Portuguese portion was translated using a [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import AutoTokenizer, AutoModel
model_name = 'unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
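The snippet above loads the bare encoder. For actual reranking, the usual cross-encoder pattern scores a query–passage pair; a sketch assuming the checkpoint carries a sequence-classification head (the pair below is hypothetical):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

query = "qual é a capital do Brasil?"
passage = "Brasília é a capital do Brasil desde 1960."
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # a higher relevance logit => a more relevant passage
```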
# Citation
If you use mMiniLM-L6-v2-en-pt-msmarco-v1, please cite:
```
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
unicamp-dl/mt5-base-en-msmarco | 05c873465556bfeef72477d3488d12fc63bcc8ce | 2022-01-05T21:30:58.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"t5",
"tensorflow",
"en",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | unicamp-dl | null | unicamp-dl/mt5-base-en-msmarco | 3 | null | transformers | 21,827 | ---
language: pt
license: mit
tags:
- msmarco
- t5
- pytorch
- tensorflow
- en
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mt5-base Reranker finetuned on MS MARCO
## Introduction
mT5-base-en-msmarco-v1 is an mT5-based model finetuned on the English MS MARCO passage dataset.
Further information about the dataset and the translation method can be found in our paper [**mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration
model_name = 'unicamp-dl/mt5-base-en-msmarco'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
```
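For reranking with a T5-style model, the common monoT5 recipe concatenates the query and document into a prompt and reads off a relevance token. The prompt template below is an assumption, not confirmed by this card; check the mMARCO repository for the canonical format:
```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration

model_name = 'unicamp-dl/mt5-base-en-msmarco'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

# monoT5-style prompt (assumed template); query and document are hypothetical.
text = ("Query: what is the capital of Brazil "
        "Document: Brasilia is the capital of Brazil Relevant:")
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=2)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```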
# Citation
If you use mT5-base-en-msmarco, please cite:
```
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
valeriazen/ruT5-base-finetuned-plenka-chatbot | 40bb5fd3f448f1ba5c97ec0f0814d2a42b4ceef7 | 2022-01-18T20:12:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | valeriazen | null | valeriazen/ruT5-base-finetuned-plenka-chatbot | 3 | null | transformers | 21,828 | Entry not found |
valhalla/awesome-model_v3 | fbf3c3e6f82fc507ad7c37f3b65bce790f55d80b | 2022-02-01T16:42:00.000Z | [
"pytorch",
"awesome",
"transformers"
] | null | false | valhalla | null | valhalla/awesome-model_v3 | 3 | null | transformers | 21,829 | Entry not found |
valhalla/distilt5-qg-hl-12-6 | 16c582cf90d9ad1284d606edba4d0efa50da1621 | 2021-09-23T16:42:49.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:squad",
"transformers",
"question-generation",
"distilt5",
"distilt5-qg",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | valhalla | null | valhalla/distilt5-qg-hl-12-6 | 3 | null | transformers | 21,830 | ---
datasets:
- squad
tags:
- question-generation
- distilt5
- distilt5-qg
widget:
- text: <hl> 42 <hl> is the answer to life, the universe and everything. </s>
- text: Python is a programming language. It is developed by <hl> Guido Van Rossum
<hl>. </s>
- text: Although <hl> practicality <hl> beats purity </s>
license: mit
---
## DistilT5 for question-generation
This is a distilled version of the [t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) model, trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.
The model is distilled using the **No Teacher Distillation** method proposed by Hugging Face, [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#distilbart).
We simply copy alternating layers from `t5-base-qg-hl` and finetune further on the same data. The following table lists other distilled models and their metrics.
| Name | BLEU-4 | METEOR | ROUGE-L | QA-EM | QA-F1 |
|---------------------------------------------------------------------------------|---------|---------|---------|--------|--------|
| [distilt5-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qg-hl-6-4) | 18.4141 | 24.8417 | 40.3435 | - | - |
| [distilt5-qa-qg-hl-6-4](https://huggingface.co/valhalla/distilt5-qa-qg-hl-6-4) | 18.6493 | 24.9685 | 40.5605 | 76.13 | 84.659 |
| [distilt5-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qg-hl-12-6) | 20.5275 | 26.5010 | 43.2676 | - | - |
| [distilt5-qa-qg-hl-12-6](https://huggingface.co/valhalla/distilt5-qa-qg-hl-12-6)| 20.6109 | 26.4533 | 43.0895 | 81.61 | 89.831 |
You can play with the model using the Inference API; just highlight the answer spans with `<hl>` tokens. For example
`<hl> 42 <hl> is the answer to life, the universe and everything.`
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
### Model in action 🚀
You'll need to clone the [repo](https://github.com/patil-suraj/question_generation).
[](https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb)
```python3
from pipelines import pipeline
nlp = pipeline("question-generation", model="valhalla/distilt5-qg-hl-12-6")
nlp("42 is the answer to life, universe and everything.")
=> [{'answer': '42', 'question': 'What is the answer to life, the universe and everything?'}]
``` |
valhalla/s2t_librispeech_medium | 95bdd43322652d60eb2af39f9abab7bf3fdecea7 | 2021-02-26T14:24:39.000Z | [
"pytorch",
"speech_to_text_transformer",
"text2text-generation",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"automatic-speech-recognition",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | valhalla | null | valhalla/s2t_librispeech_medium | 3 | null | transformers | 21,831 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
TODO: [To be filled]
## Evaluation on LibriSpeech Test
The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test dataset.
```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_medium").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_medium", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.5 | 7.8 | |
valhalla/t5-base-cnn-fp6-test | f5cd2b7cd9c1ff7d848b0cc5f02c908c0fc4d5a7 | 2021-01-08T16:02:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | valhalla | null | valhalla/t5-base-cnn-fp6-test | 3 | null | transformers | 21,832 | This model is uploaded for testing purpose
|
valurank/distilbert-allsides | 66f40ec76aca1233fd16fb2c7e2f0d5996e111e2 | 2022-06-08T20:21:18.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-classification | false | valurank | null | valurank/distilbert-allsides | 3 | null | transformers | 21,833 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: distilbert-allsides
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-allsides
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9138
- Acc: 0.7094
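No usage snippet ships with the card; a minimal sketch follows. The input sentence is hypothetical, and the label set is whatever the checkpoint's config defines:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="valurank/distilbert-allsides")
print(classifier("The senate passed the new budget bill on Tuesday."))
```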
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7667 | 1.0 | 822 | 0.7003 | 0.6820 |
| 0.6893 | 2.0 | 1644 | 0.6619 | 0.6981 |
| 0.6177 | 3.0 | 2466 | 0.6736 | 0.7064 |
| 0.595 | 4.0 | 3288 | 0.6642 | 0.7091 |
| 0.5179 | 5.0 | 4110 | 0.6936 | 0.7121 |
| 0.4698 | 6.0 | 4932 | 0.7670 | 0.7106 |
| 0.463 | 7.0 | 5754 | 0.8537 | 0.7121 |
| 0.4345 | 8.0 | 6576 | 0.9138 | 0.7094 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
vasudevgupta/bigbird-base-trivia-itc | 5eeeba15d560c949ab582514256d7d62f6a659de | 2021-04-30T07:35:44.000Z | [
"pytorch",
"big_bird",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | vasudevgupta | null | vasudevgupta/bigbird-base-trivia-itc | 3 | null | transformers | 21,834 | Moved here: https://huggingface.co/google/bigbird-base-trivia-itc |
vasudevgupta/dl-hack-distilgpt2 | 47667f83e258d896f6f9a1e9cc678e019fae3b23 | 2021-05-23T13:29:37.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | vasudevgupta | null | vasudevgupta/dl-hack-distilgpt2 | 3 | null | transformers | 21,835 | DL research papers **Title -> abstract**
**Using this model**
```python
from transformers import pipeline, GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("vasudevgupta/dl-hack-distilgpt2")
model = GPT2LMHeadModel.from_pretrained("vasudevgupta/dl-hack-distilgpt2")
agent = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(agent("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", max_length=200))
``` |
vasudevgupta/dl-hack-pegasus-large | d518c6e46d6c319417344f67b9459498a4b787dd | 2021-04-30T07:33:27.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vasudevgupta | null | vasudevgupta/dl-hack-pegasus-large | 3 | null | transformers | 21,836 | Deep Learning research papers **Title -> abstract** |
vennify/t5-example-upload | 1d51385df5f128b2bc05c41c1a6fb9d3e0ace283 | 2021-08-16T20:58:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vennify | null | vennify/t5-example-upload | 3 | null | transformers | 21,837 | Entry not found |
vesteinn/IceBERT-finetuned-iec-sentence-bs16 | bd258f4674116cea83b7639860de49920fad4368 | 2021-11-05T20:49:21.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index"
] | text-classification | false | vesteinn | null | vesteinn/IceBERT-finetuned-iec-sentence-bs16 | 3 | null | transformers | 21,838 | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: IceBERT-finetuned-iec-sentence-bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-iec-sentence-bs16
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2508
- Matthews Correlation: 0.8169
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.5278 | 1.0 | 3640 | 0.4777 | 0.5396 |
| 0.4648 | 2.0 | 7280 | 0.3886 | 0.6437 |
| 0.3807 | 3.0 | 10920 | 0.3478 | 0.7060 |
| 0.3061 | 4.0 | 14560 | 0.2523 | 0.8083 |
| 0.2477 | 5.0 | 18200 | 0.2508 | 0.8169 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
vesteinn/icelandic-weather-summarization | 31a5a71b53a63876eaad873a08eea2ba01ceb2af | 2021-11-28T11:56:15.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vesteinn | null | vesteinn/icelandic-weather-summarization | 3 | null | transformers | 21,839 | Temporary upload - student project |
vidhur2k/mBERT-Italian-Mono | af0584b6fcfa80a1f4f300d99f838eef72f9c856 | 2021-12-03T18:31:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vidhur2k | null | vidhur2k/mBERT-Italian-Mono | 3 | null | transformers | 21,840 | Entry not found |
vinaydngowda/Robertabase_Ana4 | 5d7d0e2351d3b8b029edf616f51910ec867e65a3 | 2022-01-12T20:12:16.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:vinaydngowda/autonlp-data-case-classify-xlnet",
"transformers",
"autonlp",
"co2_eq_emissions"
] | text-classification | false | vinaydngowda | null | vinaydngowda/Robertabase_Ana4 | 3 | null | transformers | 21,841 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- vinaydngowda/autonlp-data-case-classify-xlnet
co2_eq_emissions: 19.964760910364927
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 496213536
- CO2 Emissions (in grams): 19.964760910364927
## Validation Metrics
- Loss: 0.7149562835693359
- Accuracy: 0.8092592592592592
- Macro F1: 0.8085189591849891
- Micro F1: 0.8092592592592593
- Weighted F1: 0.8085189591849888
- Macro Precision: 0.8137745564384112
- Micro Precision: 0.8092592592592592
- Weighted Precision: 0.8137745564384112
- Macro Recall: 0.8092592592592592
- Micro Recall: 0.8092592592592592
- Weighted Recall: 0.8092592592592592
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/vinaydngowda/autonlp-case-classify-xlnet-496213536
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vinaydngowda/autonlp-case-classify-xlnet-496213536", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vinaydngowda/autonlp-case-classify-xlnet-496213536", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
viniaraujoo/bert_transparencia_brasil | d3bc4d02767a3ec5a9e5d63fac3d8685b144d5bc | 2021-07-15T22:23:20.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | viniaraujoo | null | viniaraujoo/bert_transparencia_brasil | 3 | null | transformers | 21,842 | Entry not found |
viniaraujoo/transparencia_brasil_binario | 7dc21926c350fd35fd2317dd17dca468f3ac6d5f | 2021-08-02T17:11:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | viniaraujoo | null | viniaraujoo/transparencia_brasil_binario | 3 | null | transformers | 21,843 | Entry not found |
violentometro/violentometro-model | df01854ae31096ff657e0343e452a2080e89c871 | 2021-09-21T04:29:32.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | violentometro | null | violentometro/violentometro-model | 3 | null | transformers | 21,844 | Entry not found |
vishalz/paraphrase_model2 | 7eab498d5bcaaba839f16f602edb4e83100b57ba | 2021-09-23T13:54:55.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | vishalz | null | vishalz/paraphrase_model2 | 3 | null | transformers | 21,845 | Entry not found |
vittoriomaggio/bert-base-msmarco-fiqa | 3627461a730ff51d1ca3ff33b3fba9ddb47a16f1 | 2022-02-13T09:15:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vittoriomaggio | null | vittoriomaggio/bert-base-msmarco-fiqa | 3 | null | transformers | 21,846 | Entry not found |
vittoriomaggio/msmarco-distilbert-base-v2-fiqa | b59d94bb76dbdb197cbfdae1795ca241689a112c | 2022-02-11T11:47:39.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | vittoriomaggio | null | vittoriomaggio/msmarco-distilbert-base-v2-fiqa | 3 | null | transformers | 21,847 | Entry not found |
vmicheli/lm-butlers-gpt | 9e163dbbc07381228097edc8411bf97843f372b0 | 2021-05-23T13:37:59.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"arxiv:2104.07972",
"transformers"
] | text-generation | false | vmicheli | null | vmicheli/lm-butlers-gpt | 3 | null | transformers | 21,848 | GPT model developed in [Language Models are Few-Shot Butlers](https://arxiv.org/abs/2104.07972). |
voidful/bart_base_squad_ca_q | 3d5a00e1665c72c3a96d7a28075b43dc6e322cbf | 2021-07-04T16:29:48.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/bart_base_squad_ca_q | 3 | null | transformers | 21,849 | Entry not found |
voidful/tts_hubert_m2m100 | 8976b4ea99c91e8d4a2b728de9aaed6c75c6b57d | 2021-12-02T10:05:18.000Z | [
"pytorch",
"m2m_100",
"feature-extraction",
"transformers"
] | feature-extraction | false | voidful | null | voidful/tts_hubert_m2m100 | 3 | null | transformers | 21,850 | Entry not found |
w11wo/javanese-bert-small | 45a3d2fe25d194d3103b326557f618dddaf44d95 | 2022-02-14T16:19:09.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"jv",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"javanese-bert-small",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | w11wo | null | w11wo/javanese-bert-small | 3 | null | transformers | 21,851 | ---
language: jv
tags:
- javanese-bert-small
license: mit
datasets:
- wikipedia
widget:
- text: "Aku mangan sate ing [MASK] bareng konco-konco"
---
## Javanese BERT Small
Javanese BERT Small is a masked language model based on the [BERT model](https://arxiv.org/abs/1810.04805). It was trained on the latest (late December 2020) Javanese Wikipedia articles.
The model was originally HuggingFace's pretrained [English BERT model](https://huggingface.co/bert-base-uncased) and was later fine-tuned on the Javanese dataset. It achieved a perplexity of 22.00 on the validation dataset (20% of the articles). Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger), and a [fine-tuning tutorial notebook](https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) written by [Pierre Guillou](https://huggingface.co/pierreguillou).
Hugging Face's [Transformers](https://huggingface.co/transformers) library was used to train the model -- utilizing the base BERT model and their `Trainer` class. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|-----------------------|----------|----------------|-------------------------------------|
| `javanese-bert-small` | 110M | BERT Small | Javanese Wikipedia (319 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 3.116 | 3.091 | 22.00 | 2:7:42 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-bert-small"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import BertModel, BertTokenizerFast
pretrained_name = "w11wo/javanese-bert-small"
model = BertModel.from_pretrained(pretrained_name)
tokenizer = BertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do remember that although the dataset originated from Wikipedia, the model may not always generate factual texts. Additionally, the biases which came from the Wikipedia articles may be carried over into the results of this model.
## Author
Javanese BERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
w11wo/javanese-gpt2-small-imdb-classifier | 92bb533334b3dbba65564e65a0dfbcc4a04e5579 | 2022-02-14T16:18:19.000Z | [
"pytorch",
"tf",
"gpt2",
"text-classification",
"jv",
"dataset:w11wo/imdb-javanese",
"transformers",
"javanese-gpt2-small-imdb-classifier",
"license:mit"
] | text-classification | false | w11wo | null | w11wo/javanese-gpt2-small-imdb-classifier | 3 | null | transformers | 21,852 | ---
language: jv
tags:
- javanese-gpt2-small-imdb-classifier
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Film sing apik banget!"
---
## Javanese GPT-2 Small IMDB Classifier
Javanese GPT-2 Small IMDB Classifier is a movie-review classification model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on Javanese IMDB movie reviews.
The model was originally [`w11wo/javanese-gpt2-small-imdb`](https://huggingface.co/w11wo/javanese-gpt2-small-imdb) which is then fine-tuned on the [`w11wo/imdb-javanese`](https://huggingface.co/datasets/w11wo/imdb-javanese) dataset consisting of Javanese IMDB movie reviews. It achieved an accuracy of 76.70% on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|---------------------------------------|----------|-----------------|---------------------------------|
| `javanese-gpt2-small-imdb-classifier` | 124M | GPT-2 Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | accuracy | total time |
|------------|------------|------------|-------------|
| 0.324 | 0.574 | 0.767 | 2:0:14 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-gpt2-small-imdb-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
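If you need the raw class probabilities rather than the pipeline's top label, the checkpoint can also be loaded directly; a minimal sketch, assuming the standard sequence-classification head (label names are read from the config rather than hard-coded):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

pretrained_name = "w11wo/javanese-gpt2-small-imdb-classifier"
tokenizer = AutoTokenizer.from_pretrained(pretrained_name)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_name)

inputs = tokenizer("Film sing apik banget!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
# Map each probability back to the label name stored in the config.
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(float(p), 4))
```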
## Disclaimer
Do consider the biases which came from the IMDB review that may be carried over into the results of this model.
## Author
Javanese GPT-2 Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
w11wo/javanese-roberta-small-imdb-classifier | e5218bb8a88a270b1ff8b28cd1857bb60a8ec1ef | 2022-02-14T16:19:37.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1907.11692",
"transformers",
"javanese-roberta-small-imdb-classifier",
"license:mit"
] | text-classification | false | w11wo | null | w11wo/javanese-roberta-small-imdb-classifier | 3 | null | transformers | 21,853 | ---
language: jv
tags:
- javanese-roberta-small-imdb-classifier
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Aku bakal menehi rating film iki 1 bintang."
---
## Javanese RoBERTa Small IMDB Classifier
Javanese RoBERTa Small IMDB Classifier is a movie-review classification model based on the [RoBERTa model](https://arxiv.org/abs/1907.11692). It was trained on Javanese IMDB movie reviews.
The model was originally [`w11wo/javanese-roberta-small-imdb`](https://huggingface.co/w11wo/javanese-roberta-small-imdb) which is then fine-tuned on the [`w11wo/imdb-javanese`](https://huggingface.co/datasets/w11wo/imdb-javanese) dataset consisting of Javanese IMDB movie reviews. It achieved an accuracy of 77.70% on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|------------------------------------------|---------|------------------|---------------------------------|
| `javanese-roberta-small-imdb-classifier` | 124M | RoBERTa Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | accuracy | total time |
|------------|------------|------------|-------------|
| 0.281 | 0.593 | 0.777 | 1:48:31 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-roberta-small-imdb-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Film sing apik banget!")
```
## Disclaimer
Do consider the biases which came from the IMDB review that may be carried over into the results of this model.
## Author
Javanese RoBERTa Small IMDB Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
w11wo/javanese-roberta-small-imdb | 6eebc1eafad67f04403e55e15098a63ef84a9e4c | 2022-02-14T16:17:51.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"jv",
"dataset:w11wo/imdb-javanese",
"arxiv:1907.11692",
"transformers",
"javanese-roberta-small-imdb",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | w11wo | null | w11wo/javanese-roberta-small-imdb | 3 | null | transformers | 21,854 | ---
language: jv
tags:
- javanese-roberta-small-imdb
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Aku bakal menehi rating film iki 5 <mask>."
---
## Javanese RoBERTa Small IMDB
Javanese RoBERTa Small IMDB is a masked language model based on the [RoBERTa model](https://arxiv.org/abs/1907.11692). It was trained on Javanese IMDB movie reviews.
The model was originally the pretrained [Javanese RoBERTa Small model](https://huggingface.co/w11wo/javanese-roberta-small) and is later fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 20.83 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|-------------------------------|----------|-------------------|---------------------------------|
| `javanese-roberta-small-imdb` | 124M | RoBERTa Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|-------------|
| 3.140 | 3.036 | 20.83 | 2:59:28 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-roberta-small-imdb"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing <mask> bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/javanese-roberta-small-imdb"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases which came from the IMDB review that may be carried over into the results of this model.
## Author
Javanese RoBERTa Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
w11wo/sundanese-bert-base-emotion-classifier | 1559848cfe7e72ee1e1b767dab3d4d7971b607d9 | 2022-02-26T13:15:42.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"su",
"arxiv:1810.04805",
"transformers",
"sundanese-bert-base-emotion-classifier",
"license:mit"
] | text-classification | false | w11wo | null | w11wo/sundanese-bert-base-emotion-classifier | 3 | null | transformers | 21,855 | ---
language: su
tags:
- sundanese-bert-base-emotion-classifier
license: mit
widget:
- text: "Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah"
---
## Sundanese BERT Base Emotion Classifier
Sundanese BERT Base Emotion Classifier is an emotion-text-classification model based on the [BERT](https://arxiv.org/abs/1810.04805) model. The model was originally the pre-trained [Sundanese BERT Base Uncased](https://hf.co/luche/bert-base-sundanese-uncased) model trained by [`@luche`](https://hf.co/luche), which is then fine-tuned on the [Sundanese Twitter dataset](https://github.com/virgantara/sundanese-twitter-dataset), consisting of Sundanese tweets.
10% of the dataset is kept for evaluation purposes. After training, the model achieved an evaluation accuracy of 96.82% and F1-macro of 96.75%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------- | ------- | --------- | ------------------------------- |
| `sundanese-bert-base-emotion-classifier` | 110M | BERT Base | Sundanese Twitter dataset |
## Evaluation Results
The model was trained for 10 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.759800 | 0.263913 | 0.924603 | 0.925042 | 0.928426 | 0.926130 |
| 2 | 0.213100 | 0.456022 | 0.908730 | 0.906732 | 0.924141 | 0.907846 |
| 3 | 0.091900 | 0.204323 | 0.956349 | 0.955896 | 0.956226 | 0.956248 |
| 4 | 0.043800 | 0.219143 | 0.956349 | 0.955705 | 0.955848 | 0.956392 |
| 5 | 0.013700 | 0.247289 | 0.960317 | 0.959734 | 0.959477 | 0.960782 |
| 6 | 0.004800 | 0.286636 | 0.956349 | 0.955540 | 0.956519 | 0.956615 |
| 7 | 0.000200 | 0.243408 | 0.960317 | 0.959085 | 0.959145 | 0.959310 |
| 8 | 0.001500 | 0.232138 | 0.960317 | 0.959451 | 0.959427 | 0.959997 |
| 9 | 0.000100 | 0.215523 | 0.968254 | 0.967556 | 0.967192 | 0.968330 |
| 10 | 0.000100 | 0.216533 | 0.968254 | 0.967556 | 0.967192 | 0.968330 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/sundanese-bert-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah")
```
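To see the score of every emotion class instead of only the top prediction, the same pipeline can return all scores; a small sketch (`return_all_scores=True` matches the transformers 4.x era of this card; newer versions use `top_k=None` instead):

```python
from transformers import pipeline

pretrained_name = "w11wo/sundanese-bert-base-emotion-classifier"
nlp = pipeline(
    "sentiment-analysis",
    model=pretrained_name,
    tokenizer=pretrained_name,
    return_all_scores=True
)
nlp("Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah")
```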
## Disclaimer
Do consider the biases which come from both the pre-trained BERT model and the Sundanese Twitter dataset that may be carried over into the results of this model.
## Author
Sundanese BERT Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` |
weizhen/prophetnet-large-uncased-squad-qg | 1dc2b39111484cf90d0306c46740180af7a41958 | 2020-10-20T18:25:13.000Z | [
"pytorch",
"prophetnet",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | weizhen | null | weizhen/prophetnet-large-uncased-squad-qg | 3 | null | transformers | 21,856 | Entry not found |
wesam266/wav2vec2-xls-r-300m_all_ds_v1 | 64ad8a5d8ecf863581476aa5747a4bf7c4764fa5 | 2022-01-22T11:44:56.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | wesam266 | null | wesam266/wav2vec2-xls-r-300m_all_ds_v1 | 3 | null | transformers | 21,857 | Entry not found |
wilsontam/bert-base-uncased-dstc10-kb-title-body-validate | f934fcc86884bca07f18c14f98f41c32cb88f48e | 2021-12-26T04:16:02.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"transformers",
"dstc10",
"knowledge title-body validation"
] | text-classification | false | wilsontam | null | wilsontam/bert-base-uncased-dstc10-kb-title-body-validate | 3 | null | transformers | 21,858 | ---
language: "en"
tags:
- dstc10
- knowledge title-body validation
widget:
- text: "Can you accommodate large groups? It does not offer free WiFi."
- text: "Is there a gym on site? It does not have an onsite fitness center."
---
This is the model used for knowledge clustering, where we feed a title-body pair and the classifier predicts whether the pair is valid or not.
For further information, please refer to https://github.com/yctam/dstc10_track2_task2 for the Github repository.
Credit: Jiakai Zou, Wilson Tam
---
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification
def single_test(tokenizer, model, title_body_pair):
    result = tokenizer([title_body_pair], return_tensors="pt")
    model.eval()
    outputs = model(**result)
    predictions = outputs.logits.argmax(dim=-1)
    # There was a mistake in flipping the labels, so prediction 0 means "valid".
    return True if predictions == 0 else False

if __name__ == '__main__':
    model_name = "wilsontam/bert-base-uncased-dstc10-kb-title-body-validate"
    config = AutoConfig.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)

    sentence = "Can I check in anytime?"
    body = "Yes, 24 Hours Front Desk Avaliable."
    print(single_test(tokenizer, model, (sentence, body)))  # Expect: True
``` |
woolee/fine_tuned_example_model | a62c70ff1bfb06f32b779f60e3ff63c3ee8813a2 | 2021-11-04T08:02:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | woolee | null | woolee/fine_tuned_example_model | 3 | null | transformers | 21,859 | Entry not found |
xhyi/PT_GPTNEO1300_Delish_v6 | 55e8ba2cc0879c4a188fb5b670f8569da9e8aa87 | 2021-09-02T22:29:48.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | xhyi | null | xhyi/PT_GPTNEO1300_Delish_v6 | 3 | null | transformers | 21,860 |
# Delish v6 (GPT-Neo 1.3B)
This model is from the DelishBot project.
|
yazdipour/text-to-sparql-t5-base-2021-10-17_23-40 | dd15c54db1ba75e4ae96568ab7df9d9a0a67d5fc | 2021-10-18T02:23:08.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | yazdipour | null | yazdipour/text-to-sparql-t5-base-2021-10-17_23-40 | 3 | null | transformers | 21,861 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-base-2021-10-17_23-40
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.2649857699871063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-17_23-40
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2645
- Gen Len: 19.0
- P: 0.5125
- R: 0.0382
- F1: 0.2650
- Score: 5.1404
- Bleu-precisions: [88.49268497650789, 75.01025204252232, 66.60779038484033, 63.18383699935422]
- Bleu-bp: 0.0707
## Model description
More information needed
## Intended uses & limitations
More information needed
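A minimal inference sketch, assuming the checkpoint follows the standard T5 text2text interface (whether the model expects a task prefix is not documented here, so the plain question below is an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "yazdipour/text-to-sparql-t5-base-2021-10-17_23-40"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "Who is the president of France?"  # illustrative natural-language input
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```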
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3513 | 1.0 | 4807 | 0.2645 | 19.0 | 0.5125 | 0.0382 | 0.2650 | 5.1404 | [88.49268497650789, 75.01025204252232, 66.60779038484033, 63.18383699935422] | 0.0707 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yerevann/m3-gen-only-generator | 38877cfbb8817ebe235ba33bb2148c395c4b37ee | 2020-05-04T13:37:40.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | yerevann | null | yerevann/m3-gen-only-generator | 3 | null | transformers | 21,862 | Entry not found |
yhavinga/t5-base-dutch | d7a21f82598ac5f282101c433438393435209dbb | 2022-06-14T10:28:36.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"nl",
"dataset:yhavinga/mc4_nl_cleaned",
"arxiv:1910.10683",
"arxiv:2109.10686",
"transformers",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | yhavinga | null | yhavinga/t5-base-dutch | 3 | null | transformers | 21,863 | ---
language:
- nl
datasets:
- yhavinga/mc4_nl_cleaned
tags:
- t5
- seq2seq
inference: false
license: apache-2.0
---
# t5-base-dutch
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
& [Dat Nguyen](https://www.linkedin.com/in/dat-nguyen-49a641138/) during the [Hugging Face community week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google, for the project [Pre-train T5 from scratch in Dutch](https://discuss.huggingface.co/t/pretrain-t5-from-scratch-in-dutch/8109).
See also the fine-tuned [t5-base-dutch-demo](https://huggingface.co/flax-community/t5-base-dutch-demo) model,
and the demo application **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)**,
both of which are based on this model.
**5 Jan 2022: Model updated. Evaluation accuracy increased from 0.64 to 0.70.**
**11 Jan 2022: See also [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) with eval acc 0.78**
This **t5** model has **222M** parameters.
It was pre-trained with the masked language modeling objective on the dataset
`mc4_nl_cleaned` config `full` for **1** epoch and a duration of **2d9h**,
with a sequence length of **512**, batch size **128** and **527500** total steps (**35B** tokens).
Pre-training evaluation loss and accuracy are **1,38** and **0,70**.
Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.
* Pre-trained T5 models need to be fine-tuned before they can be used for downstream tasks (a loading sketch follows below); the inference widget on the right has therefore been turned off.
* For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for
the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!
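Loading the checkpoint as a starting point for fine-tuning follows the standard T5 pattern; a minimal sketch (the Dutch prompt and target below are illustrative, and the full training loop is up to the downstream task):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("yhavinga/t5-base-dutch")
model = T5ForConditionalGeneration.from_pretrained("yhavinga/t5-base-dutch")

# One seq2seq training step: encode an input/target pair and compute the loss.
inputs = tokenizer("vat samen: Dit is een voorbeeldartikel.", return_tensors="pt")
labels = tokenizer("Een voorbeeld.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
```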
Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture
and configs, though it must be noted that this model (t5-base-dutch) is unrelated to these projects and not an 'official' checkpoint.
* **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*.
* **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
## Tokenizer
The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers
and has 32003 tokens.
It was trained on Dutch mc4 with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
See [./raw/main/tokenizer.json](tokenizer.json) for details.
## Dataset(s)
All models listed below are pre-trained on
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4.
The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix).
## Dutch T5 Models
Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models).
`t5-base-dutch` is the only model with an original T5 config.
The other model types t5-v1.1 and t5-eff have `gated-gelu` instead of `relu` as the activation function,
and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`).
The T5-eff models differ in their number of layers. The table below lists the dimensions of these models. Not all t5-eff models are efficient; the clearest example is the inefficient
`t5-xl-4L-dutch-english-cased`.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 |
| *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 |
## Evaluation
Most models from the list above have been evaluated on summarization and translation.
The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better)
and y-axis the summarization Rouge1 translation score (higher is better).
Point size is proportional to the model size. Models with faster inference speed are plotted in green, models with slower inference speed in blue.

The next two sections provide more information on how the evaluation was performed.
## Evaluation on summarization
The models below have been evaluated for summarization on 50K samples from the CNN Dailymail dataset.
All models were fine-tuned with the AdamW optimizer with a batch size of 128 and constant learning rate of 1e-3 after a
warmup of 32 steps, with a label smoothing factor of 0.05. Article and summary token lengths were set to 1024 and 142.
NB: the evaluation checkpoints are not saved, since they were trained for comparison of pre-trained models only.
The numbers reported are the Rouge scores on 1000 documents from the test split. The rouge1 score is visualized in the figure above.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 |
| *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 |
| *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 |
| *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 |
| *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 |
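The fine-tuning configuration described above maps roughly onto Hugging Face's `Seq2SeqTrainingArguments`; a hedged sketch (argument names follow the transformers API, values follow this card, and the output path and per-device batch interpretation are assumptions):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-dutch-cnn-summarization",  # hypothetical output path
    per_device_train_batch_size=128,               # the card reports a total batch size of 128
    learning_rate=1e-3,                            # constant after warmup, per the card
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=32,
    label_smoothing_factor=0.05,
    predict_with_generate=True,
)
```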
## Evaluation on translation
The models below have been evaluated for English to Dutch translation on 50K samples from the CCMatrix dataset.
Note that the first four models are pre-trained on Dutch only. That they still perform adequately is probably because
the translation direction is English to Dutch.
All models were fine-tuned with the AdamW optimizer with a batch size of 128 and constant learning rate of 5e-5 after a
warmup of 32 steps, with a label smoothing factor of 0.1 and maximum sequence length of 128 tokens.
The numbers reported are the Bleu scores on 1000 documents from the test split.
NB: the evaluation checkpoints are not saved, since they were trained for comparison of pre-trained models only.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 |
| *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 |
| *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 |
| *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 |
| *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 |
| *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
## Translation models
The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language
directions on the first 25M samples from CCMatrix, giving a total of 50M training samples.
Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books.
The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score
averaged over all three evaluation datasets. The best scores are displayed in bold for both translation directions.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) |
|:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------|
| *source_lang* | en | nl | en | nl |
| *target_lang* | nl | en | nl | en |
| *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: |
| *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** |
| *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 |
| *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 |
| *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 |
| *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 |
| *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 |
| *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 |
| *max_source_length* | 128 | 128 | 128 | 128 |
| *max_target_length* | 128 | 128 | 128 | 128 |
| *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 |
| *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 |
| *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 |
| *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 |
| *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 |
| *train_batch_size* | 128 | 128 | 128 | 128 |
| *warmup_steps* | 2000 | 2000 | 2000 | 2000 |
| *total steps* | 390625 | 390625 | 390625 | 390625 |
| *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h |
| *num parameters* | 729M | 729M | 250M | 250M |
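The `source_prefix` rows above translate directly into a generation call; a minimal sketch assuming the `t5-small-24L-ccmatrix-multi` checkpoint loads with the standard seq2seq classes:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "yhavinga/t5-small-24L-ccmatrix-multi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The task prefix selects the translation direction, as listed in the table above.
text = "translate English to Dutch: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```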
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts
of the training. Weights & Biases made it possible to keep track of many training sessions
and orchestrate hyper-parameter sweeps with insightful visualizations.
The following repositories were helpful in setting up the TPU VM
and getting an idea of sensible hyper-parameters for training T5 from scratch:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
yokonav/xlm-roberta-base-finetuned-marc-en | f8294fdfde1abcbcc0fa6476e1611fc3632d7bc2 | 2021-10-22T13:36:59.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | yokonav | null | yokonav/xlm-roberta-base-finetuned-marc-en | 3 | null | transformers | 21,864 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9177
- Mae: 0.4756
## Model description
More information needed
## Intended uses & limitations
More information needed
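A minimal inference sketch (the checkpoint was fine-tuned on English Amazon reviews; the label names, presumably star ratings, come from the checkpoint's config):

```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="yokonav/xlm-roberta-base-finetuned-marc-en",
)
print(classifier("I loved this product, it works perfectly!"))  # illustrative review
```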
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.136 | 1.0 | 235 | 0.9515 | 0.4756 |
| 0.9724 | 2.0 | 470 | 0.9177 | 0.4756 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
yoshitomo-matsubara/bert-base-uncased-mnli_from_bert-large-uncased-mnli | eca7edf0862d0a745e2029cde6a43a24736b9d67 | 2021-06-03T05:02:16.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mnli",
"dataset:ax",
"transformers",
"mnli",
"ax",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-mnli_from_bert-large-uncased-mnli | 3 | null | transformers | 21,865 | ---
language: en
tags:
- bert
- mnli
- ax
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- mnli
- ax
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on the MNLI dataset, using a fine-tuned `bert-large-uncased` as the teacher model, with [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mnli/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
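A minimal inference sketch for the distilled model on an NLI pair (the label names are read from the checkpoint's config rather than hard-coded, since the MNLI label ordering is not documented here):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "yoshitomo-matsubara/bert-base-uncased-mnli_from_bert-large-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```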
|
yoshitomo-matsubara/bert-large-uncased-qqp | d62bf3d71b6e402c1b1c307982763ea4bcbd0850 | 2021-05-29T21:33:37.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qqp",
"transformers",
"qqp",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-large-uncased-qqp | 3 | null | transformers | 21,866 | ---
language: en
tags:
- bert
- qqp
- glue
- torchdistill
license: apache-2.0
datasets:
- qqp
metrics:
- f1
- accuracy
---
`bert-large-uncased` fine-tuned on the QQP dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
youngfan918/bert_cn_finetuning | 2f761306fb4b2be56feeb413ebe8dace13ea2c81 | 2021-05-20T09:33:15.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | youngfan918 | null | youngfan918/bert_cn_finetuning | 3 | null | transformers | 21,867 | Entry not found |
youngfan918/bert_finetuning_test | 424629521b5887749ce0a18a6e6d52d2cae029df | 2021-05-20T09:34:11.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | youngfan918 | null | youngfan918/bert_finetuning_test | 3 | null | transformers | 21,868 | Entry not found |
ytlin/16l3xf7a_1 | 7c1fe62a60955cfc8a50fd4602d76f5970efee78 | 2021-05-23T13:47:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ytlin | null | ytlin/16l3xf7a_1 | 3 | null | transformers | 21,869 | Entry not found |
ytlin/18ygyqcn_4 | d759b90378b7d4342641218a8b068153d0c0b46f | 2021-05-23T13:48:01.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ytlin | null | ytlin/18ygyqcn_4 | 3 | null | transformers | 21,870 | Entry not found |
ytlin/2sk5p244 | 5e7cb3a6bbd8e4684e474fe9827fe57ddc2e80eb | 2020-10-06T06:38:22.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ytlin | null | ytlin/2sk5p244 | 3 | null | transformers | 21,871 | Entry not found |
ytlin/31r11ahz_2 | bd8a5617990e8385529bf1b46a3667828b2d419e | 2020-10-04T10:44:59.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ytlin | null | ytlin/31r11ahz_2 | 3 | null | transformers | 21,872 | Entry not found |
yxchar/tlm-imdb-large-scale | d1d97e4b66c6a42842e53c10be563ec6958b47be | 2021-11-04T10:08:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-imdb-large-scale | 3 | null | transformers | 21,873 | Entry not found |
yxchar/tlm-sciie-small-scale | c3786965383c66a33d3de50b248da88ef7e621cd | 2021-11-04T17:27:08.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | yxchar | null | yxchar/tlm-sciie-small-scale | 3 | null | transformers | 21,874 | Entry not found |
zharry29/goal_benchmark_bert | 7b9b669435ea00252ba1c693202e09a67197e846 | 2021-05-20T09:42:25.000Z | [
"pytorch",
"jax",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/goal_benchmark_bert | 3 | null | transformers | 21,875 | Entry not found |
zharry29/intent_enwh_rl | f89f1b1f4cf6ba0c6bc02c20e46dcca9a2274b54 | 2020-09-16T20:10:41.000Z | [
"pytorch",
"xlm-roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_enwh_rl | 3 | null | transformers | 21,876 | Entry not found |
zharry29/intent_fb-en_id_rl | 155cd162f8c087c3c8db69fe2f6bddb0831496cf | 2021-05-20T23:27:13.000Z | [
"pytorch",
"jax",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_fb-en_id_rl | 3 | null | transformers | 21,877 | Entry not found |
zharry29/intent_fb-es_wh_id | 0650003d4616e0dd46c3fd3b932065add01afa16 | 2020-09-16T20:15:03.000Z | [
"pytorch",
"xlm-roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_fb-es_wh_id | 3 | null | transformers | 21,878 | Entry not found |
zharry29/intent_fb-th_enwh_id | 98d7571c63b2591150ed6e22cc905644bc9ee6d3 | 2020-09-16T20:15:38.000Z | [
"pytorch",
"xlm-roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/intent_fb-th_enwh_id | 3 | null | transformers | 21,879 | Entry not found |
zharry29/step_benchmark_bert | 120d6fc3940fb7b273379b569bf087a3ef8a4c17 | 2021-05-20T09:44:40.000Z | [
"pytorch",
"jax",
"bert",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/step_benchmark_bert | 3 | null | transformers | 21,880 | Entry not found |
zharry29/step_benchmark_roberta | fea72463b57d4d73f5352c1dccec5522785b31ac | 2021-05-20T23:52:30.000Z | [
"pytorch",
"jax",
"roberta",
"multiple-choice",
"transformers"
] | multiple-choice | false | zharry29 | null | zharry29/step_benchmark_roberta | 3 | null | transformers | 21,881 | Entry not found |
zhuqing/bert-base-uncased-mumsnet-first-classification-t | 8ce49546883879389c4bf39434504b8f6bcddbae | 2021-08-11T14:09:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | zhuqing | null | zhuqing/bert-base-uncased-mumsnet-first-classification-t | 3 | null | transformers | 21,882 | Entry not found |
zhuqing/bert-base-uncased-mumsnet-first-classification | 5ecfd13c956af85c5cfea0686964d08b68c3aef8 | 2021-08-11T10:37:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | zhuqing | null | zhuqing/bert-base-uncased-mumsnet-first-classification | 3 | null | transformers | 21,883 | Entry not found |
zhuqing/bert-base-uncased-mumsnet-pf-all_classification | 4c47c05c54284a9315f79e8fb68c17c107bad557 | 2021-08-14T18:38:55.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | zhuqing | null | zhuqing/bert-base-uncased-mumsnet-pf-all_classification | 3 | null | transformers | 21,884 | Entry not found |
zhuqing/bert-large-whole-uncased-exp2-feminist | 801c8f25de05eb0ec0f9b4c9f00da54141d7d763 | 2021-08-29T09:12:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-large-whole-uncased-exp2-feminist | 3 | null | transformers | 21,885 | Entry not found |
zhuqing/bert-large-whole-uncased-exp3-parent-nointersection | d5371b54536d6030ed131612f342528730226259 | 2021-08-29T14:09:49.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/bert-large-whole-uncased-exp3-parent-nointersection | 3 | null | transformers | 21,886 | Entry not found |
zhuqing/distilbert-base-uncased-netmums-parent | ab2ecdc0a88fb4ab4982217d0b8910be5b0a8e07 | 2021-08-20T06:46:05.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/distilbert-base-uncased-netmums-parent | 3 | null | transformers | 21,887 | Entry not found |
zhuqing/roberta-base-uncased-netmums-classification-intersection-2 | 5f34d5eaf7ad47170bc44924d667e9093ce66141 | 2021-08-23T18:59:52.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | zhuqing | null | zhuqing/roberta-base-uncased-netmums-classification-intersection-2 | 3 | null | transformers | 21,888 | Entry not found |
zhuqing/roberta-base-uncased-parent-intersection | b4e552ebd7afed9f2a1eec92c8ad25ac7eea837f | 2021-08-23T06:48:09.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zhuqing | null | zhuqing/roberta-base-uncased-parent-intersection | 3 | null | transformers | 21,889 | Entry not found |
zloelias/bert-base-uncased-kinopoisk-reviews-finetuned-clf | f188c61772e1d3db4ffcbe0d39dcf7c8f6327583 | 2021-12-01T16:29:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | zloelias | null | zloelias/bert-base-uncased-kinopoisk-reviews-finetuned-clf | 3 | null | transformers | 21,890 | Entry not found |
zqf03118/bert_cn_finetuning | 0a634bdffc7ae503a327c2219782cf5ac538709a | 2021-05-20T09:55:48.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | zqf03118 | null | zqf03118/bert_cn_finetuning | 3 | null | transformers | 21,891 | Entry not found |
wietsedv/xlm-roberta-base-ft-udpos28-ca | 8861408789d83268eb4c11673f2e5278a9d98e63 | 2022-02-25T09:58:08.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ca",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-ca | 3 | null | transformers | 21,892 |
---
language:
- ca
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-ca
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 86.3
- type: accuracy
name: Dutch Test accuracy
value: 87.2
- type: accuracy
name: German Test accuracy
value: 79.2
- type: accuracy
name: Italian Test accuracy
value: 90.2
- type: accuracy
name: French Test accuracy
value: 90.7
- type: accuracy
name: Spanish Test accuracy
value: 94.8
- type: accuracy
name: Russian Test accuracy
value: 89.1
- type: accuracy
name: Swedish Test accuracy
value: 89.5
- type: accuracy
name: Norwegian Test accuracy
value: 84.7
- type: accuracy
name: Danish Test accuracy
value: 89.3
- type: accuracy
name: Low Saxon Test accuracy
value: 53.3
- type: accuracy
name: Akkadian Test accuracy
value: 41.0
- type: accuracy
name: Armenian Test accuracy
value: 84.7
- type: accuracy
name: Welsh Test accuracy
value: 66.0
- type: accuracy
name: Old East Slavic Test accuracy
value: 77.4
- type: accuracy
name: Albanian Test accuracy
value: 79.2
- type: accuracy
name: Slovenian Test accuracy
value: 79.1
- type: accuracy
name: Guajajara Test accuracy
value: 32.9
- type: accuracy
name: Kurmanji Test accuracy
value: 78.2
- type: accuracy
name: Turkish Test accuracy
value: 76.2
- type: accuracy
name: Finnish Test accuracy
value: 84.7
- type: accuracy
name: Indonesian Test accuracy
value: 84.5
- type: accuracy
name: Ukrainian Test accuracy
value: 87.5
- type: accuracy
name: Polish Test accuracy
value: 87.4
- type: accuracy
name: Portuguese Test accuracy
value: 91.4
- type: accuracy
name: Kazakh Test accuracy
value: 80.6
- type: accuracy
name: Latin Test accuracy
value: 79.3
- type: accuracy
name: Old French Test accuracy
value: 66.5
- type: accuracy
name: Buryat Test accuracy
value: 62.8
- type: accuracy
name: Kaapor Test accuracy
value: 27.5
- type: accuracy
name: Korean Test accuracy
value: 61.6
- type: accuracy
name: Estonian Test accuracy
value: 87.2
- type: accuracy
name: Croatian Test accuracy
value: 88.8
- type: accuracy
name: Gothic Test accuracy
value: 29.1
- type: accuracy
name: Swiss German Test accuracy
value: 42.1
- type: accuracy
name: Assyrian Test accuracy
value: 17.2
- type: accuracy
name: North Sami Test accuracy
value: 41.0
- type: accuracy
name: Naija Test accuracy
value: 40.3
- type: accuracy
name: Latvian Test accuracy
value: 85.0
- type: accuracy
name: Chinese Test accuracy
value: 32.3
- type: accuracy
name: Tagalog Test accuracy
value: 72.5
- type: accuracy
name: Bambara Test accuracy
value: 29.8
- type: accuracy
name: Lithuanian Test accuracy
value: 84.1
- type: accuracy
name: Galician Test accuracy
value: 88.8
- type: accuracy
name: Vietnamese Test accuracy
value: 65.2
- type: accuracy
name: Greek Test accuracy
value: 85.9
- type: accuracy
name: Catalan Test accuracy
value: 98.7
- type: accuracy
name: Czech Test accuracy
value: 89.3
- type: accuracy
name: Erzya Test accuracy
value: 50.9
- type: accuracy
name: Bhojpuri Test accuracy
value: 49.7
- type: accuracy
name: Thai Test accuracy
value: 43.4
- type: accuracy
name: Marathi Test accuracy
value: 82.2
- type: accuracy
name: Basque Test accuracy
value: 74.9
- type: accuracy
name: Slovak Test accuracy
value: 89.6
- type: accuracy
name: Kiche Test accuracy
value: 39.2
- type: accuracy
name: Yoruba Test accuracy
value: 28.8
- type: accuracy
name: Warlpiri Test accuracy
value: 36.4
- type: accuracy
name: Tamil Test accuracy
value: 82.2
- type: accuracy
name: Maltese Test accuracy
value: 36.2
- type: accuracy
name: Ancient Greek Test accuracy
value: 62.0
- type: accuracy
name: Icelandic Test accuracy
value: 83.2
- type: accuracy
name: Mbya Guarani Test accuracy
value: 32.6
- type: accuracy
name: Urdu Test accuracy
value: 65.2
- type: accuracy
name: Romanian Test accuracy
value: 84.8
- type: accuracy
name: Persian Test accuracy
value: 76.7
- type: accuracy
name: Apurina Test accuracy
value: 37.3
- type: accuracy
name: Japanese Test accuracy
value: 19.9
- type: accuracy
name: Hungarian Test accuracy
value: 87.2
- type: accuracy
name: Hindi Test accuracy
value: 68.8
- type: accuracy
name: Classical Chinese Test accuracy
value: 19.2
- type: accuracy
name: Komi Permyak Test accuracy
value: 52.6
- type: accuracy
name: Faroese Test accuracy
value: 76.4
- type: accuracy
name: Sanskrit Test accuracy
value: 38.4
- type: accuracy
name: Livvi Test accuracy
value: 64.0
- type: accuracy
name: Arabic Test accuracy
value: 79.2
- type: accuracy
name: Wolof Test accuracy
value: 38.2
- type: accuracy
name: Bulgarian Test accuracy
value: 89.9
- type: accuracy
name: Akuntsu Test accuracy
value: 43.4
- type: accuracy
name: Makurap Test accuracy
value: 23.3
- type: accuracy
name: Kangri Test accuracy
value: 44.9
- type: accuracy
name: Breton Test accuracy
value: 63.5
- type: accuracy
name: Telugu Test accuracy
value: 85.0
- type: accuracy
name: Cantonese Test accuracy
value: 40.5
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 57.8
- type: accuracy
name: Karelian Test accuracy
value: 73.3
- type: accuracy
name: Upper Sorbian Test accuracy
value: 75.8
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 64.0
- type: accuracy
name: Komi Zyrian Test accuracy
value: 44.2
- type: accuracy
name: Irish Test accuracy
value: 67.2
- type: accuracy
name: Nayini Test accuracy
value: 50.0
- type: accuracy
name: Munduruku Test accuracy
value: 28.8
- type: accuracy
name: Manx Test accuracy
value: 35.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 41.3
- type: accuracy
name: Afrikaans Test accuracy
value: 86.0
- type: accuracy
name: Old Turkish Test accuracy
value: 45.7
- type: accuracy
name: Tupinamba Test accuracy
value: 36.6
- type: accuracy
name: Belarusian Test accuracy
value: 86.0
- type: accuracy
name: Serbian Test accuracy
value: 90.4
- type: accuracy
name: Moksha Test accuracy
value: 47.7
- type: accuracy
name: Western Armenian Test accuracy
value: 78.7
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 54.8
- type: accuracy
name: Khunsari Test accuracy
value: 47.3
- type: accuracy
name: Hebrew Test accuracy
value: 91.7
- type: accuracy
name: Uyghur Test accuracy
value: 75.4
- type: accuracy
name: Chukchi Test accuracy
value: 34.9
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Catalan
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ca")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ca")
```
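For quick end-to-end tagging — a minimal sketch, not part of the original card — the `token-classification` pipeline wraps the same checkpoint; the Catalan sentence is a hypothetical example:
```python
from transformers import pipeline

# One UPOS-style tag per subword token (no aggregation).
tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-ca")
for token in tagger("La Maria viu a Barcelona."):
    print(token["word"], token["entity"])
```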
|
wietsedv/xlm-roberta-base-ft-udpos28-de | eb3863c6e22a68188b8f48107a6ef8adbb5dca13 | 2022-02-25T09:58:16.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"de",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-de | 3 | null | transformers | 21,893 |
---
language:
- de
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-de
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 87.0
- type: accuracy
name: Dutch Test accuracy
value: 89.6
- type: accuracy
name: German Test accuracy
value: 97.2
- type: accuracy
name: Italian Test accuracy
value: 85.6
- type: accuracy
name: French Test accuracy
value: 84.8
- type: accuracy
name: Spanish Test accuracy
value: 88.4
- type: accuracy
name: Russian Test accuracy
value: 89.4
- type: accuracy
name: Swedish Test accuracy
value: 92.3
- type: accuracy
name: Norwegian Test accuracy
value: 87.7
- type: accuracy
name: Danish Test accuracy
value: 88.9
- type: accuracy
name: Low Saxon Test accuracy
value: 44.3
- type: accuracy
name: Akkadian Test accuracy
value: 21.4
- type: accuracy
name: Armenian Test accuracy
value: 85.6
- type: accuracy
name: Welsh Test accuracy
value: 69.0
- type: accuracy
name: Old East Slavic Test accuracy
value: 67.7
- type: accuracy
name: Albanian Test accuracy
value: 84.6
- type: accuracy
name: Slovenian Test accuracy
value: 76.5
- type: accuracy
name: Guajajara Test accuracy
value: 18.1
- type: accuracy
name: Kurmanji Test accuracy
value: 74.1
- type: accuracy
name: Turkish Test accuracy
value: 75.6
- type: accuracy
name: Finnish Test accuracy
value: 83.8
- type: accuracy
name: Indonesian Test accuracy
value: 82.2
- type: accuracy
name: Ukrainian Test accuracy
value: 89.0
- type: accuracy
name: Polish Test accuracy
value: 86.6
- type: accuracy
name: Portuguese Test accuracy
value: 87.8
- type: accuracy
name: Kazakh Test accuracy
value: 80.6
- type: accuracy
name: Latin Test accuracy
value: 75.8
- type: accuracy
name: Old French Test accuracy
value: 36.3
- type: accuracy
name: Buryat Test accuracy
value: 49.8
- type: accuracy
name: Kaapor Test accuracy
value: 11.7
- type: accuracy
name: Korean Test accuracy
value: 61.4
- type: accuracy
name: Estonian Test accuracy
value: 86.6
- type: accuracy
name: Croatian Test accuracy
value: 88.8
- type: accuracy
name: Gothic Test accuracy
value: 8.1
- type: accuracy
name: Swiss German Test accuracy
value: 54.4
- type: accuracy
name: Assyrian Test accuracy
value: 17.2
- type: accuracy
name: North Sami Test accuracy
value: 25.0
- type: accuracy
name: Naija Test accuracy
value: 28.2
- type: accuracy
name: Latvian Test accuracy
value: 83.9
- type: accuracy
name: Chinese Test accuracy
value: 52.6
- type: accuracy
name: Tagalog Test accuracy
value: 72.1
- type: accuracy
name: Bambara Test accuracy
value: 17.5
- type: accuracy
name: Lithuanian Test accuracy
value: 82.6
- type: accuracy
name: Galician Test accuracy
value: 85.2
- type: accuracy
name: Vietnamese Test accuracy
value: 60.8
- type: accuracy
name: Greek Test accuracy
value: 88.7
- type: accuracy
name: Catalan Test accuracy
value: 86.8
- type: accuracy
name: Czech Test accuracy
value: 87.4
- type: accuracy
name: Erzya Test accuracy
value: 33.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 46.5
- type: accuracy
name: Thai Test accuracy
value: 62.4
- type: accuracy
name: Marathi Test accuracy
value: 86.5
- type: accuracy
name: Basque Test accuracy
value: 77.3
- type: accuracy
name: Slovak Test accuracy
value: 87.6
- type: accuracy
name: Kiche Test accuracy
value: 21.6
- type: accuracy
name: Yoruba Test accuracy
value: 16.6
- type: accuracy
name: Warlpiri Test accuracy
value: 21.5
- type: accuracy
name: Tamil Test accuracy
value: 84.2
- type: accuracy
name: Maltese Test accuracy
value: 15.3
- type: accuracy
name: Ancient Greek Test accuracy
value: 62.0
- type: accuracy
name: Icelandic Test accuracy
value: 84.1
- type: accuracy
name: Mbya Guarani Test accuracy
value: 20.5
- type: accuracy
name: Urdu Test accuracy
value: 68.0
- type: accuracy
name: Romanian Test accuracy
value: 83.5
- type: accuracy
name: Persian Test accuracy
value: 76.0
- type: accuracy
name: Apurina Test accuracy
value: 22.2
- type: accuracy
name: Japanese Test accuracy
value: 36.2
- type: accuracy
name: Hungarian Test accuracy
value: 86.7
- type: accuracy
name: Hindi Test accuracy
value: 73.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 28.6
- type: accuracy
name: Komi Permyak Test accuracy
value: 34.9
- type: accuracy
name: Faroese Test accuracy
value: 76.6
- type: accuracy
name: Sanskrit Test accuracy
value: 9.4
- type: accuracy
name: Livvi Test accuracy
value: 50.9
- type: accuracy
name: Arabic Test accuracy
value: 79.4
- type: accuracy
name: Wolof Test accuracy
value: 21.1
- type: accuracy
name: Bulgarian Test accuracy
value: 91.1
- type: accuracy
name: Akuntsu Test accuracy
value: 14.4
- type: accuracy
name: Makurap Test accuracy
value: 1.4
- type: accuracy
name: Kangri Test accuracy
value: 40.5
- type: accuracy
name: Breton Test accuracy
value: 60.0
- type: accuracy
name: Telugu Test accuracy
value: 83.2
- type: accuracy
name: Cantonese Test accuracy
value: 48.9
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 38.7
- type: accuracy
name: Karelian Test accuracy
value: 64.4
- type: accuracy
name: Upper Sorbian Test accuracy
value: 65.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 66.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 28.4
- type: accuracy
name: Irish Test accuracy
value: 66.3
- type: accuracy
name: Nayini Test accuracy
value: 44.9
- type: accuracy
name: Munduruku Test accuracy
value: 8.0
- type: accuracy
name: Manx Test accuracy
value: 20.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 25.8
- type: accuracy
name: Afrikaans Test accuracy
value: 88.9
- type: accuracy
name: Old Turkish Test accuracy
value: 31.7
- type: accuracy
name: Tupinamba Test accuracy
value: 20.9
- type: accuracy
name: Belarusian Test accuracy
value: 89.5
- type: accuracy
name: Serbian Test accuracy
value: 89.8
- type: accuracy
name: Moksha Test accuracy
value: 31.3
- type: accuracy
name: Western Armenian Test accuracy
value: 77.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 56.5
- type: accuracy
name: Khunsari Test accuracy
value: 35.1
- type: accuracy
name: Hebrew Test accuracy
value: 91.7
- type: accuracy
name: Uyghur Test accuracy
value: 71.5
- type: accuracy
name: Chukchi Test accuracy
value: 29.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: German
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-de")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-de")
```
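The loaded objects can also be handed to a pipeline directly — a minimal sketch with a hypothetical German sentence:
```python
from transformers import pipeline

# Reuses the tokenizer and model created above.
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)
print(tagger("Der Hund schläft im Garten."))
```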
|
wietsedv/xlm-roberta-base-ft-udpos28-fro | 2fb7641edbd96ddacac82bb3d5dcff3b05ca2769 | 2022-02-25T09:58:31.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"fro",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-fro | 3 | null | transformers | 21,894 |
---
language:
- fro
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-fro
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 73.4
- type: accuracy
name: Dutch Test accuracy
value: 73.1
- type: accuracy
name: German Test accuracy
value: 70.7
- type: accuracy
name: Italian Test accuracy
value: 72.6
- type: accuracy
name: French Test accuracy
value: 79.3
- type: accuracy
name: Spanish Test accuracy
value: 78.0
- type: accuracy
name: Russian Test accuracy
value: 68.8
- type: accuracy
name: Swedish Test accuracy
value: 76.8
- type: accuracy
name: Norwegian Test accuracy
value: 69.6
- type: accuracy
name: Danish Test accuracy
value: 74.2
- type: accuracy
name: Low Saxon Test accuracy
value: 40.3
- type: accuracy
name: Akkadian Test accuracy
value: 38.3
- type: accuracy
name: Armenian Test accuracy
value: 64.7
- type: accuracy
name: Welsh Test accuracy
value: 56.3
- type: accuracy
name: Old East Slavic Test accuracy
value: 67.5
- type: accuracy
name: Albanian Test accuracy
value: 66.5
- type: accuracy
name: Slovenian Test accuracy
value: 64.2
- type: accuracy
name: Guajajara Test accuracy
value: 15.0
- type: accuracy
name: Kurmanji Test accuracy
value: 59.9
- type: accuracy
name: Turkish Test accuracy
value: 57.2
- type: accuracy
name: Finnish Test accuracy
value: 66.3
- type: accuracy
name: Indonesian Test accuracy
value: 66.9
- type: accuracy
name: Ukrainian Test accuracy
value: 66.7
- type: accuracy
name: Polish Test accuracy
value: 67.3
- type: accuracy
name: Portuguese Test accuracy
value: 73.1
- type: accuracy
name: Kazakh Test accuracy
value: 58.5
- type: accuracy
name: Latin Test accuracy
value: 65.3
- type: accuracy
name: Old French Test accuracy
value: 93.3
- type: accuracy
name: Buryat Test accuracy
value: 43.2
- type: accuracy
name: Kaapor Test accuracy
value: 25.8
- type: accuracy
name: Korean Test accuracy
value: 50.3
- type: accuracy
name: Estonian Test accuracy
value: 66.1
- type: accuracy
name: Croatian Test accuracy
value: 72.0
- type: accuracy
name: Gothic Test accuracy
value: 38.1
- type: accuracy
name: Swiss German Test accuracy
value: 34.6
- type: accuracy
name: Assyrian Test accuracy
value: 8.2
- type: accuracy
name: North Sami Test accuracy
value: 23.0
- type: accuracy
name: Naija Test accuracy
value: 40.4
- type: accuracy
name: Latvian Test accuracy
value: 65.2
- type: accuracy
name: Chinese Test accuracy
value: 36.4
- type: accuracy
name: Tagalog Test accuracy
value: 53.3
- type: accuracy
name: Bambara Test accuracy
value: 13.4
- type: accuracy
name: Lithuanian Test accuracy
value: 64.1
- type: accuracy
name: Galician Test accuracy
value: 71.6
- type: accuracy
name: Vietnamese Test accuracy
value: 46.7
- type: accuracy
name: Greek Test accuracy
value: 72.9
- type: accuracy
name: Catalan Test accuracy
value: 76.9
- type: accuracy
name: Czech Test accuracy
value: 68.8
- type: accuracy
name: Erzya Test accuracy
value: 25.4
- type: accuracy
name: Bhojpuri Test accuracy
value: 41.2
- type: accuracy
name: Thai Test accuracy
value: 52.2
- type: accuracy
name: Marathi Test accuracy
value: 51.5
- type: accuracy
name: Basque Test accuracy
value: 59.6
- type: accuracy
name: Slovak Test accuracy
value: 70.7
- type: accuracy
name: Kiche Test accuracy
value: 19.7
- type: accuracy
name: Yoruba Test accuracy
value: 18.3
- type: accuracy
name: Warlpiri Test accuracy
value: 15.8
- type: accuracy
name: Tamil Test accuracy
value: 62.0
- type: accuracy
name: Maltese Test accuracy
value: 28.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 56.3
- type: accuracy
name: Icelandic Test accuracy
value: 70.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 16.8
- type: accuracy
name: Urdu Test accuracy
value: 54.2
- type: accuracy
name: Romanian Test accuracy
value: 69.1
- type: accuracy
name: Persian Test accuracy
value: 65.4
- type: accuracy
name: Apurina Test accuracy
value: 24.5
- type: accuracy
name: Japanese Test accuracy
value: 31.0
- type: accuracy
name: Hungarian Test accuracy
value: 62.5
- type: accuracy
name: Hindi Test accuracy
value: 58.3
- type: accuracy
name: Classical Chinese Test accuracy
value: 41.9
- type: accuracy
name: Komi Permyak Test accuracy
value: 30.3
- type: accuracy
name: Faroese Test accuracy
value: 62.5
- type: accuracy
name: Sanskrit Test accuracy
value: 37.8
- type: accuracy
name: Livvi Test accuracy
value: 40.2
- type: accuracy
name: Arabic Test accuracy
value: 66.2
- type: accuracy
name: Wolof Test accuracy
value: 26.8
- type: accuracy
name: Bulgarian Test accuracy
value: 72.5
- type: accuracy
name: Akuntsu Test accuracy
value: 24.2
- type: accuracy
name: Makurap Test accuracy
value: 19.2
- type: accuracy
name: Kangri Test accuracy
value: 36.4
- type: accuracy
name: Breton Test accuracy
value: 47.3
- type: accuracy
name: Telugu Test accuracy
value: 58.4
- type: accuracy
name: Cantonese Test accuracy
value: 33.5
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 57.3
- type: accuracy
name: Karelian Test accuracy
value: 49.4
- type: accuracy
name: Upper Sorbian Test accuracy
value: 52.3
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 48.3
- type: accuracy
name: Komi Zyrian Test accuracy
value: 26.6
- type: accuracy
name: Irish Test accuracy
value: 46.7
- type: accuracy
name: Nayini Test accuracy
value: 41.0
- type: accuracy
name: Munduruku Test accuracy
value: 15.6
- type: accuracy
name: Manx Test accuracy
value: 16.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 20.0
- type: accuracy
name: Afrikaans Test accuracy
value: 77.0
- type: accuracy
name: Old Turkish Test accuracy
value: 2.7
- type: accuracy
name: Tupinamba Test accuracy
value: 23.5
- type: accuracy
name: Belarusian Test accuracy
value: 67.8
- type: accuracy
name: Serbian Test accuracy
value: 74.1
- type: accuracy
name: Moksha Test accuracy
value: 27.3
- type: accuracy
name: Western Armenian Test accuracy
value: 61.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 42.8
- type: accuracy
name: Khunsari Test accuracy
value: 32.4
- type: accuracy
name: Hebrew Test accuracy
value: 62.5
- type: accuracy
name: Uyghur Test accuracy
value: 55.0
- type: accuracy
name: Chukchi Test accuracy
value: 20.1
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Old French
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fro")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fro")
```
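Without the pipeline helper, inference is a plain forward pass — a minimal sketch using the objects loaded above; the example line is borrowed from the Chanson de Roland purely as illustration:
```python
import torch

inputs = tokenizer("Rollant est proz e Oliver est sage.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)
tags = [model.config.id2label[int(i)] for i in logits.argmax(dim=-1)[0]]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), tags)))
```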
|
wietsedv/xlm-roberta-base-ft-udpos28-la | 41fa99ec6490bae56b50aa06ef652ff4edd4a28a | 2022-02-25T09:58:58.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"la",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-la | 3 | null | transformers | 21,895 |
---
language:
- la
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-la
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 81.5
- type: accuracy
name: Dutch Test accuracy
value: 79.6
- type: accuracy
name: German Test accuracy
value: 78.2
- type: accuracy
name: Italian Test accuracy
value: 78.0
- type: accuracy
name: French Test accuracy
value: 78.1
- type: accuracy
name: Spanish Test accuracy
value: 79.8
- type: accuracy
name: Russian Test accuracy
value: 89.8
- type: accuracy
name: Swedish Test accuracy
value: 86.0
- type: accuracy
name: Norwegian Test accuracy
value: 81.5
- type: accuracy
name: Danish Test accuracy
value: 85.7
- type: accuracy
name: Low Saxon Test accuracy
value: 56.6
- type: accuracy
name: Akkadian Test accuracy
value: 44.7
- type: accuracy
name: Armenian Test accuracy
value: 86.4
- type: accuracy
name: Welsh Test accuracy
value: 65.1
- type: accuracy
name: Old East Slavic Test accuracy
value: 79.8
- type: accuracy
name: Albanian Test accuracy
value: 74.9
- type: accuracy
name: Slovenian Test accuracy
value: 77.4
- type: accuracy
name: Guajajara Test accuracy
value: 35.8
- type: accuracy
name: Kurmanji Test accuracy
value: 77.7
- type: accuracy
name: Turkish Test accuracy
value: 76.9
- type: accuracy
name: Finnish Test accuracy
value: 84.9
- type: accuracy
name: Indonesian Test accuracy
value: 82.0
- type: accuracy
name: Ukrainian Test accuracy
value: 87.8
- type: accuracy
name: Polish Test accuracy
value: 88.0
- type: accuracy
name: Portuguese Test accuracy
value: 82.3
- type: accuracy
name: Kazakh Test accuracy
value: 83.2
- type: accuracy
name: Latin Test accuracy
value: 92.9
- type: accuracy
name: Old French Test accuracy
value: 61.2
- type: accuracy
name: Buryat Test accuracy
value: 64.7
- type: accuracy
name: Kaapor Test accuracy
value: 34.2
- type: accuracy
name: Korean Test accuracy
value: 63.0
- type: accuracy
name: Estonian Test accuracy
value: 85.5
- type: accuracy
name: Croatian Test accuracy
value: 86.3
- type: accuracy
name: Gothic Test accuracy
value: 36.5
- type: accuracy
name: Swiss German Test accuracy
value: 47.8
- type: accuracy
name: Assyrian Test accuracy
value: 15.5
- type: accuracy
name: North Sami Test accuracy
value: 41.4
- type: accuracy
name: Naija Test accuracy
value: 41.9
- type: accuracy
name: Latvian Test accuracy
value: 89.1
- type: accuracy
name: Chinese Test accuracy
value: 44.3
- type: accuracy
name: Tagalog Test accuracy
value: 73.7
- type: accuracy
name: Bambara Test accuracy
value: 27.9
- type: accuracy
name: Lithuanian Test accuracy
value: 88.3
- type: accuracy
name: Galician Test accuracy
value: 81.7
- type: accuracy
name: Vietnamese Test accuracy
value: 68.0
- type: accuracy
name: Greek Test accuracy
value: 74.9
- type: accuracy
name: Catalan Test accuracy
value: 76.2
- type: accuracy
name: Czech Test accuracy
value: 86.3
- type: accuracy
name: Erzya Test accuracy
value: 50.8
- type: accuracy
name: Bhojpuri Test accuracy
value: 52.5
- type: accuracy
name: Thai Test accuracy
value: 61.6
- type: accuracy
name: Marathi Test accuracy
value: 88.3
- type: accuracy
name: Basque Test accuracy
value: 79.0
- type: accuracy
name: Slovak Test accuracy
value: 85.9
- type: accuracy
name: Kiche Test accuracy
value: 39.3
- type: accuracy
name: Yoruba Test accuracy
value: 29.9
- type: accuracy
name: Warlpiri Test accuracy
value: 40.9
- type: accuracy
name: Tamil Test accuracy
value: 85.7
- type: accuracy
name: Maltese Test accuracy
value: 32.8
- type: accuracy
name: Ancient Greek Test accuracy
value: 70.5
- type: accuracy
name: Icelandic Test accuracy
value: 81.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 33.1
- type: accuracy
name: Urdu Test accuracy
value: 61.3
- type: accuracy
name: Romanian Test accuracy
value: 83.1
- type: accuracy
name: Persian Test accuracy
value: 75.7
- type: accuracy
name: Apurina Test accuracy
value: 43.5
- type: accuracy
name: Japanese Test accuracy
value: 36.5
- type: accuracy
name: Hungarian Test accuracy
value: 74.5
- type: accuracy
name: Hindi Test accuracy
value: 67.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 38.2
- type: accuracy
name: Komi Permyak Test accuracy
value: 52.2
- type: accuracy
name: Faroese Test accuracy
value: 75.6
- type: accuracy
name: Sanskrit Test accuracy
value: 43.5
- type: accuracy
name: Livvi Test accuracy
value: 66.1
- type: accuracy
name: Arabic Test accuracy
value: 81.3
- type: accuracy
name: Wolof Test accuracy
value: 39.1
- type: accuracy
name: Bulgarian Test accuracy
value: 87.7
- type: accuracy
name: Akuntsu Test accuracy
value: 35.5
- type: accuracy
name: Makurap Test accuracy
value: 28.8
- type: accuracy
name: Kangri Test accuracy
value: 49.8
- type: accuracy
name: Breton Test accuracy
value: 59.8
- type: accuracy
name: Telugu Test accuracy
value: 84.3
- type: accuracy
name: Cantonese Test accuracy
value: 50.3
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 55.7
- type: accuracy
name: Karelian Test accuracy
value: 73.0
- type: accuracy
name: Upper Sorbian Test accuracy
value: 76.0
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 68.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 46.3
- type: accuracy
name: Irish Test accuracy
value: 64.1
- type: accuracy
name: Nayini Test accuracy
value: 44.9
- type: accuracy
name: Munduruku Test accuracy
value: 24.1
- type: accuracy
name: Manx Test accuracy
value: 39.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 43.5
- type: accuracy
name: Afrikaans Test accuracy
value: 74.8
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 45.2
- type: accuracy
name: Belarusian Test accuracy
value: 89.1
- type: accuracy
name: Serbian Test accuracy
value: 87.2
- type: accuracy
name: Moksha Test accuracy
value: 47.3
- type: accuracy
name: Western Armenian Test accuracy
value: 81.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 55.3
- type: accuracy
name: Khunsari Test accuracy
value: 43.2
- type: accuracy
name: Hebrew Test accuracy
value: 89.6
- type: accuracy
name: Uyghur Test accuracy
value: 76.8
- type: accuracy
name: Chukchi Test accuracy
value: 36.3
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Latin
This model is part of our paper:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-la")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-la")
```
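A minimal usage sketch with the objects loaded above; the sentence is the familiar opening of Caesar's De Bello Gallico, used here only as an example:
```python
from transformers import pipeline

# Per-token tags straight from the fine-tuned checkpoint.
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)
for token in tagger("Gallia est omnis divisa in partes tres."):
    print(token["word"], token["entity"])
```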
|
inovex/multi2convai-corona-fr-bert | 244944e9a31a9042ac45f4fb73a47293c131a645 | 2022-04-15T17:09:57.000Z | [
"pytorch",
"bert",
"text-classification",
"fr",
"transformers",
"license:mit"
] | text-classification | false | inovex | null | inovex/multi2convai-corona-fr-bert | 3 | null | transformers | 21,896 | ---
tags:
- text-classification
widget:
- text: "Dois-je porter un masque?"
license: mit
language: fr
---
# Multi2ConvAI-Corona: fine-tuned BERT for French
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: French (fr)
- model type: fine-tuned BERT
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-fr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-fr-bert")
````
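As a minimal sketch (not part of the original card), the `text-classification` pipeline can score the widget example; the intent label names come from the model's own config:
````python
from transformers import pipeline

classifier = pipeline("text-classification", model="inovex/multi2convai-corona-fr-bert")
print(classifier("Dois-je porter un masque?"))  # widget example: "Do I have to wear a mask?"
````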
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-corona-it-bert | e942c6cb58779ffebdc3215d86ca48cce3592620 | 2022-03-01T09:20:35.000Z | [
"pytorch",
"bert",
"text-classification",
"it",
"transformers",
"license:mit"
] | text-classification | false | inovex | null | inovex/multi2convai-corona-it-bert | 3 | null | transformers | 21,897 | ---
tags:
- text-classification
widget:
- text: "Devo indossare una maschera?"
license: mit
language: it
---
# Multi2ConvAI-Corona: fine-tuned BERT for Italian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: Italian (it)
- model type: fine-tuned BERT
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-it-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-it-bert")
````
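Manual inference is a short forward pass plus a softmax — a minimal sketch using the objects loaded above and the widget example from this card:
````python
import torch

inputs = tokenizer("Devo indossare una maschera?", return_tensors="pt")  # "Do I have to wear a mask?"
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
label_id = int(probs.argmax())
print(model.config.id2label[label_id], float(probs[0, label_id]))
````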
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
inovex/multi2convai-logistics-hr-bert | e77be6de312152b5cd49bd28d889b15ada4de072 | 2022-03-01T09:22:15.000Z | [
"pytorch",
"bert",
"text-classification",
"hr",
"transformers",
"license:mit"
] | text-classification | false | inovex | null | inovex/multi2convai-logistics-hr-bert | 3 | null | transformers | 21,898 | ---
tags:
- text-classification
widget:
- text: "gdje mogu staviti paket?"
license: mit
language: hr
---
# Multi2ConvAI-Logistics: fine-tuned BERT for Croatian
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Logistics (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/en/blog/use-cases))
- language: Croatian (hr)
- model type: fine-tuned BERT
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-hr-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-hr-bert")
````
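A minimal sketch reusing the loaded objects, with the widget example from this card:
````python
from transformers import pipeline

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("gdje mogu staviti paket?"))  # widget example: "where can I put the package?"
````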
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
Rattana/wav2vec2-thai-ASR | bcfe042eceaa204c93d63db2aea4e856aa8460e6 | 2022-02-25T02:08:35.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Rattana | null | Rattana/wav2vec2-thai-ASR | 3 | null | transformers | 21,899 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-thai-ASR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-thai-ASR
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6108
- Wer: 0.5636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
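A hedged sketch of the equivalent `TrainingArguments`; the output path is hypothetical, and dataset loading, the data collator, and the `Trainer` wiring are omitted:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-thai-ASR",   # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,    # total train batch size: 32 (on one device)
    warmup_steps=500,
    num_train_epochs=20,
    seed=42,
    fp16=True,                        # Native AMP mixed precision
)
```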
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.1123 | 2.65 | 400 | 3.3946 | 1.0002 |
| 1.5734 | 5.3 | 800 | 0.6881 | 0.7290 |
| 0.5934 | 7.94 | 1200 | 0.5789 | 0.6402 |
| 0.4059 | 10.59 | 1600 | 0.5496 | 0.5976 |
| 0.3136 | 13.24 | 2000 | 0.6109 | 0.5863 |
| 0.2546 | 15.89 | 2400 | 0.6113 | 0.5865 |
| 0.2184 | 18.54 | 2800 | 0.6108 | 0.5636 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|