modelId (string, 4–112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2–38 chars, nullable) | config (null) | id (string, 4–112 chars) | downloads (float64, 0–36.8M, nullable) | likes (float64, 0–712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0–38.5k) | readme (string, 0–186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-wls-en | be27f9d60a91dcf8571b7c23f90f41f375fe9bec | 2021-09-11T10:52:08.000Z | [
"pytorch",
"marian",
"text2text-generation",
"wls",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-wls-en | 13 | null | transformers | 10,100 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-wls-en
* source languages: wls
* target languages: en
* OPUS readme: [wls-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.eval.txt)
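A minimal usage sketch, assuming the standard `transformers` translation pipeline (the Wallisian example sentence is only illustrative):

```python
from transformers import pipeline

# Load the Wallisian-to-English checkpoint through the generic translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-wls-en")

# The pipeline returns a list of dicts with a "translation_text" field.
print(translator("Malo te ma'uli!")[0]["translation_text"])
```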
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.en | 31.8 | 0.471 |
|
Helsinki-NLP/opus-mt-zh-bg | 3e0338a917b3900ae60979ce384ca2d40a8d4b85 | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zh",
"bg",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zh-bg | 13 | null | transformers | 10,101 | ---
language:
- zh
- bg
tags:
- translation
license: apache-2.0
---
### zho-bul
* source group: Chinese
* target group: Bulgarian
* OPUS readme: [zho-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-bul/README.md)
* model: transformer
* source language(s): cmn cmn_Hans cmn_Hant zho zho_Hans zho_Hant
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cmn_Hani.bul | 29.6 | 0.497 |
| Tatoeba-test.zho.bul | 29.6 | 0.497 |
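A minimal usage sketch with the Marian classes (the Chinese input sentence is only illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-bg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize an illustrative Chinese sentence and generate its Bulgarian translation.
batch = tokenizer(["你好,世界。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```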
### System Info:
- hf_name: zho-bul
- source_languages: zho
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'bg']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.test.txt
- src_alpha3: zho
- tgt_alpha3: bul
- short_pair: zh-bg
- chrF2_score: 0.49700000000000005
- bleu: 29.6
- brevity_penalty: 0.883
- ref_len: 3113.0
- src_name: Chinese
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: zh
- tgt_alpha2: bg
- prefer_old: False
- long_pair: zho-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
HeyLucasLeao/byt5-small-pt-product-reviews | 0282af39a678cf016a1ce451a814d5a6f738a788 | 2021-08-25T17:02:07.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2105.13626",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | HeyLucasLeao | null | HeyLucasLeao/byt5-small-pt-product-reviews | 13 | null | transformers | 10,102 | Create README.md
## ByT5 Small Portuguese Product Reviews
#### Model Description
This is a fine-tuned version of ByT5 Small by Google for Sentiment Analysis of Product Reviews in Portuguese.
##### Paper: https://arxiv.org/abs/2105.13626
#### Training data
It was trained on product reviews from Americanas.com. You can find the data here: https://github.com/HeyLucasLeao/finetuning-byt5-model.
#### Training Procedure
It was fine-tuned using the Trainer class available in the Hugging Face library. Accuracy, precision, recall and F1 score were used for evaluation.
##### Learning Rate: **1e-4**
##### Epochs: **1**
##### Colab for Finetuning: https://colab.research.google.com/drive/1EChTeQkGeXi_52lClBNazHVuSNKEHN2f
##### Colab for Metrics: https://colab.research.google.com/drive/1o4tcsP3lpr1TobtE3Txhp9fllxPWXxlw#scrollTo=PXAoog5vQaTn
#### Score:
```python
Training Set:
'accuracy': 0.8974239585927603,
'f1': 0.927229848590765,
'precision': 0.9580290812115055,
'recall': 0.8983492356469835
Test Set:
'accuracy': 0.8957881282882026,
'f1': 0.9261366030421776,
'precision': 0.9559431131213848,
'recall': 0.8981326359661668
Validation Set:
'accuracy': 0.8925383190163382,
'f1': 0.9239208204149773,
'precision': 0.9525448733710351,
'recall': 0.8969668904839083
```
#### Goals
My intention was purely educational, making this version of the model available as an example for future purposes.
#### How to use
``` python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import numpy as np
import torch
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
print(device)
tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/byt5-small-pt-product-reviews")
model = AutoModelForSeq2SeqLM.from_pretrained("HeyLucasLeao/byt5-small-pt-product-reviews")
model.to(device)
def classificar_review(review):
inputs = tokenizer([review], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask)
pred = np.argmax(output.cpu(), axis=1)
dici = {0: 'Review Negativo', 1: 'Review Positivo'}
return dici[pred.item()]
review = "Produto muito bom, recomendo!"  # illustrative example review
classificar_review(review)
``` |
HueyNemud/berties | 449ee926f5b7b92d0388c6a03575dad62f748ba9 | 2022-02-08T08:47:31.000Z | [
"pytorch",
"camembert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | HueyNemud | null | HueyNemud/berties | 13 | null | transformers | 10,103 | Entry not found |
Javel/linkedin_post_t5 | d1ccb77e221bad4f009af0fb30c622bb5a9ee248 | 2021-06-23T02:28:31.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Javel | null | Javel/linkedin_post_t5 | 13 | null | transformers | 10,104 | Entry not found |
JaviBJ/sagemaker-distilbert-emotion | c527510b63a65f40ee9fb69af41cca7e64c5d8a7 | 2021-11-17T17:02:01.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | JaviBJ | null | JaviBJ/sagemaker-distilbert-emotion | 13 | null | transformers | 10,105 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9165
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2469
- Accuracy: 0.9165
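A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT emotion classifier.
classifier = pipeline("text-classification", model="JaviBJ/sagemaker-distilbert-emotion")

# Returns the predicted emotion label and its score.
print(classifier("I am so happy with how this turned out!"))
```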
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9351 | 1.0 | 500 | 0.2469 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Jonesy/DialoGPT-medium_Barney | 12eaf74fbdbdae5a76dae3c6d46f18de72c41c38 | 2022-01-06T23:36:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Jonesy | null | Jonesy/DialoGPT-medium_Barney | 13 | null | transformers | 10,106 | ---
tags:
- conversational
---
# Barney Calhoun DialoGPT Model
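A minimal chat sketch, assuming the usual DialoGPT generation loop:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Jonesy/DialoGPT-medium_Barney")
model = AutoModelForCausalLM.from_pretrained("Jonesy/DialoGPT-medium_Barney")

chat_history_ids = None
for step in range(3):
    # Encode the user input together with the end-of-string token.
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    # Append the new turn to the running chat history.
    bot_input_ids = torch.cat([chat_history_ids, new_input_ids], dim=-1) if step > 0 else new_input_ids
    # Generate a reply; the full history is kept for the next turn.
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Barney:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```
|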
JorisCos/VAD_Net | e5ac72157af05eea7d58eb3fef7ed78f7fa7a884 | 2021-11-22T17:17:23.000Z | [
"pytorch",
"dataset:LibriVAD",
"asteroid",
"audio",
"VADNet",
"VAD",
"Voice Activity Detection",
"license:cc-by-sa-4.0"
]
| null | false | JorisCos | null | JorisCos/VAD_Net | 13 | null | asteroid | 10,107 | ---
tags:
- asteroid
- audio
- VADNet
- VAD
- Voice Activity Detection
datasets:
- LibriVAD
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/VAD_Net`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
segment: 3
train_dir: /home/jcosentino/VAD_dataset/metadata/sets/train.json
valid_dir: /home/jcosentino/VAD_dataset/metadata/sets/dev.json
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/full_not_causal_f1/
help: null
masknet:
bn_chan: 128
causal: false
hid_chan: 512
mask_act: relu
n_blocks: 3
n_repeats: 5
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On LibriVAD min test set :
```yml
accuracy: 0.8196149023502931,
precision: 0.8305009048356607,
recall: 0.8869202491310206,
f1_score: 0.8426184545700124
```
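A minimal loading sketch, assuming the checkpoint follows the usual Asteroid hub convention (`BaseModel.from_pretrained`); the dummy input shape and sample rate below are assumptions, not taken from this card:

```python
from asteroid.models import BaseModel
import torch

# Assumption: the checkpoint loads through Asteroid's generic hub loader.
model = BaseModel.from_pretrained("JorisCos/VAD_Net")
model.eval()

# Run on one second of dummy single-channel audio; the 8 kHz rate and the
# (batch, time) input shape are assumptions for illustration only.
with torch.no_grad():
    vad_out = model(torch.randn(1, 8000))
print(vad_out.shape)
```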
License notice:
This work "VAD_Net" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/), and of the [DNS challenge](https://github.com/microsoft/DNS-Challenge) noises, used under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/).
"VAD_Net" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino. |
M-FAC/bert-mini-finetuned-qnli | bb1b578b331bd86fe4fbb0fc039cdb631a7b0d0b | 2021-12-13T08:16:16.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"transformers"
]
| text-classification | false | M-FAC | null | M-FAC/bert-mini-finetuned-qnli | 13 | null | transformers | 10,108 | # BERT-mini model finetuned with M-FAC
This model is fine-tuned on the QNLI dataset with the state-of-the-art second-order optimizer M-FAC.
Check the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
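A minimal inference sketch, assuming the standard `transformers` sequence-classification API (the question/sentence pair is illustrative, and the label order is defined by the checkpoint config):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("M-FAC/bert-mini-finetuned-qnli")
model = AutoModelForSequenceClassification.from_pretrained("M-FAC/bert-mini-finetuned-qnli")

# QNLI pairs a question with a candidate answer sentence.
inputs = tokenizer(
    "What does M-FAC approximate?",
    "M-FAC computes efficient matrix-free approximations of second-order information.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
# The entailment / not_entailment ordering comes from the model's config.
print(probs)
```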
## Finetuning setup
For a fair comparison against the default Adam baseline, we fine-tune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer for M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QNLI validation set:
```bash
accuracy = 83.90
```
Mean and standard deviation for 5 runs on QNLI validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 83.85 ± 0.10 |
| M-FAC | 83.70 ± 0.13 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-mini \
--task_name qnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
|
M47Labs/it_iptc | e9fc9a2e2575adad2717b7b18974ac774ca3114a | 2021-10-21T10:01:31.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | M47Labs | null | M47Labs/it_iptc | 13 | 3 | transformers | 10,109 | Entry not found |
Maelstrom77/roberta-large-qqp | 823f48f64e13d9de3e48510716ec9a7bb323a31e | 2021-10-04T14:49:15.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/roberta-large-qqp | 13 | null | transformers | 10,110 | Entry not found |
Maelstrom77/roberta-large-snli | baf82f2ef15463f9393fa8ff9cdf65c0ae7ab41f | 2021-10-04T13:33:01.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Maelstrom77 | null | Maelstrom77/roberta-large-snli | 13 | null | transformers | 10,111 | Entry not found |
Matthijsvanhof/bert-base-dutch-cased-finetuned-NER8 | f9b4d5f20958a8384f975495050c22f6174add02 | 2021-11-27T23:02:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Matthijsvanhof | null | Matthijsvanhof/bert-base-dutch-cased-finetuned-NER8 | 13 | null | transformers | 10,112 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-dutch-cased-finetuned-NER8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-NER8
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1482
- Precision: 0.4716
- Recall: 0.4359
- F1: 0.4530
- Accuracy: 0.9569
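A minimal inference sketch, assuming the standard `transformers` token-classification pipeline (the Dutch example sentence is illustrative):

```python
from transformers import pipeline

# Token-classification pipeline with simple entity grouping.
ner = pipeline(
    "token-classification",
    model="Matthijsvanhof/bert-base-dutch-cased-finetuned-NER8",
    aggregation_strategy="simple",
)

print(ner("Matthijs woont in Amsterdam en werkt bij de Universiteit Utrecht."))
```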
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 68 | 0.1705 | 0.3582 | 0.3488 | 0.3535 | 0.9475 |
| No log | 2.0 | 136 | 0.1482 | 0.4716 | 0.4359 | 0.4530 | 0.9569 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
Media1129/keyword-tag-model-4000 | b9fd203e646058b841915c41eac447d5db4211f5 | 2021-08-30T04:49:52.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Media1129 | null | Media1129/keyword-tag-model-4000 | 13 | null | transformers | 10,113 | Entry not found |
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French | 5a3e6bcb5cb6ec68d4a596bfe026191da0dc9022 | 2021-07-05T15:56:43.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | MehdiHosseiniMoghadam | null | MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French | 13 | null | transformers | 10,114 | ---
language: fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-French by Mehdi Hosseini Moghadam
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fr
type: common_voice
args: fr
metrics:
- name: Test WER
type: wer
value: 34.856015
---
# wav2vec2-large-xlsr-53-French
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in French using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the French test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fr", split="test[:10%]")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.856015 %
## Training
10% of the Common Voice `train`, `validation` datasets were used for training.
## Testing
10% of the Common Voice `test` dataset was used for testing. |
NoLawz/DialoGPT-medium-spongebob | 721ef0f41e0e928acda25a53e6f907c48602993d | 2021-08-27T06:18:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | NoLawz | null | NoLawz/DialoGPT-medium-spongebob | 13 | null | transformers | 10,115 | ---
tags:
- conversational
---
# SpongeBob DialoGPT medium model |
Nymiz/eus-es | cf34d3019721eca655b3611bd903e701adcad01d | 2022-02-15T12:23:21.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Nymiz | null | Nymiz/eus-es | 13 | null | transformers | 10,116 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the [Euskera-Spanish](https://huggingface.co/datasets/Nymiz/euskera-spanish) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0439
- Precision: 0.9565
- Recall: 0.9429
- F1: 0.9496
- Accuracy: 0.9931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 14 | 0.4269 | 0.0 | 0.0 | 0.0 | 0.8945 |
| No log | 2.0 | 28 | 0.1628 | 0.5143 | 0.5143 | 0.5143 | 0.9599 |
| No log | 3.0 | 42 | 0.0969 | 0.7730 | 0.7786 | 0.7758 | 0.9815 |
| No log | 4.0 | 56 | 0.0550 | 0.7267 | 0.7786 | 0.7517 | 0.9890 |
| No log | 5.0 | 70 | 0.0582 | 0.8643 | 0.8643 | 0.8643 | 0.9894 |
| No log | 6.0 | 84 | 0.0420 | 0.8936 | 0.9 | 0.8968 | 0.9918 |
| No log | 7.0 | 98 | 0.0314 | 0.8690 | 0.9 | 0.8842 | 0.9931 |
| No log | 8.0 | 112 | 0.0396 | 0.8601 | 0.8786 | 0.8693 | 0.9911 |
| No log | 9.0 | 126 | 0.0476 | 0.9 | 0.9 | 0.9 | 0.9924 |
| No log | 10.0 | 140 | 0.0510 | 0.8881 | 0.9071 | 0.8975 | 0.9921 |
| No log | 11.0 | 154 | 0.0523 | 0.9270 | 0.9071 | 0.9170 | 0.9916 |
| No log | 12.0 | 168 | 0.0391 | 0.9034 | 0.9357 | 0.9193 | 0.9928 |
| No log | 13.0 | 182 | 0.0378 | 0.9167 | 0.9429 | 0.9296 | 0.9928 |
| No log | 14.0 | 196 | 0.0419 | 0.9161 | 0.9357 | 0.9258 | 0.9926 |
| No log | 15.0 | 210 | 0.0490 | 0.9286 | 0.9286 | 0.9286 | 0.9921 |
| No log | 16.0 | 224 | 0.0526 | 0.9155 | 0.9286 | 0.9220 | 0.9918 |
| No log | 17.0 | 238 | 0.0504 | 0.9091 | 0.9286 | 0.9187 | 0.9916 |
| No log | 18.0 | 252 | 0.0516 | 0.9149 | 0.9214 | 0.9181 | 0.9923 |
| No log | 19.0 | 266 | 0.0497 | 0.9291 | 0.9357 | 0.9324 | 0.9926 |
| No log | 20.0 | 280 | 0.0599 | 0.9220 | 0.9286 | 0.9253 | 0.9916 |
| No log | 21.0 | 294 | 0.0548 | 0.9281 | 0.9214 | 0.9247 | 0.9923 |
| No log | 22.0 | 308 | 0.0430 | 0.9424 | 0.9357 | 0.9391 | 0.9934 |
| No log | 23.0 | 322 | 0.0439 | 0.9565 | 0.9429 | 0.9496 | 0.9931 |
| No log | 24.0 | 336 | 0.0501 | 0.9565 | 0.9429 | 0.9496 | 0.9931 |
| No log | 25.0 | 350 | 0.0462 | 0.9496 | 0.9429 | 0.9462 | 0.9929 |
| No log | 26.0 | 364 | 0.0479 | 0.9565 | 0.9429 | 0.9496 | 0.9931 |
| No log | 27.0 | 378 | 0.0496 | 0.9429 | 0.9429 | 0.9429 | 0.9924 |
| No log | 28.0 | 392 | 0.0446 | 0.9565 | 0.9429 | 0.9496 | 0.9931 |
| No log | 29.0 | 406 | 0.0447 | 0.9496 | 0.9429 | 0.9462 | 0.9932 |
| No log | 30.0 | 420 | 0.0491 | 0.9496 | 0.9429 | 0.9462 | 0.9928 |
| No log | 31.0 | 434 | 0.0430 | 0.9167 | 0.9429 | 0.9296 | 0.9934 |
| No log | 32.0 | 448 | 0.0530 | 0.9496 | 0.9429 | 0.9462 | 0.9929 |
| No log | 33.0 | 462 | 0.0547 | 0.9496 | 0.9429 | 0.9462 | 0.9928 |
| No log | 34.0 | 476 | 0.0515 | 0.9429 | 0.9429 | 0.9429 | 0.9929 |
| No log | 35.0 | 490 | 0.0533 | 0.9429 | 0.9429 | 0.9429 | 0.9929 |
| 0.0625 | 36.0 | 504 | 0.0543 | 0.9496 | 0.9429 | 0.9462 | 0.9928 |
| 0.0625 | 37.0 | 518 | 0.0545 | 0.9496 | 0.9429 | 0.9462 | 0.9928 |
| 0.0625 | 38.0 | 532 | 0.0545 | 0.9357 | 0.9357 | 0.9357 | 0.9924 |
| 0.0625 | 39.0 | 546 | 0.0548 | 0.9357 | 0.9357 | 0.9357 | 0.9923 |
| 0.0625 | 40.0 | 560 | 0.0549 | 0.9357 | 0.9357 | 0.9357 | 0.9923 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
PereLluis13/Wav2Vec2-Large-XLSR-53-catalan | 5f98c04c12ee1573d3d4ee585da42638ed7de643 | 2022-03-29T08:51:28.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ca",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | PereLluis13 | null | PereLluis13/Wav2Vec2-Large-XLSR-53-catalan | 13 | null | transformers | 10,117 | ---
language: ca
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Catalan XLSR Wav2Vec Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ca
type: common_voice
args: ca
metrics:
- name: Test WER
type: wer
value: 8.11
---
# Disclaimer
This model was trained on Common Voice 6. If you need a Catalan model for ASR, I recommend checking [wav2vec2-xls-r-1b-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-1b-ca-lm), a 1b model with a LM on top trained on CV8+ with much better performance, or [wav2vec2-xls-r-300m-ca-lm](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm), which has the same size (300m) as this model but is trained on CV8+ with the same LM.
# Wav2Vec2-Large-XLSR-53-ca
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the catalan test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ca", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
import jiwer
# Chunk WER computation due to memory issues, taken from https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
```
**Test Result**: 8.11 %
## Training
The Common Voice `train` and `validation` datasets were used for training. At the second epoch, training was halted due to a memory issue and continued with a lower batch size, but gradient accumulation steps were scaled to keep an effective batch size of 32 throughout training. The model was then trained for an additional 10 epochs in which half of the male samples were pitched up.
The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). Slight modifications were made in order to speed up the ordering by length during training; they can be found [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6). Another version trained for Catalan can be found [here](https://huggingface.co/ccoreilly/wav2vec2-large-xlsr-catala), which may be better than this one since it was trained with extra data and for a longer time. However, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset. |
PinoCorgi/DialoGPT-small-Shrek1 | deb3157ed384de337a96be13c93cb72dac5c5242 | 2022-02-02T12:56:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | PinoCorgi | null | PinoCorgi/DialoGPT-small-Shrek1 | 13 | null | transformers | 10,118 | ---
tags:
- conversational
---
# Shrek DialoGPT Model
|
PubChimps/dlfBERT | 22e61b852e1ee6444cc38044662d2f9b4064c695 | 2021-05-20T12:18:47.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | PubChimps | null | PubChimps/dlfBERT | 13 | null | transformers | 10,119 | Entry not found |
SEBIS/code_trans_t5_base_code_comment_generation_java_multitask | 15fbff39a195ffe8efa1414ce8aab8c4a920e462 | 2021-06-23T04:06:42.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_comment_generation_java_multitask | 13 | null | transformers | 10,120 | ---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
---
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/code%20comment%20generation/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune | f7499ff947275c82c0f79f908ce0d56912851026 | 2021-06-23T04:10:25.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune | 13 | null | transformers | 10,121 | ---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
---
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code comment generation task for the java function/method.
## Intended uses & limitations
The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/code%20comment%20generation/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_base_code_documentation_generation_php | 96a6e232af10ae0dba04b1c94e7e83747b66d13c | 2021-06-23T04:34:53.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_code_documentation_generation_php | 13 | 1 | transformers | 10,122 | ---
tags:
- summarization
widget:
- text: "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
---
# CodeTrans model for code documentation generation php
Pretrained model on programming language php using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus php dataset.
## Intended uses & limitations
The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_php", skip_special_tokens=True),
device=0
)
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/php/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask | 86a3356443077a006a2162422c8312ce93669825 | 2021-06-23T09:37:43.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask | 13 | null | transformers | 10,123 | ---
tags:
- summarization
widget:
- text: "select time ( col0 ) from tab0"
---
# CodeTrans model for source code summarization sql
Pretrained model on programming language sql using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_api_generation_transfer_learning_finetune | 0fbc65f12ab660e8c058a402600dabdb7ca3835d | 2021-06-23T09:55:18.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_api_generation_transfer_learning_finetune | 13 | null | transformers | 10,124 | ---
tags:
- summarization
widget:
- text: "parse the uses licence node of this package , if any , and returns the license definition if theres"
---
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the api recommendation generation task for the java apis.
## Intended uses & limitations
The model could be used to generate api usage for the java programming tasks.
### How to use
Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_api_generation_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_api_generation_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/api%20generation/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 1,400,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.
## Evaluation results
For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune | 9ee7704542205f68824ce6575ee028af2e823d5c | 2021-06-23T10:23:12.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
]
| summarization | false | SEBIS | null | SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune | 13 | null | transformers | 10,125 | ---
tags:
- summarization
widget:
- text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
---
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.
## Model description
This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets.
## Intended uses & limitations
The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/python/small_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 600 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.
## Evaluation results
For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
SEBIS/legal_t5_small_trans_de_en | f66662e3ea1a60ff2c4590332e6d83602ec4ae05 | 2021-06-23T09:27:47.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Deustch English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Deustch English model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_de_en | 13 | null | transformers | 10,126 |
---
language: Deustch English
tags:
- translation Deustch English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "(2) Die Richtlinie 80/987/EWG des Rates(4) soll den Arbeitnehmern im Fall der Zahlungsunfähigkeit ihres Arbeitgebers einen Mindestschutz gewähren. Deshalb verpflichtet sie die Mitgliedstaaten zur Schaffung einer Einrichtung, die die Befriedigung der nicht erfuellten Arbeitnehmeransprüche garantiert."
---
# legal_t5_small_trans_de_en model
Model for translating legal text from Deutsch (German) to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_de_en is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Deutsch to English.
### How to use
Here is how to use this model to translate legal text from Deutsch to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_en"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_de_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
de_text = "Eisenbahnunternehmen müssen Fahrkarten über mindestens einen der folgenden Vertriebswege anbieten: an Fahrkartenschaltern oder Fahrkartenautomaten, per Telefon, Internet oder jede andere in weitem Umfang verfügbare Informationstechnik oder in den Zügen."
pipeline([de_text], max_length=512)
```
## Training data
The legal_t5_small_trans_de_en model was trained on [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) dataset consisting of 5 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte-pair encoding), which is used with this model.
### Pretraining
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_en | 49.1|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
SEBIS/legal_t5_small_trans_it_en_small_finetuned | e37e26511d0d670a58ee1982986e604283ebf306 | 2021-06-23T10:01:20.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"Italian English",
"dataset:dcep europarl jrc-acquis",
"transformers",
"translation Italian English model",
"autotrain_compatible"
]
| text2text-generation | false | SEBIS | null | SEBIS/legal_t5_small_trans_it_en_small_finetuned | 13 | null | transformers | 10,127 |
---
language: Italian English
tags:
- translation Italian English model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Supplenti presenti al momento della votazione finale"
---
# legal_t5_small_trans_it_en_small_finetuned model
Model for translating legal text from Italian to English. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was first pretrained on all the translation data with an unsupervised task, then trained on three parallel corpora from jrc-acquis, europarl and dcep.
## Model description
legal_t5_small_trans_it_en_small_finetuned was initially pretrained on an unsupervised task ("masked language modelling") with all of the data of the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Italian to English.
### How to use
Here is how to use this model to translate legal text from Italian to English in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_it_en_small_finetuned"),
tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_trans_it_en", do_lower_case=False,
skip_special_tokens=True),
device=0
)
it_text = "Supplenti presenti al momento della votazione finale"
pipeline([it_text], max_length=512)
```
## Training data
The legal_t5_small_trans_it_en_small_finetuned model (the supervised task involved only the corresponding language pair, while the unsupervised task used the data of all language pairs) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets consisting of 9 Million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte-pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used on the translation test dataset, it achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_it_en_small_finetuned | 49.840|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
Sakil/imdbsentdistilbertmodel | d1d84ceb289bd1b562383a3be84f7ddd27f3269e | 2022-01-16T06:54:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"text Classification",
"license:apache-2.0"
]
| text-classification | false | Sakil | null | Sakil/imdbsentdistilbertmodel | 13 | null | transformers | 10,128 | ---
language:
- en
tags:
- text Classification
license: apache-2.0
widget:
- text: "I like you. </s></s> I love you."
---
* IMDBSentimentDistilBertModel:
 - I have used the IMDB movie review dataset to create a custom model with DistilBertForSequenceClassification.
```python
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments
model = DistilBertForSequenceClassification.from_pretrained('./imdbsentdistilbertmodel')
```
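For inference, a minimal sketch along these lines should work. It assumes the checkpoint is published under the hub id `Sakil/imdbsentdistilbertmodel` together with its tokenizer, and that the head is a binary negative/positive classifier; check `config.json` for the actual label names.
```python
import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification
model_name = "Sakil/imdbsentdistilbertmodel"  # assumed hub id for this repository
tokenizer = DistilBertTokenizerFast.from_pretrained(model_name)
model = DistilBertForSequenceClassification.from_pretrained(model_name)
inputs = tokenizer("I like you. I love you.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_class])  # label mapping comes from the checkpoint config
```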
|
SauravMaheshkar/clr-finetuned-bert-large-uncased | 46e4fec06da3622e3104117e50b01287ac654bb7 | 2021-09-23T15:57:39.000Z | [
"pytorch",
"bert",
"fill-mask",
"dataset:Commonlit-Readibility",
"transformers",
"kaggle",
"license:cc0-1.0",
"autotrain_compatible"
]
| fill-mask | false | SauravMaheshkar | null | SauravMaheshkar/clr-finetuned-bert-large-uncased | 13 | null | transformers | 10,129 | ---
thumbnail: https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true
tags:
- kaggle
license: cc0-1.0
datasets:
- Commonlit-Readibility
---

# FineTuning
| **Architecture** | **Weights** | **Training Loss** | **Validation Loss** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 |
| xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 |
| bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 |
| albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 |
| roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
|
SetFit/distilbert-base-uncased__sst2__train-16-4 | d478815b58d84df9c998d504f66c73d149f6ebfc | 2022-02-10T07:22:51.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | SetFit | null | SetFit/distilbert-base-uncased__sst2__train-16-4 | 13 | null | transformers | 10,130 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1501
- Accuracy: 0.6387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7139 | 0.2857 |
| 0.68 | 2.0 | 14 | 0.7398 | 0.2857 |
| 0.641 | 3.0 | 21 | 0.7723 | 0.2857 |
| 0.5424 | 4.0 | 28 | 0.8391 | 0.2857 |
| 0.5988 | 5.0 | 35 | 0.7761 | 0.2857 |
| 0.3698 | 6.0 | 42 | 0.7707 | 0.4286 |
| 0.3204 | 7.0 | 49 | 0.8290 | 0.4286 |
| 0.2882 | 8.0 | 56 | 0.6551 | 0.5714 |
| 0.1512 | 9.0 | 63 | 0.5652 | 0.5714 |
| 0.1302 | 10.0 | 70 | 0.5278 | 0.5714 |
| 0.1043 | 11.0 | 77 | 0.4987 | 0.7143 |
| 0.0272 | 12.0 | 84 | 0.5278 | 0.5714 |
| 0.0201 | 13.0 | 91 | 0.5307 | 0.5714 |
| 0.0129 | 14.0 | 98 | 0.5382 | 0.5714 |
| 0.0117 | 15.0 | 105 | 0.5227 | 0.5714 |
| 0.0094 | 16.0 | 112 | 0.5066 | 0.7143 |
| 0.0104 | 17.0 | 119 | 0.4869 | 0.7143 |
| 0.0069 | 18.0 | 126 | 0.4786 | 0.7143 |
| 0.0062 | 19.0 | 133 | 0.4707 | 0.7143 |
| 0.0065 | 20.0 | 140 | 0.4669 | 0.7143 |
| 0.0051 | 21.0 | 147 | 0.4686 | 0.7143 |
| 0.0049 | 22.0 | 154 | 0.4784 | 0.7143 |
| 0.0046 | 23.0 | 161 | 0.4839 | 0.7143 |
| 0.0039 | 24.0 | 168 | 0.4823 | 0.7143 |
| 0.0044 | 25.0 | 175 | 0.4791 | 0.7143 |
| 0.0037 | 26.0 | 182 | 0.4778 | 0.7143 |
| 0.0038 | 27.0 | 189 | 0.4770 | 0.7143 |
| 0.0036 | 28.0 | 196 | 0.4750 | 0.7143 |
| 0.0031 | 29.0 | 203 | 0.4766 | 0.7143 |
| 0.0031 | 30.0 | 210 | 0.4754 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-BioNLP13 | 3f77a10a6ee664b2bdf98bc714c0570f18f18024 | 2022-02-23T01:06:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-BioNLP13 | 13 | null | transformers | 10,131 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-BioNLP13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-BioNLP13
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2042
- Precision: 0.9550
- Recall: 0.9559
- F1: 0.9555
- Accuracy: 0.9552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3114 | 1.0 | 692 | 0.1693 | 0.9453 | 0.9452 | 0.9453 | 0.9461 |
| 0.1292 | 2.0 | 1384 | 0.1754 | 0.9492 | 0.9525 | 0.9509 | 0.9508 |
| 0.0522 | 3.0 | 2076 | 0.1895 | 0.9529 | 0.9540 | 0.9534 | 0.9530 |
| 0.032 | 4.0 | 2768 | 0.2042 | 0.9550 | 0.9559 | 0.9555 | 0.9552 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
TehranNLP-org/roberta-base-qqp-2e-5-42 | d8a2f9142761e327dd555479bbf9000da06e717c | 2021-08-18T01:48:30.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | TehranNLP-org | null | TehranNLP-org/roberta-base-qqp-2e-5-42 | 13 | null | transformers | 10,132 | Entry not found |
ThaiUWA/py_just_rumour | ebb906fa63bd5966a0379247a61607d7b3ec96b9 | 2021-05-21T11:24:26.000Z | [
"pytorch",
"jax",
"gpt2",
"feature-extraction",
"transformers"
]
| feature-extraction | false | ThaiUWA | null | ThaiUWA/py_just_rumour | 13 | null | transformers | 10,133 | Entry not found |
Tommy930/distilbert-base-uncased-finetuned-emotion | b190323c070ede64728720e02083961cd484b718 | 2022-02-13T04:43:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Tommy930 | null | Tommy930/distilbert-base-uncased-finetuned-emotion | 13 | null | transformers | 10,134 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.919
- name: F1
type: f1
value: 0.9193144250513821
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Accuracy: 0.919
- F1: 0.9193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7858 | 1.0 | 250 | 0.3034 | 0.9085 | 0.9073 |
| 0.243 | 2.0 | 500 | 0.2220 | 0.919 | 0.9193 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
WangZeJun/roformer-sim-small-chinese | 9e9b7bedbb18d2974d05a786ee0ee14b89be749d | 2022-06-14T09:17:44.000Z | [
"pytorch",
"transformers"
]
| null | false | WangZeJun | null | WangZeJun/roformer-sim-small-chinese | 13 | null | transformers | 10,135 | https://github.com/zejunwang1/bert4vec |
Wende/bert-finetuned-ner1 | 7c025c71ee70c563d71a15a591858524d1a25f25 | 2021-12-23T15:22:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | Wende | null | Wende/bert-finetuned-ner1 | 13 | null | transformers | 10,136 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9285832096321953
- name: Recall
type: recall
value: 0.9474924267923258
- name: F1
type: f1
value: 0.9379425239483548
- name: Accuracy
type: accuracy
value: 0.9859009831047272
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0584
- Precision: 0.9286
- Recall: 0.9475
- F1: 0.9379
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2183 | 1.0 | 878 | 0.0753 | 0.9087 | 0.9291 | 0.9188 | 0.9800 |
| 0.0462 | 2.0 | 1756 | 0.0614 | 0.9329 | 0.9470 | 0.9399 | 0.9858 |
| 0.0244 | 3.0 | 2634 | 0.0584 | 0.9286 | 0.9475 | 0.9379 | 0.9859 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.2+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Wikidepia/SB-AutoSegment | 56d5349e734c305764dee3937ddcc241b71e772f | 2021-12-26T02:51:08.000Z | [
"pytorch",
"en",
"flair",
"token-classification",
"sequence-tagger-model"
]
| token-classification | false | Wikidepia | null | Wikidepia/SB-AutoSegment | 13 | null | flair | 10,137 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
---
# SponsorBlock Auto Segment |
Wikidepia/indobert-lite-squadx | 6b7593e0f8f33d8b2ca61475b1ef22c9ecab5caf | 2021-03-31T13:28:04.000Z | [
"pytorch",
"albert",
"question-answering",
"id",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Wikidepia | null | Wikidepia/indobert-lite-squadx | 13 | null | transformers | 10,138 | ---
language: id
widget:
- text: "Kapan Einstein melepas kewarganegaraan Jerman?"
context: "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900."
---
# IndoBERT-Lite-SQuAD base fine-tuned on Full Translated SQuAD v2
[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/squad) for **Q&A** downstream task.
## Model in action
Fast usage with **pipelines**:
```python
from transformers import BertTokenizerFast, pipeline
tokenizer = BertTokenizerFast.from_pretrained(
    'Wikidepia/indobert-lite-squadx'
)
qa_pipeline = pipeline(
"question-answering",
    model="Wikidepia/indobert-lite-squadx",
tokenizer=tokenizer
)
qa_pipeline({
'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```
# Output:
```json
{
"score": 0.9169162511825562,
"start": 147,
"end": 151,
"answer": "1896"
}
```
README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) |
Worldman/distilbert-base-uncased-finetuned-emotion | f292dc243f0c210c262e7a0dde75cf6df3a0731e | 2022-02-20T21:29:06.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Worldman | null | Worldman/distilbert-base-uncased-finetuned-emotion | 13 | null | transformers | 10,139 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9227046184638882
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Accuracy: 0.9225
- F1: 0.9227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8437 | 1.0 | 250 | 0.3153 | 0.903 | 0.9005 |
| 0.2467 | 2.0 | 500 | 0.2162 | 0.9225 | 0.9227 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cpu
- Datasets 1.18.3
- Tokenizers 0.11.0
|
adamlin/recipe-tag-model | ba5e8ca161f3060d96d5e1ca432dc9329047d095 | 2021-07-25T06:33:50.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | adamlin | null | adamlin/recipe-tag-model | 13 | null | transformers | 10,140 | Entry not found |
addy88/argument-classifier | 85e9d628bb475aa93c109ddc96517fee5d62a881 | 2022-01-02T06:32:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | addy88 | null | addy88/argument-classifier | 13 | null | transformers | 10,141 | Entry not found |
adzcodez/TokenClassificationTest | cbb0edbd18f8276b455c1f37ca685007f6441531 | 2021-03-16T14:18:09.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | adzcodez | null | adzcodez/TokenClassificationTest | 13 | null | transformers | 10,142 | distilbert-base-uncased finetuned on the conll2003 dataset for NER. |
ajrae/bert-base-uncased-finetuned-cola | fb5d3ed8b62c494aad229a326d13c20763a70428 | 2022-02-21T21:40:59.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ajrae | null | ajrae/bert-base-uncased-finetuned-cola | 13 | null | transformers | 10,143 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5864941797290588
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8385
- Matthews Correlation: 0.5865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4887 | 1.0 | 535 | 0.5016 | 0.5107 |
| 0.286 | 2.0 | 1070 | 0.5473 | 0.5399 |
| 0.1864 | 3.0 | 1605 | 0.7114 | 0.5706 |
| 0.1163 | 4.0 | 2140 | 0.8385 | 0.5865 |
| 0.0834 | 5.0 | 2675 | 0.9610 | 0.5786 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
alireza7/TRANSFORMER-persian-base-wiki-summary | 73350d200502f8386796a991f94c51a125afe1dc | 2021-09-29T19:27:06.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | alireza7 | null | alireza7/TRANSFORMER-persian-base-wiki-summary | 13 | null | transformers | 10,144 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
andrewlitv/distilbert-base-uncased-finetuned-cola | 6a4ea00e2d87ba7421fa7e4d1d2cf0942ab3eab1 | 2022-06-23T14:31:04.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | andrewlitv | null | andrewlitv/distilbert-base-uncased-finetuned-cola | 13 | null | transformers | 10,145 | Entry not found |
anindabitm/sagemaker-distilbert-emotion | 7ed8bedbc8bfdd17541452970290b79e146a3abf | 2021-11-18T17:43:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | anindabitm | null | anindabitm/sagemaker-distilbert-emotion | 13 | null | transformers | 10,146 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9165
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2434
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9423 | 1.0 | 500 | 0.2434 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
aseifert/t5-base-jfleg-wi | 8de73b607da012873b864af74c079b9ed10fe3dc | 2021-11-19T20:42:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | aseifert | null | aseifert/t5-base-jfleg-wi | 13 | null | transformers | 10,147 | Entry not found |
ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa | e905df12f6edea2187007c3cc41d06950cb8b9fa | 2021-12-22T10:34:33.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:indonlu",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ayameRushia | null | ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa | 13 | null | transformers | 10,148 | ---
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
model-index:
- name: roberta-base-indonesian-1.5G-sentiment-analysis-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9261904761904762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-indonesian-1.5G-sentiment-analysis-smsa
This model is a fine-tuned version of [cahya/roberta-base-indonesian-1.5G](https://huggingface.co/cahya/roberta-base-indonesian-1.5G) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4294
- Accuracy: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6461 | 1.0 | 688 | 0.2620 | 0.9087 |
| 0.2627 | 2.0 | 1376 | 0.2291 | 0.9151 |
| 0.1784 | 3.0 | 2064 | 0.2891 | 0.9167 |
| 0.1099 | 4.0 | 2752 | 0.3317 | 0.9230 |
| 0.0857 | 5.0 | 3440 | 0.4294 | 0.9262 |
| 0.0346 | 6.0 | 4128 | 0.4759 | 0.9246 |
| 0.0221 | 7.0 | 4816 | 0.4946 | 0.9206 |
| 0.006 | 8.0 | 5504 | 0.5823 | 0.9175 |
| 0.0047 | 9.0 | 6192 | 0.5777 | 0.9159 |
| 0.004 | 10.0 | 6880 | 0.5800 | 0.9175 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
world-wide/sent-sci-irrelevance | 55195be78960bcd6f42ef4b076b8ec53a4b2ca7b | 2021-11-27T14:16:04.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:bozelosp/autonlp-data-sci-relevance",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | world-wide | null | world-wide/sent-sci-irrelevance | 13 | 1 | transformers | 10,149 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- bozelosp/autonlp-data-sci-relevance
co2_eq_emissions: 3.667033499762825
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 33199029
- CO2 Emissions (in grams): 3.667033499762825
## Validation Metrics
- Loss: 0.32653310894966125
- Accuracy: 0.9133333333333333
- Precision: 0.9005847953216374
- Recall: 0.9447852760736196
- AUC: 0.9532488468944517
- F1: 0.9221556886227544
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bozelosp/autonlp-sci-relevance-33199029
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bozelosp/autonlp-sci-relevance-33199029", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bozelosp/autonlp-sci-relevance-33199029", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
cahya/gpt2-small-indonesian-personachat-empathetic | 4f6a14c2d2357c9017f177215aae5369b724d44a | 2022-02-12T00:06:38.000Z | [
"pytorch",
"gpt2",
"transformers"
]
| null | false | cahya | null | cahya/gpt2-small-indonesian-personachat-empathetic | 13 | null | transformers | 10,150 | Entry not found |
ccdv/lsg-legal-base-uncased-4096 | e138abd907d58ed15b1672db7f0e801ee1078148 | 2022-07-25T05:28:29.000Z | [
"pytorch",
"bert",
"en",
"transformers",
"long context",
"legal",
"fill-mask"
]
| fill-mask | false | ccdv | null | ccdv/lsg-legal-base-uncased-4096 | 13 | null | transformers | 10,151 | ---
language: en
tags:
- long context
- legal
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model is adapted from [LEGAL-BERT](https://huggingface.co/nlpaueb/legal-bert-base-uncased) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences but faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended, thanks to the tokenizer, to truncate the inputs (truncation=True) and optionally to pad with a multiple of the block size (pad_to_multiple_of=...). \
The model supports encoder-decoder architectures, but I didn't test this extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-legal-base-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-legal-base-uncased-4096")
```
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-legal-base-uncased-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mechanism per head
    * Each head will use different tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
* sparsity_type="block_stride", use a striding mechanism per head
    * Each head will use blocks of tokens strided by sparsity_factor
    * Not recommended if sparsity_factor > num_heads
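Putting the selection types above together, a minimal configuration sketch (not from the original card; parameter values are illustrative) that switches to LSH-based selection could look like this:
```python
from transformers import AutoModel
# LSH selection is meant for larger sparsity factors; lsg_num_pre_rounds is its extra parameter
model = AutoModel.from_pretrained("ccdv/lsg-legal-base-uncased-4096",
    trust_remote_code=True,
    sparsity_type="lsh",
    sparsity_factor=4,
    lsg_num_pre_rounds=1
)
```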
## Tasks
Fill mask example:
```python
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-legal-base-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-legal-base-uncased-4096")
SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)
output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.', 'The goal of life is happiness.']
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-legal-base-uncased-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-legal-base-uncased-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-legal-base-uncased-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-legal-base-uncased-4096")
for name, param in model.named_parameters():
    if "global_embeddings" not in name:
        param.requires_grad = False
    else:
        param.requires_grad = True
```
**LEGAL-BERT**
```
@inproceedings{chalkidis-etal-2020-legal,
title = "{LEGAL}-{BERT}: The Muppets straight out of Law School",
author = "Chalkidis, Ilias and
Fergadiotis, Manos and
Malakasiotis, Prodromos and
Aletras, Nikolaos and
Androutsopoulos, Ion",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
doi = "10.18653/v1/2020.findings-emnlp.261",
pages = "2898--2904"
}
``` |
ceyda/wav2vec2-large-xlsr-53-turkish | bca1ede6d3fc08ba66e56eece3f6e54fab7cc78a | 2021-07-06T00:18:28.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | ceyda | null | ceyda/wav2vec2-large-xlsr-53-turkish | 13 | 1 | transformers | 10,152 | ---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Turkish by Ceyda Cinarel
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 27.59
---
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ceyda/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("ceyda/wav2vec2-large-xlsr-53-turkish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ceyda/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("ceyda/wav2vec2-large-xlsr-53-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\]\[\’»«]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.59 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/cceyda/wav2vec2) |
chitra/finetuned-adversarial-paraphrase-modell | 362ddb41356b4451e0e9276b8240f5c386e08db4 | 2022-01-19T13:11:27.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | chitra | null | chitra/finetuned-adversarial-paraphrase-modell | 13 | null | transformers | 10,153 | Entry not found |
copypress/copypress | bf0e6fd8f1df7b70b71575cd0bbcad0813200af1 | 2021-06-12T17:46:29.000Z | [
"pytorch",
"tf",
"jax",
"rust",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | copypress | null | copypress/copypress | 13 | null | transformers | 10,154 | Entry not found |
creat89/NER_FEDA_Cs | 40ab9e767ec8655e1a04ff220a00cb9be3f9e62c | 2022-04-13T09:38:35.000Z | [
"pytorch",
"bert",
"multilingual",
"cs",
"transformers",
"labse",
"ner",
"license:mit"
]
| null | false | creat89 | null | creat89/NER_FEDA_Cs | 13 | null | transformers | 10,155 | ---
license: mit
language:
- multilingual
- cs
tags:
- labse
- ner
---
This is a multilingual NER system trained using a Frustratingly Easy Domain Adaptation architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format:
1. Wikiann (LOC, PER, ORG)
2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
3. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
4. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)
PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: Miscellaneous, MEDIA: media, ART: Artifact, TIME: time, DATE: date
You can select which tagset to use in the output by configuring the model. This model handles uppercased words differently.
More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and GitHub repository (https://github.com/EMBEDDIA/NER_FEDA). |
d8oss/gamio-small | 4d9e726ea9b40fb2f192f277b99bc3785e4f169b | 2021-09-14T12:35:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | d8oss | null | d8oss/gamio-small | 13 | null | transformers | 10,156 | Entry not found |
danasone/rubert-tiny-speech | a39c56b321efa50b3bd2191c81dd84af022f73c8 | 2022-02-10T15:18:12.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | danasone | null | danasone/rubert-tiny-speech | 13 | null | transformers | 10,157 | Entry not found |
dehio/german-qg-t5-quad | e5eeeeaef49576b5679469f2d186971e4f647ea7 | 2022-01-19T16:36:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"de",
"dataset:deepset/germanquad",
"transformers",
"question generation",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | dehio | null | dehio/german-qg-t5-quad | 13 | null | transformers | 10,158 | ---
license: mit
widget:
- text: "Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl>britischen Common Laws<hl> sind, setzt sich das amerikanische Recht bedeutend davon ab."
language:
- de
tags:
- question generation
datasets:
- deepset/germanquad
model-index:
- name: german-qg-t5-quad
results: []
---
# german-qg-t5-quad
This model is fine-tuned for question generation in German. The expected answer must be highlighted with a <hl> token.
## Task example
#### Input
generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, [...]
#### Expected output
Von welchem Gesetzt stammt das Amerikanische ab?
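A minimal generation sketch for this example (assuming the standard T5 seq2seq API from Transformers; generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("dehio/german-qg-t5-quad")
model = AutoModelForSeq2SeqLM.from_pretrained("dehio/german-qg-t5-quad")
# Input uses the "generate question: ..." prefix with the answer span wrapped in <hl> tokens
text = ("generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth "
        "Erben des <hl>britischen Common Laws<hl> sind, setzt sich das amerikanische Recht bedeutend davon ab.")
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```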
## Model description
This model is a fine-tuned version of [valhalla/t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) on the [GermanQUAD](https://www.deepset.ai/germanquad) dataset.
## Training and evaluation data
The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg).
### Evaluation
The model achieves a BLEU-4 score of **11.30** on the GermanQuAD test set (n=2204).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dkhara/bert-news | 04501f75f31954b526433c44918df03ab53611c3 | 2021-04-28T15:38:51.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | dkhara | null | dkhara/bert-news | 13 | null | transformers | 10,159 | ### Bert-News |
dmiller1/distilbert-base-uncased-finetuned-emotion | 6cdc9e0c15af88e83af3688fecc3c7fcece0f2b3 | 2022-01-18T03:59:30.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dmiller1 | null | dmiller1/distilbert-base-uncased-finetuned-emotion | 13 | null | transformers | 10,160 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261144741040841
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8436 | 1.0 | 250 | 0.3175 | 0.9105 | 0.9081 |
| 0.2492 | 2.0 | 500 | 0.2161 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.7.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
doc2query/stackexchange-title-body-t5-small-v1 | 8c3d6d603687f5a069707e775da5fed1128d1e17 | 2022-01-07T08:33:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:flax-sentence-embeddings/stackexchange_title_body_jsonl",
"arxiv:1904.08375",
"arxiv:2104.08663",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | doc2query | null | doc2query/stackexchange-title-body-t5-small-v1 | 13 | null | transformers | 10,161 | ---
language: en
datasets:
- flax-sentence-embeddings/stackexchange_title_body_jsonl
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/stackexchange-title-body-t5-small-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs and the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, query generation re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/stackexchange-title-body-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was fine-tuned from [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) for 321k training steps. For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, question_body) pairs from StackExchange.
|
efederici/it5-base-summarization | 7f5c9afdc546f91bd4f74b1494a13e627ab9003b | 2021-09-30T19:00:46.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"it",
"transformers",
"summarization",
"autotrain_compatible"
]
| summarization | false | efederici | null | efederici/it5-base-summarization | 13 | null | transformers | 10,162 | ---
language:
- it
tags:
- summarization
---
# **Italian T5 Abstractive Summarization**
gsarti/it5-base fine-tuned on Italian text for abstractive summarization.
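A minimal usage sketch, assuming the standard `transformers` summarization pipeline works with this checkpoint (the input string and generation lengths are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="efederici/it5-base-summarization")
testo = "Inserire qui un articolo in italiano da riassumere."  # illustrative Italian input
print(summarizer(testo, max_length=64, min_length=10)[0]["summary_text"])
```
|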
elgeish/cs224n-squad2.0-albert-large-v2 | eaf92e70220ca484217941f33d044ffb4ad9de7c | 2020-12-11T21:38:57.000Z | [
"pytorch",
"albert",
"question-answering",
"arxiv:2004.07067",
"transformers",
"exbert",
"autotrain_compatible"
]
| question-answering | false | elgeish | null | elgeish/cs224n-squad2.0-albert-large-v2 | 13 | null | transformers | 10,163 | ---
tags:
- exbert
---
## CS224n SQuAD2.0 Project Dataset
The goal of this model is to save CS224n students GPU time when establishing
baselines to beat for the [Default Final Project](http://web.stanford.edu/class/cs224n/project/default-final-project-handout.pdf).
The training set used to fine-tune this model is the same as
the [official one](https://rajpurkar.github.io/SQuAD-explorer/); however,
evaluation and model selection were performed using roughly half of the official
dev set, 6078 examples, picked at random. The data files can be found at
<https://github.com/elgeish/squad/tree/master/data> — this is the Winter 2020
version. Given that the official SQuAD2.0 dev set contains the project's test
set, students must make sure not to use the official SQuAD2.0 dev set in any way
— including the use of models fine-tuned on the official SQuAD2.0, since they
used the official SQuAD2.0 dev set for model selection.
<a href="https://huggingface.co/exbert/?model=elgeish/cs224n-squad2.0-albert-large-v2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
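For a quick sanity check, a minimal usage sketch (the question and context below are made up for illustration; `handle_impossible_answer=True` lets the pipeline return an empty answer for unanswerable SQuAD2.0-style questions):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="elgeish/cs224n-squad2.0-albert-large-v2")
result = qa(
    question="Which course is the default final project for?",
    context="This checkpoint was fine-tuned on SQuAD2.0 for the CS224n default final project.",
    handle_impossible_answer=True,
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```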
## Results
```json
{
"exact": 79.2694965449161,
"f1": 82.50844352970152,
"total": 6078,
"HasAns_exact": 74.87972508591065,
"HasAns_f1": 81.64478342732858,
"HasAns_total": 2910,
"NoAns_exact": 83.30176767676768,
"NoAns_f1": 83.30176767676768,
"NoAns_total": 3168,
"best_exact": 79.2694965449161,
"best_exact_thresh": 0.0,
"best_f1": 82.50844352970155,
"best_f1_thresh": 0.0
}
```
## Notable Arguments
```json
{
"do_lower_case": true,
"doc_stride": 128,
"fp16": false,
"fp16_opt_level": "O1",
"gradient_accumulation_steps": 1,
"learning_rate": 3e-05,
"max_answer_length": 30,
"max_grad_norm": 1,
"max_query_length": 64,
"max_seq_length": 384,
"model_name_or_path": "albert-large-v2",
"model_type": "albert",
"num_train_epochs": 5,
"per_gpu_train_batch_size": 8,
"save_steps": 5000,
"seed": 42,
"train_batch_size": 8,
"version_2_with_negative": true,
"warmup_steps": 0,
"weight_decay": 0
}
```
## Environment Setup
```json
{
"transformers": "2.5.1",
"pytorch": "1.4.0=py3.6_cuda10.1.243_cudnn7.6.3_0",
"python": "3.6.5=hc3d631a_2",
"os": "Linux 4.15.0-1060-aws #62-Ubuntu SMP Tue Feb 11 21:23:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux",
"gpu": "Tesla V100-SXM2-16GB"
}
```
## How to Cite
```BibTeX
@misc{elgeish2020gestalt,
title={Gestalt: a Stacking Ensemble for SQuAD2.0},
author={Mohamed El-Geish},
journal={arXiv e-prints},
archivePrefix={arXiv},
eprint={2004.07067},
year={2020},
}
```
## Related Models
* [elgeish/cs224n-squad2.0-albert-base-v2](https://huggingface.co/elgeish/cs224n-squad2.0-albert-base-v2)
* [elgeish/cs224n-squad2.0-albert-xxlarge-v1](https://huggingface.co/elgeish/cs224n-squad2.0-albert-xxlarge-v1)
* [elgeish/cs224n-squad2.0-distilbert-base-uncased](https://huggingface.co/elgeish/cs224n-squad2.0-distilbert-base-uncased)
* [elgeish/cs224n-squad2.0-roberta-base](https://huggingface.co/elgeish/cs224n-squad2.0-roberta-base)
|
eliza-dukim/bert-base-finetuned-ynat | 7f45e1b501107d4184f6fa3ce1267b584f25968b | 2021-08-04T10:03:32.000Z | [
"pytorch",
"bert",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer"
]
| text-classification | false | eliza-dukim | null | eliza-dukim/bert-base-finetuned-ynat | 13 | null | transformers | 10,164 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model_index:
- name: bert-base-finetuned-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metric:
name: F1
type: f1
value: 0.8699556378491373
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- F1: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch is shown after the list):
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
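As a rough sketch, the configuration above corresponds approximately to the following `TrainingArguments`; `output_dir` is a placeholder and the per-device interpretation of the batch sizes is an assumption:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-finetuned-ynat",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas/epsilon match the optimizer defaults reported above.
)
```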
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4458 | 0.8516 |
| No log | 2.0 | 358 | 0.3741 | 0.8700 |
| 0.385 | 3.0 | 537 | 0.3720 | 0.8693 |
| 0.385 | 4.0 | 716 | 0.3744 | 0.8689 |
| 0.385 | 5.0 | 895 | 0.3801 | 0.8695 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ensamblador/gpt2-derecha-with-bos-eos-48heads | 2d81a1a1cdc99d3f4fe2390d8c8ae16e4d7bee1c | 2021-05-21T15:49:43.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ensamblador | null | ensamblador/gpt2-derecha-with-bos-eos-48heads | 13 | null | transformers | 10,165 | Entry not found |
ensamblador/gpt2-es-48heads | 5b4bc3c3af930ef4a4580ce2c475f0d5df973e96 | 2021-05-21T15:52:15.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ensamblador | null | ensamblador/gpt2-es-48heads | 13 | null | transformers | 10,166 | Entry not found |
ethzanalytics/ai-msgbot-gpt2-XL | 5a20cc1dd6195563d5666c0aa6f963a9b104423b | 2022-01-20T01:40:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:natural questions",
"transformers",
"gpt",
"license:mit"
]
| text-generation | false | ethzanalytics | null | ethzanalytics/ai-msgbot-gpt2-XL | 13 | null | transformers | 10,167 | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- natural questions
widget:
- text: "Do you like my new haircut?\nperson beta:\n\n"
example_title: "haircut"
- text: "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n"
example_title: "teaching"
- text: "What's your favorite animal? Mine is the dog? \nperson beta:\n\n"
example_title: "favorite"
- text: "how much does it cost?\nperson beta:\n\n"
example_title: "money"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 3
do_sample: True
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
---
# ai-msgbot GPT2-XL
_NOTE: model card is WIP_
GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test is to explicitly append `person beta` to the prompt text, so the model is forced to respond to it rather than simply continue the entered prompt.
### Example prompt:
```
do you like to eat beans?
person beta:
```
### Resulting output
```
do you like to eat beans?person beta:
yes, i like fried beans.
person alpha:
i wonder when the first beans were cultivated and how they were processed.
person beta:
nitrogenic bacteria (in
```
_Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after "(in"_
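To reproduce this locally, here is a minimal generation sketch; the sampling settings mirror the inference parameters in this card's metadata, `max_new_tokens=64` stands in for the `max_length` setting, and the prompt is just the example above:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/ai-msgbot-gpt2-XL"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "do you like to eat beans?\nperson beta:\n\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.85,
    top_k=10,
    no_repeat_ngram_size=3,
    repetition_penalty=2.1,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```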
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
``` |
fdominik98/ner-hu-model-2021 | 221698e40344214254a35055a8dc4ae3d78a4c12 | 2021-12-08T21:34:31.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | fdominik98 | null | fdominik98/ner-hu-model-2021 | 13 | null | transformers | 10,168 | A BERT model prepared for Hungarian-language token classification. |
flax-community/roberta-swahili-news-classification | 415bba1cd4d71d431477a7013b6f627297325b6c | 2021-07-25T10:52:45.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"text-classification",
"sw",
"dataset:flax-community/swahili-safi",
"transformers"
]
| text-classification | false | flax-community | null | flax-community/roberta-swahili-news-classification | 13 | null | transformers | 10,169 | ---
language: sw
widget:
- text: "Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake kumpigia Zari magoti kumuomba msamaha kama alivyowahi kueleza awali.Idris ameandika;"
datasets:
- flax-community/swahili-safi
---
## Swahili News Classification with RoBERTa
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
This [model](https://huggingface.co/flax-community/roberta-swahili) was used as the base and fine-tuned for this task.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-swahili-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("flax-community/roberta-swahili-news-classification")
```
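Building on the snippet above (reusing `tokenizer` and `model`), a short inference sketch; the example sentence is the widget text from this card and the label names come from the model config:
```python
import torch

text = "Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```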
```
Eval metrics: {'accuracy': 0.9153416415986249}
```
|
ghadeermobasher/BC4_Modified-bluebert_pubmed_uncased_L-12_H-768_A-12 | 235adf5fa88c727b6fb3df8fb07339e87d1e7e56 | 2022-02-22T20:08:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4_Modified-bluebert_pubmed_uncased_L-12_H-768_A-12 | 13 | null | transformers | 10,170 | Entry not found |
glob-asr/xls-r-es-test-lm | a1d118795c3350b3fb2876e4d30cf29cdbe4ffe7 | 2022-03-23T18:26:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | glob-asr | null | glob-asr/xls-r-es-test-lm | 13 | null | transformers | 10,171 | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-es-test-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: es
metrics:
- name: Test WER
type: wer
value: 9.4
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 27.95
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 30.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-es-test-lm
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ES dataset.
It achieves the following results on the test set with the language model:
- Loss: 0.1304
- WER: 0.094
- CER: 0.031
It achieves the following results on the validation set with the language model:
- Loss: 0.1304
- WER: 0.081
- CER: 0.025
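A minimal transcription sketch (the audio path is illustrative and should point to a local 16 kHz Spanish recording; LM-boosted decoding additionally requires `pyctcdecode` and `kenlm`, and decoding audio files requires `ffmpeg`):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="glob-asr/xls-r-es-test-lm")
print(asr("ejemplo.wav")["text"])  # illustrative path to a local audio file
```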
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9613 | 0.07 | 500 | 2.9647 | 1.0 |
| 2.604 | 0.14 | 1000 | 1.8300 | 0.9562 |
| 1.177 | 0.21 | 1500 | 0.3652 | 0.3077 |
| 1.0745 | 0.28 | 2000 | 0.2707 | 0.2504 |
| 1.0103 | 0.35 | 2500 | 0.2338 | 0.2157 |
| 0.9858 | 0.42 | 3000 | 0.2321 | 0.2129 |
| 0.974 | 0.49 | 3500 | 0.2164 | 0.2031 |
| 0.9699 | 0.56 | 4000 | 0.2078 | 0.1970 |
| 0.9513 | 0.63 | 4500 | 0.2173 | 0.2139 |
| 0.9657 | 0.7 | 5000 | 0.2050 | 0.1979 |
| 0.9484 | 0.77 | 5500 | 0.2008 | 0.1919 |
| 0.9317 | 0.84 | 6000 | 0.2012 | 0.1911 |
| 0.9366 | 0.91 | 6500 | 0.2024 | 0.1976 |
| 0.9242 | 0.98 | 7000 | 0.2062 | 0.2028 |
| 0.9138 | 1.05 | 7500 | 0.1924 | 0.1863 |
| 0.921 | 1.12 | 8000 | 0.1935 | 0.1836 |
| 0.9117 | 1.19 | 8500 | 0.1887 | 0.1815 |
| 0.9064 | 1.26 | 9000 | 0.1909 | 0.1839 |
| 0.9118 | 1.32 | 9500 | 0.1869 | 0.1830 |
| 0.9121 | 1.39 | 10000 | 0.1863 | 0.1802 |
| 0.9048 | 1.46 | 10500 | 0.1845 | 0.1791 |
| 0.8955 | 1.53 | 11000 | 0.1863 | 0.1774 |
| 0.8947 | 1.6 | 11500 | 0.1907 | 0.1814 |
| 0.9073 | 1.67 | 12000 | 0.1892 | 0.1853 |
| 0.8927 | 1.74 | 12500 | 0.1821 | 0.1750 |
| 0.8732 | 1.81 | 13000 | 0.1815 | 0.1768 |
| 0.8761 | 1.88 | 13500 | 0.1822 | 0.1749 |
| 0.8751 | 1.95 | 14000 | 0.1789 | 0.1715 |
| 0.8889 | 2.02 | 14500 | 0.1819 | 0.1791 |
| 0.8864 | 2.09 | 15000 | 0.1826 | 0.1794 |
| 0.886 | 2.16 | 15500 | 0.1788 | 0.1776 |
| 0.8915 | 2.23 | 16000 | 0.1756 | 0.1719 |
| 0.8689 | 2.3 | 16500 | 0.1769 | 0.1711 |
| 0.879 | 2.37 | 17000 | 0.1777 | 0.1739 |
| 0.8692 | 2.44 | 17500 | 0.1765 | 0.1705 |
| 0.8504 | 2.51 | 18000 | 0.1699 | 0.1652 |
| 0.8728 | 2.58 | 18500 | 0.1705 | 0.1694 |
| 0.8523 | 2.65 | 19000 | 0.1674 | 0.1645 |
| 0.8513 | 2.72 | 19500 | 0.1661 | 0.1611 |
| 0.8498 | 2.79 | 20000 | 0.1660 | 0.1631 |
| 0.8432 | 2.86 | 20500 | 0.1636 | 0.1610 |
| 0.8492 | 2.93 | 21000 | 0.1708 | 0.1688 |
| 0.8561 | 3.0 | 21500 | 0.1663 | 0.1604 |
| 0.842 | 3.07 | 22000 | 0.1690 | 0.1625 |
| 0.857 | 3.14 | 22500 | 0.1642 | 0.1605 |
| 0.8518 | 3.21 | 23000 | 0.1626 | 0.1585 |
| 0.8506 | 3.28 | 23500 | 0.1651 | 0.1605 |
| 0.8394 | 3.35 | 24000 | 0.1647 | 0.1585 |
| 0.8431 | 3.42 | 24500 | 0.1632 | 0.1573 |
| 0.8566 | 3.49 | 25000 | 0.1614 | 0.1550 |
| 0.8534 | 3.56 | 25500 | 0.1645 | 0.1589 |
| 0.8386 | 3.63 | 26000 | 0.1632 | 0.1582 |
| 0.8357 | 3.7 | 26500 | 0.1631 | 0.1556 |
| 0.8299 | 3.77 | 27000 | 0.1612 | 0.1550 |
| 0.8421 | 3.84 | 27500 | 0.1602 | 0.1552 |
| 0.8375 | 3.91 | 28000 | 0.1592 | 0.1537 |
| 0.8328 | 3.97 | 28500 | 0.1587 | 0.1537 |
| 0.8155 | 4.04 | 29000 | 0.1587 | 0.1520 |
| 0.8335 | 4.11 | 29500 | 0.1624 | 0.1556 |
| 0.8138 | 4.18 | 30000 | 0.1581 | 0.1547 |
| 0.8195 | 4.25 | 30500 | 0.1560 | 0.1507 |
| 0.8092 | 4.32 | 31000 | 0.1561 | 0.1534 |
| 0.8191 | 4.39 | 31500 | 0.1549 | 0.1493 |
| 0.8008 | 4.46 | 32000 | 0.1540 | 0.1493 |
| 0.8138 | 4.53 | 32500 | 0.1544 | 0.1493 |
| 0.8173 | 4.6 | 33000 | 0.1553 | 0.1511 |
| 0.8081 | 4.67 | 33500 | 0.1541 | 0.1484 |
| 0.8192 | 4.74 | 34000 | 0.1560 | 0.1506 |
| 0.8068 | 4.81 | 34500 | 0.1540 | 0.1503 |
| 0.8105 | 4.88 | 35000 | 0.1529 | 0.1483 |
| 0.7976 | 4.95 | 35500 | 0.1507 | 0.1451 |
| 0.8143 | 5.02 | 36000 | 0.1505 | 0.1462 |
| 0.8053 | 5.09 | 36500 | 0.1517 | 0.1476 |
| 0.785 | 5.16 | 37000 | 0.1526 | 0.1478 |
| 0.7936 | 5.23 | 37500 | 0.1489 | 0.1421 |
| 0.807 | 5.3 | 38000 | 0.1483 | 0.1420 |
| 0.8092 | 5.37 | 38500 | 0.1481 | 0.1435 |
| 0.793 | 5.44 | 39000 | 0.1503 | 0.1438 |
| 0.814 | 5.51 | 39500 | 0.1495 | 0.1480 |
| 0.807 | 5.58 | 40000 | 0.1472 | 0.1424 |
| 0.7913 | 5.65 | 40500 | 0.1471 | 0.1422 |
| 0.7844 | 5.72 | 41000 | 0.1473 | 0.1422 |
| 0.7888 | 5.79 | 41500 | 0.1445 | 0.1385 |
| 0.7806 | 5.86 | 42000 | 0.1435 | 0.1394 |
| 0.7773 | 5.93 | 42500 | 0.1461 | 0.1424 |
| 0.786 | 6.0 | 43000 | 0.1450 | 0.1413 |
| 0.7784 | 6.07 | 43500 | 0.1463 | 0.1424 |
| 0.7937 | 6.14 | 44000 | 0.1438 | 0.1386 |
| 0.7738 | 6.21 | 44500 | 0.1437 | 0.1383 |
| 0.7728 | 6.28 | 45000 | 0.1424 | 0.1371 |
| 0.7681 | 6.35 | 45500 | 0.1416 | 0.1376 |
| 0.776 | 6.42 | 46000 | 0.1415 | 0.1380 |
| 0.7773 | 6.49 | 46500 | 0.1416 | 0.1371 |
| 0.7692 | 6.56 | 47000 | 0.1398 | 0.1345 |
| 0.7642 | 6.62 | 47500 | 0.1381 | 0.1341 |
| 0.7692 | 6.69 | 48000 | 0.1392 | 0.1334 |
| 0.7667 | 6.76 | 48500 | 0.1392 | 0.1348 |
| 0.7712 | 6.83 | 49000 | 0.1398 | 0.1333 |
| 0.7628 | 6.9 | 49500 | 0.1392 | 0.1344 |
| 0.7622 | 6.97 | 50000 | 0.1377 | 0.1329 |
| 0.7639 | 7.04 | 50500 | 0.1361 | 0.1316 |
| 0.742 | 7.11 | 51000 | 0.1376 | 0.1327 |
| 0.7526 | 7.18 | 51500 | 0.1387 | 0.1342 |
| 0.7606 | 7.25 | 52000 | 0.1363 | 0.1316 |
| 0.7626 | 7.32 | 52500 | 0.1365 | 0.1313 |
| 0.752 | 7.39 | 53000 | 0.1354 | 0.1309 |
| 0.7562 | 7.46 | 53500 | 0.1362 | 0.1312 |
| 0.7557 | 7.53 | 54000 | 0.1358 | 0.1325 |
| 0.7588 | 7.6 | 54500 | 0.1343 | 0.1311 |
| 0.7485 | 7.67 | 55000 | 0.1346 | 0.1301 |
| 0.7466 | 7.74 | 55500 | 0.1354 | 0.1314 |
| 0.7558 | 7.81 | 56000 | 0.1359 | 0.1325 |
| 0.7578 | 7.88 | 56500 | 0.1363 | 0.1334 |
| 0.7411 | 7.95 | 57000 | 0.1346 | 0.1301 |
| 0.7478 | 8.02 | 57500 | 0.1355 | 0.1305 |
| 0.7451 | 8.09 | 58000 | 0.1349 | 0.1302 |
| 0.7383 | 8.16 | 58500 | 0.1349 | 0.1294 |
| 0.7482 | 8.23 | 59000 | 0.1341 | 0.1293 |
| 0.742 | 8.3 | 59500 | 0.1338 | 0.1296 |
| 0.7343 | 8.37 | 60000 | 0.1348 | 0.1307 |
| 0.7385 | 8.44 | 60500 | 0.1324 | 0.1282 |
| 0.7567 | 8.51 | 61000 | 0.1334 | 0.1281 |
| 0.7342 | 8.58 | 61500 | 0.1338 | 0.1289 |
| 0.7401 | 8.65 | 62000 | 0.1331 | 0.1285 |
| 0.7362 | 8.72 | 62500 | 0.1329 | 0.1283 |
| 0.7241 | 8.79 | 63000 | 0.1323 | 0.1277 |
| 0.7244 | 8.86 | 63500 | 0.1317 | 0.1269 |
| 0.7274 | 8.93 | 64000 | 0.1308 | 0.1260 |
| 0.7411 | 9.0 | 64500 | 0.1309 | 0.1256 |
| 0.7255 | 9.07 | 65000 | 0.1316 | 0.1265 |
| 0.7406 | 9.14 | 65500 | 0.1315 | 0.1270 |
| 0.7418 | 9.21 | 66000 | 0.1315 | 0.1269 |
| 0.7301 | 9.27 | 66500 | 0.1315 | 0.1273 |
| 0.7248 | 9.34 | 67000 | 0.1323 | 0.1274 |
| 0.7423 | 9.41 | 67500 | 0.1309 | 0.1267 |
| 0.7152 | 9.48 | 68000 | 0.1312 | 0.1271 |
| 0.7295 | 9.55 | 68500 | 0.1306 | 0.1262 |
| 0.7231 | 9.62 | 69000 | 0.1308 | 0.1263 |
| 0.7344 | 9.69 | 69500 | 0.1313 | 0.1267 |
| 0.7264 | 9.76 | 70000 | 0.1305 | 0.1263 |
| 0.7309 | 9.83 | 70500 | 0.1303 | 0.1262 |
| 0.73 | 9.9 | 71000 | 0.1303 | 0.1261 |
| 0.7353 | 9.97 | 71500 | 0.1304 | 0.1260 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
google/t5-efficient-small-el16 | 255572c8fd526d7034bccd2bd2fa82ce6ca55bcb | 2022-02-15T10:57:43.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-small-el16 | 13 | 1 | transformers | 10,172 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-SMALL-EL16 (Deep-Narrow version)
T5-Efficient-SMALL-EL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-el16** - is of model type **Small** with the following variations:
- **el** is **16**
It has **92.0** million parameters and thus requires *ca.* **367.99 MB** of memory in full precision (*fp32*)
or **183.99 MB** of memory in half precision (*fp16* or *bf16*).
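A quick way to sanity-check the reported parameter count and fp32 footprint (a sketch; the printed numbers are approximate):
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-small-el16")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")         # ~92.0M
print(f"{n_params * 4 / 1024**2:.2f} MB in fp32")  # ~368 MB at 4 bytes per parameter
```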
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/t5-efficient-xl-nl12 | 290c6580f196abe58d1ac72d3f5ac01461e6f5f1 | 2022-02-15T10:57:37.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"arxiv:2109.10686",
"transformers",
"deep-narrow",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | google | null | google/t5-efficient-xl-nl12 | 13 | 1 | transformers | 10,173 | ---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---
# T5-Efficient-XL-NL12 (Deep-Narrow version)
T5-Efficient-XL-NL12 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-xl-nl12** - is of model type **Xl** with the following variations:
- **nl** is **12**
It has **1442.28** million parameters and thus requires *ca.* **5769.12 MB** of memory in full precision (*fp32*)
or **2884.56 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. |
google/tapas-mini-finetuned-sqa | a96c94625773691bf48c424d4f3c3079d869fe34 | 2021-11-29T13:10:09.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:msr_sqa",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"license:apache-2.0"
]
| table-question-answering | false | google | null | google/tapas-mini-finetuned-sqa | 13 | 1 | transformers | 10,174 | ---
language: en
tags:
- tapas
license: apache-2.0
datasets:
- msr_sqa
---
# TAPAS mini model fine-tuned on Sequential Question Answering (SQA)
This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained with the MLM objective and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. it resets the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_sqa_inter_masklm_mini` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results on SQA - Dev Accuracy
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset)
LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main)
BASE | noreset | 0.6737 | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset)
BASE | reset | 0.6874 | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main)
MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset)
MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main)
SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset)
SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main)
**MINI** | **noreset** | **0.4574** | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset)
**MINI** | **reset** | **0.5148** | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)
TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset)
TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly
train this randomly initialized classification head with the base model on SQA.
## Intended uses & limitations
You can use this model for answering questions related to a table in a conversational set-up.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
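As a quick illustration, here is a minimal sketch with the `table-question-answering` pipeline; the toy table is made up for this example, table cells are passed as strings, and depending on your `transformers` version TAPAS may additionally require `torch-scatter`:
```python
import pandas as pd
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-mini-finetuned-sqa")

table = pd.DataFrame(
    {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
     "Age": ["56", "45", "59"]}
)
print(tqa(table=table, query="How old is Brad Pitt?")["answer"])
```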
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio
of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@InProceedings{iyyer2017search-based,
author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
title = {Search-based Neural Structured Learning for Sequential Question Answering},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
year = {2017},
month = {July},
abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
publisher = {Association for Computational Linguistics},
url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
``` |
hfl/chinese-electra-180g-large-generator | e3bbab438ed06d3372e2c118f3ad86ea73e65376 | 2021-03-03T01:27:24.000Z | [
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0",
"fill-mask"
]
| fill-mask | false | hfl | null | hfl/chinese-electra-180g-large-generator | 13 | null | transformers | 10,175 | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This model is trained on 180G of data; we recommend using this one rather than the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
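Since this checkpoint is the *generator* of Chinese ELECTRA, it can be exercised as a masked language model; a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/chinese-electra-180g-large-generator")
for pred in fill_mask("哈尔滨是[MASK]龙江省的省会。")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```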
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resources or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
howey/electra-large-squad2 | 721cb8eafe5e3b0cfdaea49f10fb20fbcb63a54b | 2021-06-15T03:49:42.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | howey | null | howey/electra-large-squad2 | 13 | null | transformers | 10,176 | Entry not found |
huggingartists/adele | 2b69ef91081a5c6922fce8bb8e5a3bc489f879fd | 2021-10-20T04:50:21.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/adele",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/adele | 13 | null | transformers | 10,177 | ---
language: en
datasets:
- huggingartists/adele
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/4c3ac1f1d845d251671a892309b5f9b5.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Adele</div>
<a href="https://genius.com/artists/adele">
<div style="text-align: center; font-size: 14px;">@adele</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Adele.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/adele).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/adele")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1yyqw6ss/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Adele's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3qruwjpr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3qruwjpr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/adele')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/adele")
model = AutoModelWithLMHead.from_pretrained("huggingartists/adele")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/muse | bc4aa5d490ee7a3aa94762b67059377ea787b82e | 2021-09-23T11:41:30.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/muse",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/muse | 13 | null | transformers | 10,178 | ---
language: en
datasets:
- huggingartists/muse
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/26f575585ec649d88d09a1e402bb936b.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Muse</div>
<a href="https://genius.com/artists/muse">
<div style="text-align: center; font-size: 14px;">@muse</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Muse.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/muse).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/muse")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3w58rwod/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Muse's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3j03atcr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3j03atcr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/muse')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/muse")
model = AutoModelWithLMHead.from_pretrained("huggingartists/muse")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingartists/the-beatles | 7612f16f9c7ff044b345552ec76e6ba020a2b1ef | 2022-02-27T11:47:43.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/the-beatles",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/the-beatles | 13 | null | transformers | 10,179 | ---
language: en
datasets:
- huggingartists/the-beatles
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/c771d3ee1c0969503cdaf34edf76f38a.400x400x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Beatles</div>
<a href="https://genius.com/artists/the-beatles">
<div style="text-align: center; font-size: 14px;">@the-beatles</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from The Beatles.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/the-beatles).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-beatles")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2p2c5864/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The Beatles's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/286vzjah) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/286vzjah/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/the-beatles')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-beatles")
model = AutoModelWithLMHead.from_pretrained("huggingartists/the-beatles")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/dril-hostagekiller-suicidepussy | db09a25ce9c1bce635f4fd5ef4371328c8fbaef3 | 2022-01-10T10:25:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/dril-hostagekiller-suicidepussy | 13 | null | transformers | 10,180 | ---
language: en
thumbnail: http://www.huggingtweets.com/dril-hostagekiller-suicidepussy/1641810324627/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1473236995497500675/FtwXDZld_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1322637724470358022/ccOsLDPE_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">HUSSY2K. & wint & I have 400 diseases</div>
<div style="text-align: center; font-size: 14px;">@dril-hostagekiller-suicidepussy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from HUSSY2K. & wint & I have 400 diseases.
| Data | HUSSY2K. | wint | I have 400 diseases |
| --- | --- | --- | --- |
| Tweets downloaded | 3186 | 3226 | 3237 |
| Retweets | 819 | 480 | 121 |
| Short tweets | 395 | 304 | 1125 |
| Tweets kept | 1972 | 2442 | 1991 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bqo2ddu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-hostagekiller-suicidepussy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/o4ya0wuw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/o4ya0wuw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-hostagekiller-suicidepussy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/logo_daedalus | 28f16ea83fff3a2d9e25c8ad6175df48226913a4 | 2022-07-01T22:12:42.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/logo_daedalus | 13 | null | transformers | 10,181 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1491246058206216192/qUZ_ddCV_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">R.Сам 🦋🐏</div>
<div style="text-align: center; font-size: 14px;">@logo_daedalus</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from R.Сам 🦋🐏.
| Data | R.Сам 🦋🐏 |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 284 |
| Short tweets | 397 |
| Tweets kept | 2563 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mm5v8je/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @logo_daedalus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mr4fz6a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mr4fz6a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/logo_daedalus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
infinitejoy/wav2vec2-large-xls-r-300m-tatar | b51bf6f5dc695a6233f081dc329e7f071d4fe6ec | 2022-03-24T11:52:33.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tt",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | infinitejoy | null | infinitejoy/wav2vec2-large-xls-r-300m-tatar | 13 | null | transformers | 10,182 | ---
language:
- tt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- tt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Tatar
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: tt
metrics:
- name: Test WER
type: wer
value: 24.392
- name: Test CER
type: cer
value: 5.024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tatar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1959
- Wer: 0.2454
## Model description
More information needed
## Intended uses & limitations
More information needed
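In the meantime, a minimal transcription sketch is shown below. It assumes a 16 kHz mono recording (matching the XLS-R pretraining setup); the file name `sample_tt.wav` is only a placeholder.
```python
from transformers import pipeline

# load the fine-tuned Tatar checkpoint as an ASR pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-tatar",
)

# "sample_tt.wav" is a placeholder for any 16 kHz mono recording in Tatar
transcription = asr("sample_tt.wav")
print(transcription["text"])
```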
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.173 | 9.66 | 4000 | 0.2920 | 0.3608 |
| 0.9433 | 19.32 | 8000 | 0.2336 | 0.3026 |
| 0.8552 | 28.99 | 12000 | 0.2221 | 0.2799 |
| 0.7863 | 38.65 | 16000 | 0.1953 | 0.2479 |
| 0.7365 | 48.31 | 20000 | 0.1968 | 0.2449 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
it5/it5-large-headline-generation | 6c5a865e663942d85a8ac6843c56b3e6bae2233a | 2022-03-09T07:59:47.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:gsarti/change_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"newspaper",
"ilgiornale",
"repubblica",
"headline-generation",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/it5-large-headline-generation | 13 | null | transformers | 10,183 | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- ilgiornale
- repubblica
- headline-generation
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
model-index:
- name: it5-large-headline-generation
results:
- task:
type: headline-generation
name: "Headline generation"
dataset:
type: headgen_it
name: "HeadGen-IT"
metrics:
- type: rouge1
value: 0.308
name: "Test Rouge1"
- type: rouge2
value: 0.113
name: "Test Rouge2"
- type: rougeL
value: 0.270
name: "Test RougeL"
- type: bertscore
value: 0.430
name: "Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "51g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Large for News Headline Generation 📣 🇮🇹
This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
hg = pipeline("text2text-generation", model='it5/it5-large-headline-generation')
hg("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-headline-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-headline-generation")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
it5/mt5-base-formal-to-informal | 3f009d964085cd9a54db70f743c40a161675201e | 2022-03-09T07:44:08.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:yahoo/xformal_it",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"style-transfer",
"formality-style-transfer",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | it5 | null | it5/mt5-base-formal-to-informal | 13 | null | transformers | 10,184 | ---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "Questa performance è a dir poco spiacevole."
- text: "In attesa di un Suo cortese riscontro, Le auguriamo un piacevole proseguimento di giornata."
- text: "Questa visione mi procura una goduria indescrivibile."
- text: "qualora ciò possa interessarti, ti pregherei di contattarmi."
metrics:
- rouge
- bertscore
model-index:
- name: mt5-base-formal-to-informal
results:
- task:
type: formality-style-transfer
name: "Formal-to-informal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.653
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.449
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.632
name: "Avg. Test RougeL"
- type: bertscore
value: 0.667
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "40g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
---
# mT5 Base for Formal-to-informal Style Transfer 🤗
This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
f2i = pipeline("text2text-generation", model='it5/mt5-base-formal-to-informal')
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-formal-to-informal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
jonfd/electra-base-igc-is | e2921de06b441e2a3066da485d6fa31cf5c816a8 | 2022-01-05T14:54:23.000Z | [
"pytorch",
"electra",
"pretraining",
"is",
"dataset:igc",
"transformers",
"license:cc-by-4.0"
]
| null | false | jonfd | null | jonfd/electra-base-igc-is | 13 | null | transformers | 10,185 | ---
language:
- is
license: cc-by-4.0
datasets:
- igc
---
# Icelandic ELECTRA-Base
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
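The checkpoint can be loaded with the standard `transformers` auto-classes. The sketch below only extracts contextual embeddings from an illustrative Icelandic sentence; fine-tuning a downstream head follows the usual workflow.
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jonfd/electra-base-igc-is")
model = AutoModel.from_pretrained("jonfd/electra-base-igc-is")

# illustrative Icelandic sentence; any text works
inputs = tokenizer("Halló, hvað segirðu gott?", return_tensors="pt")
outputs = model(**inputs)

# token-level contextual embeddings: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```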
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. |
kaporter/bert-base-uncased-finetuned-squad | 3aa55a541df2d16870d3ca5074673d7f90cc008d | 2021-11-30T22:42:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | kaporter | null | kaporter/bert-base-uncased-finetuned-squad | 13 | null | transformers | 10,186 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model_index:
- name: bert-base-uncased-finetuned-squad
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: squad
type: squad
args: plain_text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0725
## Model description
More information needed
## Intended uses & limitations
More information needed
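In the meantime, the checkpoint can be used for extractive question answering with the standard pipeline; the question/context pair below is purely illustrative.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="kaporter/bert-base-uncased-finetuned-squad",
    tokenizer="kaporter/bert-base-uncased-finetuned-squad",
)

# illustrative inputs; replace with your own question and context
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```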
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0749 | 1.0 | 5533 | 1.0167 |
| 0.7851 | 2.0 | 11066 | 1.0299 |
| 0.6067 | 3.0 | 16599 | 1.0725 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.8.1
- Datasets 1.16.1
- Tokenizers 0.10.1
|
krevas/finance-electra-small-generator | b2132e311db62d3567cdb24e3b4822814f2d8ce5 | 2020-07-09T05:47:53.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | krevas | null | krevas/finance-electra-small-generator | 13 | null | transformers | 10,187 | Entry not found |
leonweber/PEDL | a02c96ba7996c1d89b796c251d6163e0aaad187f | 2021-06-16T09:19:35.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | leonweber | null | leonweber/PEDL | 13 | null | transformers | 10,188 | Entry not found |
lvwerra/gpt2-medium-taboo | acdbb5d8843d8fcc6373d8e0fefae0f77fb3fdc7 | 2021-05-23T08:40:35.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | lvwerra | null | lvwerra/gpt2-medium-taboo | 13 | null | transformers | 10,189 | # GPT-2 (medium) Taboo
## What is it?
A fine-tuned GPT-2 (medium) model for Taboo card generation.
## Training setting
The model was trained on ~900 Taboo cards in the following format for 100 epochs:
```
Describe the word Glitch without using the words Problem, Unexpected, Technology, Minor, Outage.
```
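A minimal generation sketch is shown below; the sampling settings are illustrative rather than the ones used during training.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="lvwerra/gpt2-medium-taboo")

# prompt with the same prefix the model saw during fine-tuning
prompt = "Describe the word"
cards = generator(prompt, max_length=48, num_return_sequences=3, do_sample=True)

for card in cards:
    print(card["generated_text"])
```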
|
madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1 | 23d3a42f78c42887cecbb3d1584299790bebdfa4 | 2021-06-16T17:10:27.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"license:mit",
"autotrain_compatible"
]
| question-answering | false | madlag | null | madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1 | 13 | null | transformers | 10,190 | ---
language: en
thumbnail:
license: mit
tags:
- question-answering
datasets:
- squad_v2
metrics:
- squad_v2
widget:
- text: "Where is the Eiffel Tower located?"
context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
- text: "Who is Frederic Chopin?"
context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano."
---
## bert-large-uncased-whole-word-masking model fine-tuned on SQuAD v2
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 25.0%** of the original weights.
The model contains **32.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
With a simple resizing of the linear matrices it ran **2.15x as fast as bert-large-uncased-whole-word-masking** on the evaluation.
This is possible because the pruning method leads to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix.
<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1/raw/main/model_card/density_info.js" id="d55f6096-07eb-4cc1-b284-90ec6ced516c"></script></div>
In terms of accuracy, its **F1 is 83.22**, compared with 85.85 for bert-large-uncased-whole-word-masking, an **F1 drop of 2.63**.
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-large-uncased-whole-word-masking) checkpoint on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2](https://huggingface.co/madlag/bert-large-uncased-whole-word-masking-finetuned-squadv2).
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning is that some of the attention heads are completely removed: 155 heads were removed out of a total of 384 (40.4%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1/raw/main/model_card/pruning_info.js" id="a474f11e-7e05-495e-bb21-4af0edfb6661"></script></div>
## Details of the SQuAD 2.0 dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD 2.0 | train | 130.0K |
| SQuAD 2.0 | eval | 11.9k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```
CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `1119MB` (original BERT: `1228.0MB`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **80.19** | **82.83** | **-3.64**|
| **F1** | **83.22** | **85.85** | **-2.63**|
```
{
"HasAns_exact": 76.48448043184885,
"HasAns_f1": 82.55514100819374,
"HasAns_total": 5928,
"NoAns_exact": 83.8856181665265,
"NoAns_f1": 83.8856181665265,
"NoAns_total": 5945,
"best_exact": 80.19034784805862,
"best_exact_thresh": 0.0,
"best_f1": 83.22133208932635,
"best_f1_thresh": 0.0,
"exact": 80.19034784805862,
"f1": 83.22133208932645,
"total": 11873
}
```
## Example Usage
Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.
`pip install nn_pruning`
Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` when the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1",
tokenizer="madlag/bert-large-uncased-wwm-squadv2-x2.15-f83.2-d25-hybrid-v1"
)
print("bert-large-uncased-whole-word-masking parameters: 497.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")
qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")
print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print("Predictions", predictions)
``` |
mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili | 5c566e84e39460721bf4085407c50e614ca09c0a | 2021-11-25T09:04:02.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"sw",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili | 13 | null | transformers | 10,191 | ---
language:
- sw
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
---
# xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base-finetuned-igbo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) (This model) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 |
| [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 |
| [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 |
| [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 |
| [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 |
| [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 |
| [xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
ner_results = nlp(example)
print(ner_results)
```
|
mbeukman/xlm-roberta-base-finetuned-ner-luo | f7fea8aedb45790b6a89018658165aafb45d0b45 | 2021-11-25T09:04:35.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"luo",
"dataset:masakhaner",
"arxiv:2103.11811",
"transformers",
"NER",
"autotrain_compatible"
]
| token-classification | false | mbeukman | null | mbeukman/xlm-roberta-base-finetuned-ner-luo | 13 | null | transformers | 10,192 | ---
language:
- luo
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
widget:
- text: "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
---
# xlm-roberta-base-finetuned-ner-luo
This is a token classification (specifically NER) model that fine-tuned [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Luo part.
More information, and other similar models can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer).
## About
This model is transformer based and was fine-tuned on the MasakhaNER dataset. It is a named entity recognition dataset, containing mostly news articles in 10 different African languages.
The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, 32 batch size, 5e-5 learning rate. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set).
This model was fine-tuned by me, Michael Beukman while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021.
This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Contact & More information
For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository.
### Training Resources
In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1.
## Data
The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality.
The motivation for the use of this data is that it is the "first large, publicly available, high quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811).
## Intended Use
This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as its generalisability and overall performance are limited. In particular, it is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations
This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer.
Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), this model's limitations can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data).
As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often.
Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to.
### Privacy & Ethical Considerations
The data comes from only publicly available news sources, the only available data should cover public figures and those that agreed to be reported on. See the original MasakhaNER paper for more details.
No explicit ethical considerations or adjustments were made during fine-tuning of this model.
## Metrics
The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories.
These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set, so these results do not directly indicate how well these models generalise.
We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable.
The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes.
## Caveats and Recommendations
In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data.
## Model Structure
Here are some performance details on this specific model, compared to others we trained.
All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the latter 4 provide performance broken down by category.
This model can predict the following label for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
| Model Name | Staring point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
| -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| [xlm-roberta-base-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-luo) (This model) | [base](https://huggingface.co/xlm-roberta-base) | luo | 75.99 | 76.18 | 75.80 | 71.00 | 76.00 | 62.00 | 85.00 |
| [xlm-roberta-base-finetuned-luo-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-luo) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | luo | 78.71 | 78.91 | 78.52 | 72.00 | 84.00 | 59.00 | 87.00 |
| [xlm-roberta-base-finetuned-swahili-finetuned-ner-luo](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | luo | 78.13 | 77.75 | 78.52 | 65.00 | 82.00 | 61.00 | 89.00 |
## Usage
To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)):
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
model_name = 'mbeukman/xlm-roberta-base-finetuned-ner-luo'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e sub county ma Rachuonyo East e County ma Homa Bay ewii odhiambo makawuononi"
ner_results = nlp(example)
print(ner_results)
```
|
microsoft/wavlm-base-sv | 0a23162ffc49adcf42bdf836a00cb2eb45af3601 | 2022-03-25T12:05:52.000Z | [
"pytorch",
"wavlm",
"audio-xvector",
"en",
"arxiv:2110.13900",
"transformers",
"speech"
]
| null | false | microsoft | null | microsoft/wavlm-base-sv | 13 | null | transformers | 10,193 | ---
language:
- en
tags:
- speech
---
# WavLM-Base for Speaker Verification
[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on 960h of [Librispeech](https://huggingface.co/datasets/librispeech_asr).
[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*
The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.
# Fine-tuning details
The model is fine-tuned on the [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) using an X-Vector head with an Additive Margin Softmax loss
[X-Vectors: Robust DNN Embeddings for Speaker Recognition](https://www.danielpovey.com/files/2018_icassp_xvectors.pdf)
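For readers unfamiliar with the loss, below is a minimal PyTorch sketch of an Additive Margin Softmax objective over speaker classes; the scale and margin values are common defaults chosen for illustration and are not necessarily the ones used for this checkpoint.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Cross-entropy over scaled cosine logits, with a fixed margin
    subtracted from the target-class cosine similarity."""
    def __init__(self, embed_dim, num_speakers, scale=30.0, margin=0.4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_speakers, embed_dim))
        self.scale = scale
        self.margin = margin

    def forward(self, embeddings, labels):
        # cosine similarity between L2-normalized embeddings and class weight vectors
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # subtract the additive margin from the target class only
        one_hot = F.one_hot(labels, num_classes=cos.size(1)).to(cos.dtype)
        logits = self.scale * (cos - one_hot * self.margin)
        return F.cross_entropy(logits, labels)
```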
# Usage
## Speaker Verification
```python
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-sv')
model = WavLMForXVector.from_pretrained('microsoft/wavlm-base-sv')
# audio files are decoded on the fly; batch the two clips and pad them to a common length
audio = [x["array"] for x in dataset[:2]["audio"]]
inputs = feature_extractor(audio, padding=True, sampling_rate=16000, return_tensors="pt")
embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.86 # the optimal threshold is dataset-dependent
if similarity < threshold:
print("Speakers are not the same!")
```
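The `0.86` threshold above is only illustrative. One common way to set it is to take the equal-error-rate operating point on a labelled trial list; the sketch below uses placeholder scores and is not part of the original card.
```python
import numpy as np
from sklearn.metrics import roc_curve

# toy trial list: 1 = same speaker, 0 = different speaker (placeholder values)
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.91, 0.88, 0.79, 0.42, 0.55, 0.30])

fpr, tpr, thresholds = roc_curve(labels, scores)
fnr = 1 - tpr
eer_index = np.nanargmin(np.abs(fnr - fpr))  # point where false accepts ~= false rejects
print("EER threshold:", thresholds[eer_index])
```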
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
 |
mishig/my-awesome-model | 22931baac60296ee00a8cd9d2a32b81a2dd95973 | 2021-08-25T10:28:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | mishig | null | mishig/my-awesome-model | 13 | null | transformers | 10,194 | # Sentiment classification with pretrained bert-base-cased
A test repo exploring HF Model Hub by following https://huggingface.co/transformers/model_sharing.html |
motiondew/set_date_1_bert-base-uncased_finetuned_with_haystack | 112229ba4ceeed57709eefb27823026442cf7529 | 2021-06-21T17:04:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | motiondew | null | motiondew/set_date_1_bert-base-uncased_finetuned_with_haystack | 13 | null | transformers | 10,195 | Entry not found |
mrm8488/bert-tiny-finetuned-fake-news-detection | 77911c5829206b123f51dbcfca5f663175315365 | 2021-10-15T16:00:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"transformers"
]
| text-classification | false | mrm8488 | null | mrm8488/bert-tiny-finetuned-fake-news-detection | 13 | null | transformers | 10,196 | ---
language: en
widget:
- text: "It s official the inmates are running the asylum A police department in Northampton, Massachusetts is ending its High-Five Friday program at local elementary schools due to concerns that undocumented children and others may feel uncomfortable seeing an officer at school.The program, started by the Northampton Police Department in December, had officers stand outside of a school each Friday morning to high-five students as they walked in to begin the day. WFBToday was High-5 Friday at Bridge St School! Thanks to everyone who participated! The kids and officers all had fun! #highfiveHere are a few tweets that were sent out by the NPD highlighting their high-five program with kids:Today was High-5 Friday at Bridge St School! Thanks to everyone who participated! The kids and officers all had fun! #highfive pic.twitter.com/Trz0yoW3Qh Northampton Police (@NorthamptonPD) December 9, 2016Today was High-Five Friday! Thanks to Jackson St School for hosting! We hope that everyone had a great time! Happy Friday!! #highfive pic.twitter.com/MWY6JBlHlK Northampton Police (@NorthamptonPD) January 6, 2017Here is part of their Facebook explanation for doing away with the high-five program:This is the same Northampton Police Department by the way, that celebrated the great turn-out for the nasty women march that was really about protesting Trump and defending abortion. Does it make you feel any safer when you see a police department bragging about their promotion of lawless liberal politics?"
---
# BERT Tiny fine-tuned for fake news detection |
mrm8488/dilstilgpt2-finetuned-amazon-food-reviews | e78b56780a5e49a26552239cd2d0511059de9dfa | 2021-05-23T10:18:52.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | mrm8488 | null | mrm8488/dilstilgpt2-finetuned-amazon-food-reviews | 13 | null | transformers | 10,197 | Entry not found |
neuralspace-reverie/indic-transformers-hi-roberta | bd8da7eb1560f26b91f7de06d3c687bded57ce16 | 2021-05-20T18:48:28.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"hi",
"transformers",
"MaskedLM",
"Hindi",
"RoBERTa",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
]
| fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-hi-roberta | 13 | null | transformers | 10,198 | ---
language:
- hi
tags:
- MaskedLM
- Hindi
- RoBERTa
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Hindi RoBERTa
## Model description
This is a RoBERTa language model pre-trained on ~10 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-roberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-roberta')
text = "आपका स्वागत हैं"  # "Welcome" in Hindi
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 11, 768]
```
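As noted above, embeddings from this model can also be used for feature-based training. A minimal sketch of that pattern is shown below: it mean-pools the last hidden state into sentence vectors and feeds them to a scikit-learn classifier; the example sentences, labels and classifier choice are placeholders, not part of the original card.
```
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-roberta')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-roberta')

texts = ["आपका स्वागत हैं", "यह एक उदाहरण है"]  # placeholder Hindi sentences
labels = [0, 1]                                  # placeholder class labels

with torch.no_grad():
    enc = tokenizer(texts, padding=True, return_tensors='pt')
    hidden = model(**enc)[0]                      # [batch, seq_len, 768]
    mask = enc['attention_mask'].unsqueeze(-1)    # ignore padding when pooling
    feats = (hidden * mask).sum(1) / mask.sum(1)  # mean-pooled sentence vectors

clf = LogisticRegression().fit(feats.numpy(), labels)
```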
#### Limitations and bias
The original language model was trained with `PyTorch`, so using the `pytorch_model.bin` weights file is recommended. The `.h5` file for `TensorFlow` was generated manually with the commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
nihaldsouza1/yelp-rating-classification | 2360a7d5df325ba5a47033cb0807eb2550e72d23 | 2022-02-10T02:51:54.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:nihaldsouza1/autonlp-data-yelp-rating-classification",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | nihaldsouza1 | null | nihaldsouza1/yelp-rating-classification | 13 | 1 | transformers | 10,199 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- nihaldsouza1/autonlp-data-yelp-rating-classification
co2_eq_emissions: 15.62335109262394
---
# Custom-trained user model
- Problem type: Multi-class Classification
- Model ID: 545015430
- CO2 Emissions (in grams): 15.62335109262394
## Validation Metrics
- Loss: 0.7870086431503296
- Accuracy: 0.6631428571428571
- Macro F1: 0.6613073053700258
- Micro F1: 0.6631428571428571
- Weighted F1: 0.661157273964887
- Macro Precision: 0.6626911151999393
- Micro Precision: 0.6631428571428571
- Weighted Precision: 0.662191421927851
- Macro Recall: 0.6629735627465572
- Micro Recall: 0.6631428571428571
- Weighted Recall: 0.6631428571428571
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/nihaldsouza1/autonlp-yelp-rating-classification-545015430
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nihaldsouza1/autonlp-yelp-rating-classification-545015430", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("nihaldsouza1/autonlp-yelp-rating-classification-545015430", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
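# map the logits to a predicted rating; the 1-5 star label names are an assumption, check model.config.id2label
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])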
``` |