modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
rdenadai/BR_BERTo | 237d5664883c2e96ae07053f3cd1657beb03caca | 2021-05-20T19:53:44.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"pt",
"transformers",
"portuguese",
"brazil",
"pt_BR",
"autotrain_compatible"
] | fill-mask | false | rdenadai | null | rdenadai/BR_BERTo | 350 | 1 | transformers | 2,700 | ---
language: pt
tags:
- portuguese
- brazil
- pt_BR
widget:
- text: gostei muito dessa <mask>
---
# BR_BERTo
Portuguese (Brazil) RoBERTa model for masked-token prediction (fill-mask).
## Params
Trained on a corpus of 6_993_330 sentences.
- Vocab size: 150_000
- RobertaForMaskedLM size: 512
- Num train epochs: 3
- Time to train: ~10 days (on GCP with an NVIDIA T4)
I followed the great tutorial from the HuggingFace team:
[How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train)
More info here:
[BR_BERTo](https://github.com/rdenadai/BR-BERTo)
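A minimal usage sketch (not from the original card; it uses the standard 🤗 fill-mask pipeline and the `<mask>` token shown in the widget above):
```python
from transformers import pipeline

# Load BR_BERTo as a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="rdenadai/BR_BERTo")

# RoBERTa-style models use "<mask>" as the mask token, as in the widget example.
for pred in fill_mask("gostei muito dessa <mask>"):
    print(pred["token_str"], pred["score"])
```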
|
aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 | 63acc43bab8617ad96b6a9cc35760802ba495fa1 | 2022-03-30T01:41:47.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:cifar10",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | aaraki | null | aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 | 350 | null | transformers | 2,701 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9788
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2564
- Accuracy: 0.9788
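As a usage illustration (not part of the auto-generated card; it assumes the checkpoint ships the CIFAR-10 label mapping in its config, and `path/to/image.png` is a placeholder for a local file):
```python
from transformers import pipeline

# Image-classification pipeline backed by the fine-tuned ViT checkpoint.
classifier = pipeline(
    "image-classification",
    model="aaraki/vit-base-patch16-224-in21k-finetuned-cifar10",
)

# Accepts a local path, a URL, or a PIL.Image; returns the top labels with scores.
print(classifier("path/to/image.png", top_k=3))
```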
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the illustrative sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
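A sketch of how these values map onto 🤗 `TrainingArguments` (illustrative only, not the author's exact script; the output directory name is a placeholder, and the Adam betas/epsilon above are the library defaults):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-finetuned-cifar10",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size on one device
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```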
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4291 | 1.0 | 390 | 0.2564 | 0.9788 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ccdv/lsg-bart-base-16384-arxiv | 78a89c0598964f6397cf043db625c18f69d12882 | 2022-07-25T05:30:14.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:scientific_papers",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ccdv | null | ccdv/lsg-bart-base-16384-arxiv | 350 | null | transformers | 2,702 | ---
language:
- en
tags:
- summarization
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-16384-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**This model relies on a custom modeling file; you need to add `trust_remote_code=True`.**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384-arxiv", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-16384-arxiv", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-16384-arxiv
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096-arxiv](https://huggingface.co/ccdv/lsg-bart-base-4096-arxiv) on the [scientific_papers arxiv](https://huggingface.co/datasets/scientific_papers) dataset. \
The model was converted to handle sequences up to 16,384 tokens long and fine-tuned accordingly for 1 epoch. \
It achieves the following results on the test set:
| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------- |:----------- |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 16384 | 64 | Full | 256 | 0 | 768 | 48.74 | 20.88 | 28.50 | 44.23 |
| 16384 | 1 | Full | 256 | 0 | 768 | 48.66 | 20.92 | 28.50 | 44.18 |
| 16384 | 64 | Global only | 256 | 0 | 768 | 48.08 | 20.42 | 28.00 | 43.65 |
| 16384 | 1 | None | 256 | 0 | 768 | 47.03 | 20.19 | 28.26 | 42.69 |
Reference model:
| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------- |:----------- |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | 1 | - | 256 | 0 | 768 | 46.65 | 18.91 | 26.90 | 42.18 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about 145 million parameters (6 encoder layers, 6 decoder layers). \
The model is warm-started from [ccdv/lsg-bart-base-4096-arxiv](https://huggingface.co/ccdv/lsg-bart-base-4096-arxiv), converted to handle long sequences (encoder only) and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: scientific_papers
- dataset_config_name: arxiv
- eval_batch_size: 4
- eval_samples: 6440
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 320
- min_length: 32
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
google/long-t5-tglobal-xl | 801939bf36c52822f8f4dca7cb3b732ba2f70652 | 2022-06-22T09:05:18.000Z | [
"pytorch",
"jax",
"longt5",
"text2text-generation",
"en",
"arxiv:2112.07916",
"arxiv:1912.08777",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/long-t5-tglobal-xl | 350 | null | transformers | 2,703 | ---
license: apache-2.0
language: en
---
# LongT5 (transient-global attention, XL-sized model)
LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x).
Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 is an extension of the [T5 model](https://arxiv.org/pdf/1910.10683.pdf) that enables using one of two efficient attention mechanisms: (1) local attention or (2) transient-global attention. The use of attention sparsity patterns allows the model to efficiently handle long input sequences.
LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens).
## Intended uses & limitations
The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you.
### How to use
```python
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-xl")
model = LongT5Model.from_pretrained("google/long-t5-tglobal-xl")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
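Since the checkpoint is intended to be fine-tuned for long-input generation, a generation-style sketch may be more representative (illustrative only: the raw pre-trained weights will not produce useful summaries without fine-tuning, and the input string is a placeholder):
```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-xl")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-xl")

# LongT5 is designed for long inputs (up to 16,384 tokens).
inputs = tokenizer("A very long document ...", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```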
### BibTeX entry and citation info
```bibtex
@article{guo2021longt5,
title={LongT5: Efficient Text-To-Text Transformer for Long Sequences},
author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei},
journal={arXiv preprint arXiv:2112.07916},
year={2021}
}
``` |
JorisCos/DPTNet_Libri1Mix_enhsingle_16k | 935f441c53e44d40ca0cf138f71e850defc8bea5 | 2021-09-23T15:49:20.000Z | [
"pytorch",
"dataset:Libri1Mix",
"dataset:enh_single",
"asteroid",
"audio",
"DPTNet",
"audio-to-audio",
"license:cc-by-sa-4.0"
] | audio-to-audio | false | JorisCos | null | JorisCos/DPTNet_Libri1Mix_enhsingle_16k | 349 | null | asteroid | 2,704 | ---
tags:
- asteroid
- audio
- DPTNet
- audio-to-audio
datasets:
- Libri1Mix
- enh_single
license: cc-by-sa-4.0
---
## Asteroid model `JorisCos/DPTNet_Libri1Mix_enhsingle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 16
n_filters: 64
stride: 8
masknet:
bidirectional: true
chunk_size: 100
dropout: 0
ff_activation: relu
ff_hid: 256
hop_size: 50
in_chan: 64
mask_act: sigmoid
n_repeats: 2
n_src: 1
norm_type: gLN
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
scheduler:
d_model: 64
steps_per_epoch: 10000
training:
batch_size: 4
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On the Libri1Mix min test set:
```yml
si_sdr: 14.829670037349064
si_sdr_imp: 11.379888731489366
sdr: 15.395712644737149
sdr_imp: 11.893049845524112
sir: Infinity
sir_imp: NaN
sar: 15.395712644737149
sar_imp: 11.893049845524112
stoi: 0.9301948391058859
stoi_imp: 0.13427501556534832
```
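A usage sketch (not from the original card; it assumes Asteroid's `BaseModel.from_pretrained` Hub integration and its file-level `separate` helper, and `noisy_speech.wav` is a placeholder for a local 16 kHz mono recording):
```python
from asteroid.models import BaseModel

# Load the pretrained DPTNet enhancement model from the Hugging Face Hub.
model = BaseModel.from_pretrained("JorisCos/DPTNet_Libri1Mix_enhsingle_16k")

# Enhance a local 16 kHz wav file; Asteroid writes the estimated source(s) next to the input.
model.separate("noisy_speech.wav")
```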
License notice:
This work "DPTNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPTNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino |
google/tapas-base-finetuned-sqa | 81916d20eef75766aeae71b9487fd615017b0413 | 2021-11-29T11:41:09.000Z | [
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:msr_sqa",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"license:apache-2.0"
] | table-question-answering | false | google | null | google/tapas-base-finetuned-sqa | 349 | null | transformers | 2,705 | ---
language: en
tags:
- tapas
- table-question-answering
license: apache-2.0
datasets:
- msr_sqa
---
# TAPAS base model fine-tuned on Sequential Question Answering (SQA)
This model has 2 versions which can be used. The default version corresponds to the `tapas_sqa_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results on SQA - Dev Accuracy
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.7223 | [tapas-large-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/no_reset)
LARGE | reset | 0.7289 | [tapas-large-finetuned-sqa](https://huggingface.co/google/tapas-large-finetuned-sqa/tree/main)
**BASE** | **noreset** | **0.6737** | [tapas-base-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/no_reset)
**BASE** | **reset** | **0.6874** | [tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa/tree/main)
MEDIUM | noreset | 0.6464 | [tapas-medium-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/no_reset)
MEDIUM | reset | 0.6561 | [tapas-medium-finetuned-sqa](https://huggingface.co/google/tapas-medium-finetuned-sqa/tree/main)
SMALL | noreset | 0.5876 | [tapas-small-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/no_reset)
SMALL | reset | 0.6155 | [tapas-small-finetuned-sqa](https://huggingface.co/google/tapas-small-finetuned-sqa/tree/main)
MINI | noreset | 0.4574 | [tapas-mini-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/no_reset)
MINI | reset | 0.5148 | [tapas-mini-finetuned-sqa](https://huggingface.co/google/tapas-mini-finetuned-sqa/tree/main)
TINY | noreset | 0.2004 | [tapas-tiny-finetuned-sqa (absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/no_reset)
TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google/tapas-tiny-finetuned-sqa/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head on top of the pre-trained model, and then jointly
training this randomly initialized classification head with the base model on SQA.
## Intended uses & limitations
You can use this model for answering questions related to a table in a conversational set-up.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
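As a quick illustration (a minimal sketch, not from the original card; the table content is invented, and the pipeline needs pandas installed):
```python
from transformers import pipeline

# Table question answering with the fine-tuned TAPAS checkpoint.
tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-sqa")

# All cell values are passed as strings; the pipeline converts this dict to a table.
table = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}
print(tqa(table=table, query="How many movies has George Clooney played in?"))
```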
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
In this setup, fine-tuning takes around 20 hours. The optimizer used is Adam with a learning rate of 1.25e-5, and a warmup ratio
of 0.2. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See also table 12 of the [original paper](https://arxiv.org/abs/2004.02349).
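A small sketch of where that inductive bias lives in code (illustrative only; the `num_aggregation_labels` comment reflects the usual SQA setup and is an assumption, not something stated in this card):
```python
from transformers import TapasConfig

# The SQA checkpoint's config encodes the cell-selection inductive bias.
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-sqa")
print(config.select_one_column)       # True: cells may only come from a single column
print(config.num_aggregation_labels)  # expected 0: SQA uses no aggregation head
```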
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@InProceedings{iyyer2017search-based,
author = {Iyyer, Mohit and Yih, Scott Wen-tau and Chang, Ming-Wei},
title = {Search-based Neural Structured Learning for Sequential Question Answering},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
year = {2017},
month = {July},
abstract = {Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.},
publisher = {Association for Computational Linguistics},
url = {https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/},
}
``` |
uer/bart-large-chinese-cluecorpussmall | 8d01a28b6006982817bf35f3fe3f5c989ca0419e | 2022-07-15T08:17:29.000Z | [
"pytorch",
"tf",
"bart",
"text2text-generation",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | uer | null | uer/bart-large-chinese-cluecorpussmall | 349 | null | transformers | 2,706 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "作为电子[MASK]的平台,京东绝对是领先者。如今的刘强[MASK]已经是身价过[MASK]的老板。"
---
# Chinese BART
## Model description
This model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
You can download the set of Chinese BART models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| ----------------- | :----------------------------: |
| **BART-Base** | [**L=6/H=768 (Base)**][base] |
| **BART-Large** | [**L=12/H=1024 (Large)**][large] |
## How to use
You can use this model directly with a pipeline for text2text generation (taking BART-Base as an example):
```python
>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/bart-base-chinese-cluecorpussmall")
>>> model = BartForConditionalGeneration.from_pretrained("uer/bart-base-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是[MASK]京", max_length=50, do_sample=False)
[{'generated_text': '中 国 的 首 都 是 北 京'}]
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 1,000,000 steps with a sequence length of 512.
Taking BART-Base as an example:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_bart_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--data_processor bart
```
```
python3 pretrain.py --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bart/base_config.json \
--output_model_path models/cluecorpussmall_bart_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 5e-5 --batch_size 8 \
--span_masking --span_max_length 3
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bart_from_uer_to_huggingface.py --input_model_path cluecorpussmall_bart_base_seq512_model.bin-1000000 \
--output_model_path pytorch_model.bin \
--layers_num 6
```
### BibTeX entry and citation info
```
@article{lewis2019bart,
title={Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension},
author={Lewis, Mike and Liu, Yinhan and Goyal, Naman and Ghazvininejad, Marjan and Mohamed, Abdelrahman and Levy, Omer and Stoyanov, Ves and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:1910.13461},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[base]:https://huggingface.co/uer/bart-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/bart-large-chinese-cluecorpussmall |
Robinsd/HarryBot4 | 5208e76c90a28b21aeaa9fe50d7033cbd9f8638f | 2022-05-17T08:13:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Robinsd | null | Robinsd/HarryBot4 | 349 | null | transformers | 2,707 | ---
tags:
- conversational
---
# harrypotter V2 |
AJ/rick-discord-bot | 31fec11b7ffa06a6398c78e5bf0a452efd2e8746 | 2021-09-27T01:03:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"humor"
] | conversational | false | AJ | null | AJ/rick-discord-bot | 348 | null | transformers | 2,708 | ---
tags:
- conversational
- humor
---
# It's Rick from Rick and Morty |
responsibility-framing/predict-perception-xlmr-cause-human | 5eefabc15e0fe6e87b32a980816cb05b05084a72 | 2022-03-15T22:58:24.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-cause-human | 348 | null | transformers | 2,709 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-cause-human
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-cause-human
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7632
- Rmse: 1.2675
- Rmse Cause::a Causata da un essere umano: 1.2675
- Mae: 0.9299
- Mae Cause::a Causata da un essere umano: 0.9299
- R2: 0.4188
- R2 Cause::a Causata da un essere umano: 0.4188
- Cos: 0.3913
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.4082
- Rsa: nan
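A usage sketch (not part of the auto-generated card; it assumes the checkpoint exposes a single regression output through its sequence-classification head, as the RMSE/MAE/R2 metrics above suggest, and the Italian example sentence is invented):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "responsibility-framing/predict-perception-xlmr-cause-human"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score how strongly a passage is perceived as describing a human-caused event.
inputs = tokenizer("Il guidatore ha perso il controllo dell'auto.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```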
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un essere umano | Mae | Mae Cause::a Causata da un essere umano | R2 | R2 Cause::a Causata da un essere umano | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------------------:|:------:|:---------------------------------------:|:-------:|:--------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0174 | 1.0 | 15 | 1.3796 | 1.7041 | 1.7041 | 1.3614 | 1.3614 | -0.0506 | -0.0506 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 0.9534 | 2.0 | 30 | 1.1173 | 1.5336 | 1.5336 | 1.2624 | 1.2624 | 0.1491 | 0.1491 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.8883 | 3.0 | 45 | 1.0580 | 1.4923 | 1.4923 | 1.2451 | 1.2451 | 0.1943 | 0.1943 | 0.5652 | 0.0 | 0.5 | 0.4957 | nan |
| 0.8215 | 4.0 | 60 | 1.0200 | 1.4653 | 1.4653 | 1.2087 | 1.2087 | 0.2232 | 0.2232 | 0.6522 | 0.0 | 0.5 | 0.5123 | nan |
| 0.744 | 5.0 | 75 | 1.1496 | 1.5556 | 1.5556 | 1.2573 | 1.2573 | 0.1245 | 0.1245 | 0.2174 | 0.0 | 0.5 | 0.3007 | nan |
| 0.7056 | 6.0 | 90 | 0.9641 | 1.4246 | 1.4246 | 1.1763 | 1.1763 | 0.2658 | 0.2658 | 0.4783 | 0.0 | 0.5 | 0.3619 | nan |
| 0.6136 | 7.0 | 105 | 0.8328 | 1.3240 | 1.3240 | 1.0948 | 1.0948 | 0.3658 | 0.3658 | 0.4783 | 0.0 | 0.5 | 0.3628 | nan |
| 0.5185 | 8.0 | 120 | 0.6890 | 1.2043 | 1.2043 | 1.0112 | 1.0112 | 0.4753 | 0.4753 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.5029 | 9.0 | 135 | 1.0380 | 1.4782 | 1.4782 | 1.1215 | 1.1215 | 0.2095 | 0.2095 | 0.3913 | 0.0 | 0.5 | 0.3781 | nan |
| 0.4624 | 10.0 | 150 | 1.1780 | 1.5747 | 1.5747 | 1.2852 | 1.2852 | 0.1029 | 0.1029 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.4098 | 11.0 | 165 | 0.8714 | 1.3544 | 1.3544 | 1.1388 | 1.1388 | 0.3364 | 0.3364 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.348 | 12.0 | 180 | 0.7260 | 1.2362 | 1.2362 | 0.9563 | 0.9563 | 0.4471 | 0.4471 | 0.5652 | 0.0 | 0.5 | 0.4957 | nan |
| 0.3437 | 13.0 | 195 | 0.7241 | 1.2346 | 1.2346 | 0.8998 | 0.8998 | 0.4485 | 0.4485 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.2727 | 14.0 | 210 | 0.9070 | 1.3818 | 1.3818 | 1.1145 | 1.1145 | 0.3093 | 0.3093 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.2762 | 15.0 | 225 | 0.7280 | 1.2380 | 1.2380 | 0.9210 | 0.9210 | 0.4456 | 0.4456 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2396 | 16.0 | 240 | 0.7921 | 1.2912 | 1.2912 | 0.9738 | 0.9738 | 0.3968 | 0.3968 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1955 | 17.0 | 255 | 0.8368 | 1.3272 | 1.3272 | 0.9717 | 0.9717 | 0.3627 | 0.3627 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1928 | 18.0 | 270 | 0.7782 | 1.2799 | 1.2799 | 0.9615 | 0.9615 | 0.4073 | 0.4073 | 0.3043 | 0.0 | 0.5 | 0.3768 | nan |
| 0.1893 | 19.0 | 285 | 0.7594 | 1.2644 | 1.2644 | 0.9441 | 0.9441 | 0.4216 | 0.4216 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2111 | 20.0 | 300 | 0.7230 | 1.2336 | 1.2336 | 0.8953 | 0.8953 | 0.4494 | 0.4494 | 0.3913 | 0.0 | 0.5 | 0.3787 | nan |
| 0.193 | 21.0 | 315 | 0.7836 | 1.2843 | 1.2843 | 0.9577 | 0.9577 | 0.4033 | 0.4033 | 0.3043 | 0.0 | 0.5 | 0.3768 | nan |
| 0.1649 | 22.0 | 330 | 0.7248 | 1.2352 | 1.2352 | 0.9133 | 0.9133 | 0.4480 | 0.4480 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2182 | 23.0 | 345 | 0.7608 | 1.2655 | 1.2655 | 0.9435 | 0.9435 | 0.4206 | 0.4206 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1534 | 24.0 | 360 | 0.7447 | 1.2520 | 1.2520 | 0.9277 | 0.9277 | 0.4329 | 0.4329 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1362 | 25.0 | 375 | 0.7437 | 1.2512 | 1.2512 | 0.9236 | 0.9236 | 0.4336 | 0.4336 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1391 | 26.0 | 390 | 0.7301 | 1.2397 | 1.2397 | 0.9182 | 0.9182 | 0.4440 | 0.4440 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1679 | 27.0 | 405 | 0.7748 | 1.2770 | 1.2770 | 0.9619 | 0.9619 | 0.4100 | 0.4100 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1491 | 28.0 | 420 | 0.7415 | 1.2493 | 1.2493 | 0.9097 | 0.9097 | 0.4353 | 0.4353 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1559 | 29.0 | 435 | 0.7525 | 1.2586 | 1.2586 | 0.9189 | 0.9189 | 0.4269 | 0.4269 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1784 | 30.0 | 450 | 0.7632 | 1.2675 | 1.2675 | 0.9299 | 0.9299 | 0.4188 | 0.4188 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
NlpHUST/gpt2-vietnamese | 65818d14816b42be09e2201933bf07106d9a2647 | 2022-06-02T04:02:44.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"vi",
"dataset:oscar",
"transformers",
"vietnamese",
"lm",
"nlp"
] | text-generation | false | NlpHUST | null | NlpHUST/gpt2-vietnamese | 348 | null | transformers | 2,710 | ---
language: vi
tags:
- vi
- vietnamese
- gpt2
- text-generation
- lm
- nlp
datasets:
- oscar
widget:
- text: "Việt Nam là quốc gia có"
---
# GPT-2
GPT-2 model pretrained on the Vietnamese language using a causal language modeling (CLM) objective. The GPT-2 architecture was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
# How to use the model
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('NlpHUST/gpt2-vietnamese')
model = GPT2LMHeadModel.from_pretrained('NlpHUST/gpt2-vietnamese')
text = "Việt Nam là quốc gia có"
input_ids = tokenizer.encode(text, return_tensors='pt')
max_length = 100
sample_outputs = model.generate(input_ids,pad_token_id=tokenizer.eos_token_id,
do_sample=True,
max_length=max_length,
min_length=max_length,
top_k=40,
num_beams=5,
early_stopping=True,
no_repeat_ngram_size=2,
num_return_sequences=3)
for i, sample_output in enumerate(sample_outputs):
print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist())))
print('\n---')
```
```bash
>> Generated text 1
Việt Nam là quốc gia có nền kinh tế hàng đầu thế giới về sản xuất, chế biến và tiêu thụ các sản phẩm nông sản, thủy sản. Tuy nhiên, trong những năm gần đây, nông nghiệp Việt Nam đang phải đối mặt với nhiều khó khăn, thách thức, đặc biệt là những tác động tiêu cực của biến đổi khí hậu.
Theo số liệu của Tổng cục Thống kê, tính đến cuối năm 2015, tổng diện tích gieo trồng, sản lượng lương thực, thực phẩm cả
---
>> Generated text 2
Việt Nam là quốc gia có nền kinh tế thị trường định hướng xã hội chủ nghĩa, có vai trò rất quan trọng đối với sự phát triển bền vững của đất nước. Do đó, trong quá trình đổi mới và hội nhập quốc tế, Việt Nam đã và đang phải đối mặt với không ít khó khăn, thách thức, đòi hỏi phải có những chủ trương, chính sách đúng đắn, kịp thời, phù hợp với tình hình thực tế. Để thực hiện thắng lợi mục tiêu, nhiệm vụ
---
>> Generated text 3
Việt Nam là quốc gia có nền kinh tế thị trường phát triển theo định hướng xã hội chủ nghĩa. Trong quá trình đổi mới và hội nhập quốc tế hiện nay, Việt Nam đang phải đối mặt với nhiều khó khăn, thách thức, đòi hỏi phải có những giải pháp đồng bộ, hiệu quả và phù hợp với tình hình thực tế của đất nước. Để thực hiện thắng lợi mục tiêu, nhiệm vụ mà Nghị quyết Đại hội XI của Đảng đề ra, Đảng và Nhà nước đã ban hành
---
```
# Model architecture
A 12-layer, 768-hidden-size transformer-based language model.
# Training
The model was trained on the Vietnamese OSCAR dataset (32 GB) to optimize a traditional language modelling objective on a v3-8 TPU for around 6 days. It reaches a perplexity of around 13.4 on a validation set drawn from OSCAR.
### GPT-2 Finetuning
The following example fine-tunes GPT-2 on WikiText-2. We're using the raw WikiText-2.
The script is available [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py).
```bash
python run_clm.py \
--model_name_or_path NlpHUST/gpt2-vietnamese \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
CenIA/distillbert-base-spanish-uncased | 8b0f77825ae49a0d099bf5e3aea8da71f6c0851f | 2022-04-28T19:56:51.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA",
"autotrain_compatible"
] | fill-mask | false | CenIA | null | CenIA/distillbert-base-spanish-uncased | 347 | 2 | transformers | 2,711 | ---
language:
- es
tags:
- distilbert
- spanish
- OpenCENIA
datasets:
- large_spanish_corpus
--- |
Mandy/DialoGPT-small-Mikasa | 787c864226cb0c2e212bbdd4ec97b526fd8342e6 | 2021-08-31T01:12:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Mandy | null | Mandy/DialoGPT-small-Mikasa | 347 | null | transformers | 2,712 | ---
tags:
- conversational
---
#Mikasa Ackermann DialoGPT Model |
binwang/bert-base-nli-stsb | 18cb07f9e817bfea4db656cb3a917e74523bc4ab | 2021-05-19T12:39:50.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | binwang | null | binwang/bert-base-nli-stsb | 347 | null | transformers | 2,713 | Entry not found |
yhavinga/gpt2-medium-dutch | f8678465e1ac9f48e45d7dd21711dd4620813550 | 2022-03-20T10:20:11.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"nl",
"dataset:yhavinga/mc4_nl_cleaned",
"transformers",
"gpt2-medium"
] | text-generation | false | yhavinga | null | yhavinga/gpt2-medium-dutch | 347 | null | transformers | 2,714 | ---
language: nl
widget:
- text: "In het jaar 2030 zullen we"
- text: "Toen ik gisteren volledig in de ban was van"
- text: "Studenten en leraren van de Bogazici Universiteit in de Turkse stad Istanbul"
- text: "In Israël was een strenge lockdown"
tags:
- gpt2-medium
- gpt2
pipeline_tag: text-generation
datasets:
- yhavinga/mc4_nl_cleaned
---
# GPT2-Medium pre-trained on cleaned Dutch mC4 🇳🇱
A GPT2 medium-sized model (345M parameters) trained from scratch on Dutch, with perplexity 15.1 on cleaned Dutch mC4.
## How To Use
You can use this GPT2-model directly with a pipeline for text generation.
```python
MODEL_DIR='yhavinga/gpt2-medium-dutch'
from transformers import pipeline, GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained(MODEL_DIR)
model = GPT2LMHeadModel.from_pretrained(MODEL_DIR)
generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100})
generated_text = generator('In Antwerpen heeft zich gisteren', max_length=100, do_sample=True, top_k=40, top_p=0.95, repetition_penalty=2.0)
```
*"In Antwerpen heeft zich gisteren" - " een dramatische ontknoping voorgedaan in de Vlaamse deelregering. De VLD, die sinds afgelopen woensdag aan het bewind is in Vlaams-Waals gebied (de zogenaamde gewestelijke en niet rechtstreeks met Vlaanderen samenwerkende gewesten), krijgt toch geen meerderheidszetels bij verkiezingen voor gemeenteraadsverkiezingen in oktober of november volgend jaar in Westmalle, Berchem, Tervuren enz., aldus premier Jean-Pierre Van Cauwenberghe van Wallonië vandaag"*
## Tokenizer
* BPE tokenizer trained from scratch for Dutch on mC4 nl cleaned with scripts from the Huggingface
Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
## Dataset
This model was trained on the `full` configuration (33B tokens) of
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
## Models
TL;DR: [yhavinga/gpt2-medium-dutch](https://huggingface.co/yhavinga/gpt2-medium-dutch) is the best model.
* The models with `a`/`b` in the step-column have been trained to step `a` of a total of `b` steps.
| | model | params | train seq len | ppl | loss | batch size | epochs | steps | optim | lr | duration | config |
|-----------------------------------------------------------------------------------|---------|--------|---------------|------|------|------------|--------|-----------------|-----------|--------|----------|-----------|
| [yhavinga/gpt-neo-125M-dutch](https://huggingface.co/yhavinga/gpt-neo-125M-dutch) | gpt neo | 125M | 512 | 20.9 | 3.04 | 128 | 1 | 190000/558608 | adam | 2.4e-3 | 1d 12h | full |
| [yhavinga/gpt2-medium-dutch](https://huggingface.co/yhavinga/gpt2-medium-dutch) | gpt2 | 345M | 512 | 15.1 | 2.71 | 128 | 1 | 320000/520502 | adam | 8e-4 | 7d 2h | full |
| [yhavinga/gpt2-large-dutch](https://huggingface.co/yhavinga/gpt2-large-dutch) | gpt2 | 762M | 512 | 15.1 | 2.72 | 32 | 1 | 1100000/2082009 | adafactor | 3.3e-5 | 8d 15h | large |
| [yhavinga/gpt-neo-1.3B-dutch](https://huggingface.co/yhavinga/gpt-neo-1.3B-dutch) | gpt neo | 1.3B | 512 | 16.0 | 2.77 | 16 | 1 | 960000/3049896 | adafactor | 5e-4 | 7d 11h | full |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was also
instrumental in most, if not all, parts of the training. The following repositories were helpful in setting up the TPU-VM,
and training the models:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [HuggingFace Flax MLM examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
* [gpt2-medium-persian](https://huggingface.co/flax-community/gpt2-medium-persian)
* [gpt2-medium-indonesian](https://huggingface.co/flax-community/gpt2-medium-indonesian)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
Jordine/shitter | 7b554c7a103d591d08747e0b982fdca36cb02340 | 2022-07-26T13:22:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Jordine | null | Jordine/shitter | 347 | null | transformers | 2,715 | |
AmazonScience/qanlu | 3e7306005b52648b86a7cef39b87932736fc88e5 | 2021-09-30T17:23:27.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:atis",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | question-answering | false | AmazonScience | null | AmazonScience/qanlu | 346 | 3 | transformers | 2,716 | ---
language: en
license: cc-by-4.0
widget:
- context: "Yes. No. I'm looking for a cheap flight to Boston."
datasets:
- atis
---
# Question Answering NLU
Question Answering NLU (QANLU) is an approach that maps the NLU task into question answering,
leveraging pre-trained question-answering models to perform well in few-shot settings. Instead of
training an intent classifier or a slot tagger, for example, we can ask the model intent- and
slot-related questions in natural language:
```
Context : Yes. No. I'm looking for a cheap flight to Boston.
Question: Is the user looking to book a flight?
Answer : Yes
Question: Is the user asking about departure time?
Answer : No
Question: What price is the user looking for?
Answer : cheap
Question: Where is the user flying from?
Answer : (empty)
```
Note the "Yes. No. " prepended in the context. Those are to allow the model to answer intent-related questions (e.g. "Is the user looking for a restaurant?").
Thus, by asking questions for each intent and slot in natural language, we can effectively construct an NLU hypothesis. For more details, please read the paper: [Language model is all you need: Natural language understanding as question answering](https://assets.amazon.science/33/ea/800419b24a09876601d8ab99bfb9/language-model-is-all-you-need-natural-language-understanding-as-question-answering.pdf).
## Model training
Instructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS are in the [Amazon Science repository](https://github.com/amazon-research/question-answering-nlu).
## Intended use and limitations
This model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this approach. For other domains or tasks, it should be further fine-tuned
on relevant data.
## Use in transformers:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
tokenizer = AutoTokenizer.from_pretrained("AmazonScience/qanlu", use_auth_token=True)
model = AutoModelForQuestionAnswering.from_pretrained("AmazonScience/qanlu", use_auth_token=True)
qa_pipeline = pipeline('question-answering', model=model, tokenizer=tokenizer)
qa_input = {
'context': 'Yes. No. I want a cheap flight to Boston.',
'question': 'What is the destination?'
}
answer = qa_pipeline(qa_input)
```
## Citation
If you use this work, please cite:
```
@inproceedings{namazifar2021language,
title={Language model is all you need: Natural language understanding as question answering},
author={Namazifar, Mahdi and Papangelis, Alexandros and Tur, Gokhan and Hakkani-T{\"u}r, Dilek},
booktitle={ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7803--7807},
year={2021},
organization={IEEE}
}
```
## License
This library is licensed under the CC BY NC License. |
microsoft/CodeGPT-small-java | 3facf5bba3ca89e505937f8d014c0d90b6fc1dc4 | 2021-05-23T08:59:22.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | microsoft | null | microsoft/CodeGPT-small-java | 346 | 2 | transformers | 2,717 | Entry not found |
Theivaprakasham/layoutlmv2-finetuned-sroie_mod | 44b6e673c47fbe314af8d67707da37c1a6e49e78 | 2022-02-28T09:50:47.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Theivaprakasham | null | Theivaprakasham/layoutlmv2-finetuned-sroie_mod | 346 | null | transformers | 2,718 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-sroie_mod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-sroie_mod
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.0+cu101
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-bert-blame-object | a3cf806e28fe0e73bdc9946c068fd0d8de57b8db | 2022-03-10T15:51:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-blame-object | 346 | null | transformers | 2,719 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-blame-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-blame-object
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5837
- Rmse: 0.5589
- Rmse Blame::a Un oggetto: 0.5589
- Mae: 0.3862
- Mae Blame::a Un oggetto: 0.3862
- R2: 0.2884
- R2 Blame::a Un oggetto: 0.2884
- Cos: 0.3913
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5024
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un oggetto | Mae | Mae Blame::a Un oggetto | R2 | R2 Blame::a Un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0603 | 1.0 | 15 | 0.8503 | 0.6745 | 0.6745 | 0.4386 | 0.4386 | -0.0365 | -0.0365 | 0.1304 | 0.0 | 0.5 | 0.5197 | nan |
| 0.9662 | 2.0 | 30 | 0.8510 | 0.6748 | 0.6748 | 0.4548 | 0.4548 | -0.0374 | -0.0374 | 0.0435 | 0.0 | 0.5 | 0.4840 | nan |
| 0.9438 | 3.0 | 45 | 0.7622 | 0.6386 | 0.6386 | 0.4541 | 0.4541 | 0.0709 | 0.0709 | 0.0435 | 0.0 | 0.5 | 0.4635 | nan |
| 0.9096 | 4.0 | 60 | 0.8301 | 0.6665 | 0.6665 | 0.4305 | 0.4305 | -0.0119 | -0.0119 | 0.0435 | 0.0 | 0.5 | 0.3499 | nan |
| 0.8383 | 5.0 | 75 | 0.7306 | 0.6252 | 0.6252 | 0.3814 | 0.3814 | 0.1094 | 0.1094 | 0.3043 | 0.0 | 0.5 | 0.5098 | nan |
| 0.7828 | 6.0 | 90 | 0.7434 | 0.6307 | 0.6307 | 0.4005 | 0.4005 | 0.0937 | 0.0937 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.7028 | 7.0 | 105 | 0.7218 | 0.6214 | 0.6214 | 0.4090 | 0.4090 | 0.1202 | 0.1202 | 0.3913 | 0.0 | 0.5 | 0.4470 | nan |
| 0.6661 | 8.0 | 120 | 0.7434 | 0.6307 | 0.6307 | 0.4042 | 0.4042 | 0.0938 | 0.0938 | 0.3913 | 0.0 | 0.5 | 0.4470 | nan |
| 0.578 | 9.0 | 135 | 0.7719 | 0.6426 | 0.6426 | 0.3975 | 0.3975 | 0.0591 | 0.0591 | 0.3913 | 0.0 | 0.5 | 0.4470 | nan |
| 0.544 | 10.0 | 150 | 0.7117 | 0.6171 | 0.6171 | 0.4126 | 0.4126 | 0.1324 | 0.1324 | 0.2174 | 0.0 | 0.5 | 0.3489 | nan |
| 0.4638 | 11.0 | 165 | 0.6683 | 0.5980 | 0.5980 | 0.3952 | 0.3952 | 0.1853 | 0.1853 | 0.3043 | 0.0 | 0.5 | 0.3989 | nan |
| 0.3998 | 12.0 | 180 | 0.6772 | 0.6019 | 0.6019 | 0.4201 | 0.4201 | 0.1745 | 0.1745 | 0.3043 | 0.0 | 0.5 | 0.3989 | nan |
| 0.3403 | 13.0 | 195 | 0.6576 | 0.5932 | 0.5932 | 0.4237 | 0.4237 | 0.1984 | 0.1984 | 0.2174 | 0.0 | 0.5 | 0.3491 | nan |
| 0.2839 | 14.0 | 210 | 0.6281 | 0.5797 | 0.5797 | 0.4208 | 0.4208 | 0.2344 | 0.2344 | 0.2174 | 0.0 | 0.5 | 0.3491 | nan |
| 0.2619 | 15.0 | 225 | 0.6254 | 0.5785 | 0.5785 | 0.3752 | 0.3752 | 0.2376 | 0.2376 | 0.3913 | 0.0 | 0.5 | 0.5756 | nan |
| 0.2175 | 16.0 | 240 | 0.6074 | 0.5701 | 0.5701 | 0.3985 | 0.3985 | 0.2596 | 0.2596 | 0.3043 | 0.0 | 0.5 | 0.4142 | nan |
| 0.1884 | 17.0 | 255 | 0.6045 | 0.5687 | 0.5687 | 0.4036 | 0.4036 | 0.2631 | 0.2631 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.1797 | 18.0 | 270 | 0.6038 | 0.5684 | 0.5684 | 0.3914 | 0.3914 | 0.2640 | 0.2640 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.1316 | 19.0 | 285 | 0.6199 | 0.5759 | 0.5759 | 0.4078 | 0.4078 | 0.2443 | 0.2443 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.1429 | 20.0 | 300 | 0.6119 | 0.5722 | 0.5722 | 0.3954 | 0.3954 | 0.2540 | 0.2540 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.1202 | 21.0 | 315 | 0.6193 | 0.5756 | 0.5756 | 0.3987 | 0.3987 | 0.2451 | 0.2451 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.1159 | 22.0 | 330 | 0.6218 | 0.5768 | 0.5768 | 0.3995 | 0.3995 | 0.2420 | 0.2420 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.1027 | 23.0 | 345 | 0.6207 | 0.5763 | 0.5763 | 0.4100 | 0.4100 | 0.2433 | 0.2433 | 0.3043 | 0.0 | 0.5 | 0.4142 | nan |
| 0.1006 | 24.0 | 360 | 0.5646 | 0.5496 | 0.5496 | 0.3687 | 0.3687 | 0.3117 | 0.3117 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.0902 | 25.0 | 375 | 0.5582 | 0.5465 | 0.5465 | 0.3714 | 0.3714 | 0.3196 | 0.3196 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.0901 | 26.0 | 390 | 0.5650 | 0.5498 | 0.5498 | 0.3704 | 0.3704 | 0.3112 | 0.3112 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.0937 | 27.0 | 405 | 0.5713 | 0.5529 | 0.5529 | 0.3735 | 0.3735 | 0.3036 | 0.3036 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.0812 | 28.0 | 420 | 0.5773 | 0.5558 | 0.5558 | 0.3759 | 0.3759 | 0.2962 | 0.2962 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.0911 | 29.0 | 435 | 0.5818 | 0.5579 | 0.5579 | 0.3832 | 0.3832 | 0.2908 | 0.2908 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
| 0.082 | 30.0 | 450 | 0.5837 | 0.5589 | 0.5589 | 0.3862 | 0.3862 | 0.2884 | 0.2884 | 0.3913 | 0.0 | 0.5 | 0.5024 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
hyunwoongko/asian-bart-ecjk | a9da2204e42df8afa450e8228255b1e109bc5c63 | 2021-04-01T07:36:52.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hyunwoongko | null | hyunwoongko/asian-bart-ecjk | 345 | null | transformers | 2,720 | Entry not found |
mukund/privbert | 48228b4661fa8252bdb39ca44a4d9758f6b37f88 | 2021-06-15T19:36:42.000Z | [
"pytorch",
"tf",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mukund | null | mukund/privbert | 345 | null | transformers | 2,721 | # PrivBERT
PrivBERT is a privacy policy language model. We pre-trained PrivBERT on ~1 million privacy policies starting with the pretrained Roberta model. The data is available at [https://privaseer.ist.psu.edu/data](https://privaseer.ist.psu.edu/data)
## Usage
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("mukund/privbert")
model = AutoModel.from_pretrained("mukund/privbert")
```
## License
If you use this dataset in research, you must cite the below paper.
```
Mukund Srinath, Shomir Wilson and C. Lee Giles. Privacy at Scale: Introducing the PrivaSeer Corpus of Web Privacy Policies. In Proc. ACL 2021.
```
For research, teaching, and scholarship purposes, the model is available under a CC BY-NC-SA license. Please contact us for any requests regarding commercial use.
|
pie/example-ner-spanclf-conll03 | 6e76efe7940b9a25b5983611aff93675b520adec | 2022-01-02T10:13:27.000Z | [
"pytorch",
"TransformerSpanClassificationModel",
"transformers"
] | null | false | pie | null | pie/example-ner-spanclf-conll03 | 345 | null | transformers | 2,722 | Entry not found |
CianB/DialoGPT-small-Shrek2 | 1a1a1c7fa6b18a048129229aaea15ce1a99102d3 | 2021-08-26T21:13:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CianB | null | CianB/DialoGPT-small-Shrek2 | 344 | null | transformers | 2,723 | ---
tags:
- conversational
---
# Shrek DialoGPT model |
responsibility-framing/predict-perception-bert-cause-human | 8295e50cf36524154cbcce57edefe2d6e87ccd03 | 2022-03-10T16:01:42.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-cause-human | 344 | null | transformers | 2,724 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-cause-human
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-human
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7139
- Rmse: 1.2259
- Rmse Cause::a Causata da un essere umano: 1.2259
- Mae: 1.0480
- Mae Cause::a Causata da un essere umano: 1.0480
- R2: 0.4563
- R2 Cause::a Causata da un essere umano: 0.4563
- Cos: 0.4783
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3953
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un essere umano | Mae | Mae Cause::a Causata da un essere umano | R2 | R2 Cause::a Causata da un essere umano | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------------------:|:------:|:---------------------------------------:|:------:|:--------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0874 | 1.0 | 15 | 1.2615 | 1.6296 | 1.6296 | 1.3836 | 1.3836 | 0.0393 | 0.0393 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.9577 | 2.0 | 30 | 1.1988 | 1.5886 | 1.5886 | 1.3017 | 1.3017 | 0.0870 | 0.0870 | 0.4783 | 0.0 | 0.5 | 0.3944 | nan |
| 0.8414 | 3.0 | 45 | 0.9870 | 1.4414 | 1.4414 | 1.1963 | 1.1963 | 0.2483 | 0.2483 | 0.3913 | 0.0 | 0.5 | 0.3048 | nan |
| 0.7291 | 4.0 | 60 | 0.9098 | 1.3839 | 1.3839 | 1.1297 | 1.1297 | 0.3071 | 0.3071 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.5949 | 5.0 | 75 | 0.9207 | 1.3921 | 1.3921 | 1.2079 | 1.2079 | 0.2988 | 0.2988 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.4938 | 6.0 | 90 | 0.8591 | 1.3448 | 1.3448 | 1.1842 | 1.1842 | 0.3458 | 0.3458 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.3611 | 7.0 | 105 | 0.8176 | 1.3119 | 1.3119 | 1.1454 | 1.1454 | 0.3774 | 0.3774 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.2663 | 8.0 | 120 | 0.6879 | 1.2034 | 1.2034 | 1.0300 | 1.0300 | 0.4761 | 0.4761 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.1833 | 9.0 | 135 | 0.7704 | 1.2735 | 1.2735 | 1.1031 | 1.1031 | 0.4133 | 0.4133 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.1704 | 10.0 | 150 | 0.7097 | 1.2222 | 1.2222 | 1.0382 | 1.0382 | 0.4596 | 0.4596 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.1219 | 11.0 | 165 | 0.6872 | 1.2027 | 1.2027 | 1.0198 | 1.0198 | 0.4767 | 0.4767 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.1011 | 12.0 | 180 | 0.7201 | 1.2312 | 1.2312 | 1.0466 | 1.0466 | 0.4516 | 0.4516 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.0849 | 13.0 | 195 | 0.7267 | 1.2368 | 1.2368 | 1.0454 | 1.0454 | 0.4466 | 0.4466 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0818 | 14.0 | 210 | 0.7361 | 1.2448 | 1.2448 | 1.0565 | 1.0565 | 0.4394 | 0.4394 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0634 | 15.0 | 225 | 0.7158 | 1.2275 | 1.2275 | 1.0384 | 1.0384 | 0.4549 | 0.4549 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.065 | 16.0 | 240 | 0.7394 | 1.2475 | 1.2475 | 1.0659 | 1.0659 | 0.4369 | 0.4369 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0541 | 17.0 | 255 | 0.7642 | 1.2683 | 1.2683 | 1.0496 | 1.0496 | 0.4181 | 0.4181 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0577 | 18.0 | 270 | 0.7137 | 1.2257 | 1.2257 | 1.0303 | 1.0303 | 0.4565 | 0.4565 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0474 | 19.0 | 285 | 0.7393 | 1.2475 | 1.2475 | 1.0447 | 1.0447 | 0.4370 | 0.4370 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0494 | 20.0 | 300 | 0.7157 | 1.2274 | 1.2274 | 1.0453 | 1.0453 | 0.4550 | 0.4550 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0434 | 21.0 | 315 | 0.7248 | 1.2352 | 1.2352 | 1.0462 | 1.0462 | 0.4480 | 0.4480 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.049 | 22.0 | 330 | 0.7384 | 1.2467 | 1.2467 | 1.0613 | 1.0613 | 0.4377 | 0.4377 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0405 | 23.0 | 345 | 0.7420 | 1.2498 | 1.2498 | 1.0653 | 1.0653 | 0.4349 | 0.4349 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0398 | 24.0 | 360 | 0.7355 | 1.2442 | 1.2442 | 1.0620 | 1.0620 | 0.4399 | 0.4399 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0398 | 25.0 | 375 | 0.7570 | 1.2623 | 1.2623 | 1.0698 | 1.0698 | 0.4235 | 0.4235 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0345 | 26.0 | 390 | 0.7359 | 1.2446 | 1.2446 | 1.0610 | 1.0610 | 0.4396 | 0.4396 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.0345 | 27.0 | 405 | 0.7417 | 1.2495 | 1.2495 | 1.0660 | 1.0660 | 0.4352 | 0.4352 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0386 | 28.0 | 420 | 0.7215 | 1.2323 | 1.2323 | 1.0514 | 1.0514 | 0.4506 | 0.4506 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0372 | 29.0 | 435 | 0.7140 | 1.2260 | 1.2260 | 1.0477 | 1.0477 | 0.4562 | 0.4562 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.0407 | 30.0 | 450 | 0.7139 | 1.2259 | 1.2259 | 1.0480 | 1.0480 | 0.4563 | 0.4563 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-bert-focus-concept | 95650d42f9092cd0427af7983158bdf5f9b26824 | 2022-03-10T16:23:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-focus-concept | 344 | null | transformers | 2,725 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-focus-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-focus-concept
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8129
- Rmse: 1.0197
- Rmse Focus::a Su un concetto astratto o un'emozione: 1.0197
- Mae: 0.7494
- Mae Focus::a Su un concetto astratto o un'emozione: 0.7494
- R2: 0.1970
- R2 Focus::a Su un concetto astratto o un'emozione: 0.1970
- Cos: 0.4783
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.4667
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un concetto astratto o un'emozione | Mae | Mae Focus::a Su un concetto astratto o un'emozione | R2 | R2 Focus::a Su un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------------------------------:|:------:|:--------------------------------------------------:|:-------:|:-------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.047 | 1.0 | 15 | 1.0199 | 1.1422 | 1.1422 | 0.9321 | 0.9321 | -0.0075 | -0.0075 | 0.1304 | 0.0 | 0.5 | 0.3199 | nan |
| 0.9914 | 2.0 | 30 | 0.9724 | 1.1153 | 1.1153 | 0.9407 | 0.9407 | 0.0393 | 0.0393 | 0.2174 | 0.0 | 0.5 | 0.3954 | nan |
| 0.9049 | 3.0 | 45 | 0.9406 | 1.0969 | 1.0969 | 0.9170 | 0.9170 | 0.0708 | 0.0708 | 0.2174 | 0.0 | 0.5 | 0.3632 | nan |
| 0.8826 | 4.0 | 60 | 0.8553 | 1.0460 | 1.0460 | 0.8570 | 0.8570 | 0.1551 | 0.1551 | 0.2174 | 0.0 | 0.5 | 0.3230 | nan |
| 0.7837 | 5.0 | 75 | 0.8324 | 1.0319 | 1.0319 | 0.8683 | 0.8683 | 0.1776 | 0.1776 | 0.2174 | 0.0 | 0.5 | 0.3419 | nan |
| 0.7013 | 6.0 | 90 | 0.7737 | 0.9949 | 0.9949 | 0.8150 | 0.8150 | 0.2356 | 0.2356 | 0.5652 | 0.0 | 0.5 | 0.5023 | nan |
| 0.6429 | 7.0 | 105 | 0.7832 | 1.0010 | 1.0010 | 0.8005 | 0.8005 | 0.2262 | 0.2262 | 0.3913 | 0.0 | 0.5 | 0.4446 | nan |
| 0.5526 | 8.0 | 120 | 0.7734 | 0.9946 | 0.9946 | 0.7704 | 0.7704 | 0.2360 | 0.2360 | 0.3043 | 0.0 | 0.5 | 0.2923 | nan |
| 0.5194 | 9.0 | 135 | 0.6624 | 0.9205 | 0.9205 | 0.7013 | 0.7013 | 0.3456 | 0.3456 | 0.3913 | 0.0 | 0.5 | 0.3523 | nan |
| 0.4278 | 10.0 | 150 | 0.8255 | 1.0276 | 1.0276 | 0.7351 | 0.7351 | 0.1845 | 0.1845 | 0.3043 | 0.0 | 0.5 | 0.4349 | nan |
| 0.3522 | 11.0 | 165 | 0.9340 | 1.0931 | 1.0931 | 0.8069 | 0.8069 | 0.0773 | 0.0773 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan |
| 0.314 | 12.0 | 180 | 0.7495 | 0.9792 | 0.9792 | 0.7254 | 0.7254 | 0.2596 | 0.2596 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan |
| 0.2665 | 13.0 | 195 | 0.8574 | 1.0473 | 1.0473 | 0.7678 | 0.7678 | 0.1530 | 0.1530 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan |
| 0.2348 | 14.0 | 210 | 0.7913 | 1.0061 | 1.0061 | 0.7218 | 0.7218 | 0.2183 | 0.2183 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan |
| 0.1859 | 15.0 | 225 | 0.8012 | 1.0124 | 1.0124 | 0.7162 | 0.7162 | 0.2085 | 0.2085 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan |
| 0.1373 | 16.0 | 240 | 0.8405 | 1.0369 | 1.0369 | 0.7318 | 0.7318 | 0.1697 | 0.1697 | 0.3043 | 0.0 | 0.5 | 0.3734 | nan |
| 0.1245 | 17.0 | 255 | 0.8398 | 1.0365 | 1.0365 | 0.7455 | 0.7455 | 0.1703 | 0.1703 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan |
| 0.1148 | 18.0 | 270 | 0.7948 | 1.0083 | 1.0083 | 0.7140 | 0.7140 | 0.2148 | 0.2148 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan |
| 0.1187 | 19.0 | 285 | 0.8301 | 1.0305 | 1.0305 | 0.7381 | 0.7381 | 0.1799 | 0.1799 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan |
| 0.1236 | 20.0 | 300 | 0.8867 | 1.0650 | 1.0650 | 0.7879 | 0.7879 | 0.1240 | 0.1240 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan |
| 0.1101 | 21.0 | 315 | 0.8405 | 1.0369 | 1.0369 | 0.7632 | 0.7632 | 0.1696 | 0.1696 | 0.3913 | 0.0 | 0.5 | 0.4059 | nan |
| 0.0902 | 22.0 | 330 | 0.7850 | 1.0021 | 1.0021 | 0.7173 | 0.7173 | 0.2245 | 0.2245 | 0.3043 | 0.0 | 0.5 | 0.3734 | nan |
| 0.093 | 23.0 | 345 | 0.7386 | 0.9720 | 0.9720 | 0.6960 | 0.6960 | 0.2704 | 0.2704 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan |
| 0.0846 | 24.0 | 360 | 0.7748 | 0.9956 | 0.9956 | 0.7150 | 0.7150 | 0.2345 | 0.2345 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan |
| 0.0826 | 25.0 | 375 | 0.7951 | 1.0085 | 1.0085 | 0.7230 | 0.7230 | 0.2145 | 0.2145 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan |
| 0.0749 | 26.0 | 390 | 0.8470 | 1.0409 | 1.0409 | 0.7621 | 0.7621 | 0.1633 | 0.1633 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan |
| 0.069 | 27.0 | 405 | 0.7968 | 1.0096 | 1.0096 | 0.7275 | 0.7275 | 0.2129 | 0.2129 | 0.3913 | 0.0 | 0.5 | 0.4175 | nan |
| 0.0775 | 28.0 | 420 | 0.8298 | 1.0303 | 1.0303 | 0.7589 | 0.7589 | 0.1802 | 0.1802 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan |
| 0.0783 | 29.0 | 435 | 0.8113 | 1.0188 | 1.0188 | 0.7469 | 0.7469 | 0.1985 | 0.1985 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan |
| 0.0773 | 30.0 | 450 | 0.8129 | 1.0197 | 1.0197 | 0.7494 | 0.7494 | 0.1970 | 0.1970 | 0.4783 | 0.0 | 0.5 | 0.4667 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Felipehonorato/storIA | 37e4997d0a6dbee5141e093243223d7e1ca54c5e | 2021-07-26T21:43:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Felipehonorato | null | Felipehonorato/storIA | 343 | null | transformers | 2,726 | Entry not found |
ainize/klue-bert-base-mrc | 497ad1a08619fb0d39b8d745115f705c9b503283 | 2021-11-16T01:38:03.000Z | [
"pytorch",
"bert",
"question-answering",
"ko",
"dataset:klue",
"transformers",
"mrc",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | question-answering | false | ainize | null | ainize/klue-bert-base-mrc | 343 | 2 | transformers | 2,727 | ---
language: ko
tags:
- bert
- mrc
datasets:
- klue
license: cc-by-sa-4.0
---
# bert-base for QA
**Code:** See [Ainize Workspace](https://link.ainize.ai/3FjvBVn)
**klue-bert-base-mrc DEMO**: [Ainize DEMO](https://main-klue-mrc-bert-scy6500.endpoint.ainize.ai/)
**klue-bert-base-mrc API**: [Ainize API](https://ainize.ai/scy6500/KLUE-MRC-BERT?branch=main)
## Overview
**Language model:** klue/bert-base
**Language:** Korean
**Downstream-task:** Extractive QA
**Training data:** KLUE-MRC
**Eval data:** KLUE-MRC
## Usage
### In Transformers
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-mrc")
model = AutoModelForQuestionAnswering.from_pretrained("ainize/klue-bert-base-mrc")
context = "your context"
question = "your question"
encodings = tokenizer(context, question, max_length=512, truncation=True,
padding="max_length", return_token_type_ids=False)
encodings = {key: torch.tensor([val]) for key, val in encodings.items()}
input_ids = encodings["input_ids"]
attention_mask = encodings["attention_mask"]
pred = model(input_ids, attention_mask=attention_mask)
start_logits, end_logits = pred.start_logits, pred.end_logits
token_start_index, token_end_index = start_logits.argmax(dim=-1), end_logits.argmax(dim=-1)
pred_ids = input_ids[0][token_start_index: token_end_index + 1]
prediction = tokenizer.decode(pred_ids)
```
## About us
[Teachable NLP](https://ainize.ai/teachable-nlp) - Train NLP models with your own text without writing any code
[Ainize](https://ainize.ai/) - Deploy ML projects using free GPUs
|
fran-martinez/scibert_scivocab_cased_ner_jnlpba | 1904782399ebd599671e5e654126deec44241f4a | 2021-05-19T16:56:50.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"scientific english",
"arxiv:1903.10676",
"transformers",
"autotrain_compatible"
] | token-classification | false | fran-martinez | null | fran-martinez/scibert_scivocab_cased_ner_jnlpba | 343 | null | transformers | 2,728 | ---
language: scientific english
---
# SciBERT finetuned on JNLPA for NER downstream task
## Language Model
[SciBERT](https://arxiv.org/pdf/1903.10676.pdf) is a pretrained language model based on BERT and trained by the
[Allen Institute for AI](https://allenai.org/) on papers from the corpus of
[Semantic Scholar](https://www.semanticscholar.org/).
Corpus size is 1.14M papers, 3.1B tokens. SciBERT has its own vocabulary (scivocab) that's built to best match
the training corpus.
## Downstream task
[`allenai/scibert_scivocab_cased`](https://huggingface.co/allenai/scibert_scivocab_cased#) has been finetuned for Named Entity
Recognition (NER) downstream task. The code to train the NER model can be found [here](https://github.com/fran-martinez/bio_ner_bert).
### Data
The corpus used to fine-tune the NER is [BioNLP / JNLPBA shared task](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004).
- Training data consist of 2,000 PubMed abstracts with term/word annotation. This corresponds to 18,546 samples (sentences).
- Evaluation data consist of 404 PubMed abstracts with term/word annotation. This corresponds to 3,856 samples (sentences).
The classes (at word level) and its distribution (number of examples for each class) for training and evaluation datasets are shown below:
| Class Label | # training examples| # evaluation examples|
|:--------------|--------------:|----------------:|
|O | 382,963 | 81,647 |
|B-protein | 30,269 | 5,067 |
|I-protein | 24,848 | 4,774 |
|B-cell_type | 6,718 | 1,921 |
|I-cell_type | 8,748 | 2,991 |
|B-DNA | 9,533 | 1,056 |
|I-DNA | 15,774 | 1,789 |
|B-cell_line | 3,830 | 500 |
|I-cell_line | 7,387 | 989 |
|B-RNA | 951 | 118 |
|I-RNA | 1,530 | 187 |
### Model
An exhaustive hyperparameter search was done.
The hyperparameters that provided the best results are:
- Max length sequence: 128
- Number of epochs: 6
- Batch size: 32
- Dropout: 0.3
- Optimizer: Adam
The learning rate was 5e-5 with a linearly decreasing schedule. A warmup was applied at the beginning of training,
with a warmup ratio of 0.1 of the total training steps.
The model from the epoch with the best F1-score was selected, in this case, the model from epoch 5.
### Evaluation
The following table shows the evaluation metrics calculated at span/entity level:
| | precision| recall| f1-score|
|:---------|-----------:|---------:|---------:|
cell_line | 0.5205 | 0.7100 | 0.6007 |
cell_type | 0.7736 | 0.7422 | 0.7576 |
protein | 0.6953 | 0.8459 | 0.7633 |
DNA | 0.6997 | 0.7894 | 0.7419 |
RNA | 0.6985 | 0.8051 | 0.7480 |
| | | |
**micro avg** | 0.6984 | 0.8076 | 0.7490|
**macro avg** | 0.7032 | 0.8076 | 0.7498 |
The macro F1-score is equal to 0.7498, compared to the value provided by the Allen Institute for AI in their
[paper](https://arxiv.org/pdf/1903.10676.pdf), which is equal to 0.7728. This drop in performance could be due to
several reasons, but one hypothesis could be the fact that the authors used an additional conditional random field,
while this model uses a regular classification layer with softmax activation on top of SciBERT model.
At word level, this model achieves a precision of 0.7742, a recall of 0.8536 and a F1-score of 0.8093.
### Model usage in inference
Use the pipeline:
````python
from transformers import pipeline
text = "Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes."
nlp_ner = pipeline("ner",
model='fran-martinez/scibert_scivocab_cased_ner_jnlpba',
tokenizer='fran-martinez/scibert_scivocab_cased_ner_jnlpba')
nlp_ner(text)
"""
Output:
---------------------------
[
{'word': 'glucocorticoid',
'score': 0.9894881248474121,
'entity': 'B-protein'},
{'word': 'receptor',
'score': 0.989505410194397,
'entity': 'I-protein'},
{'word': 'normal',
'score': 0.7680378556251526,
'entity': 'B-cell_type'},
{'word': 'cs',
'score': 0.5176806449890137,
'entity': 'I-cell_type'},
{'word': 'lymphocytes',
'score': 0.9898491501808167,
'entity': 'I-cell_type'}
]
"""
````
Or load model and tokenizer as follows:
````python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
# Example
text = "Mouse thymus was used as a source of glucocorticoid receptor from normal CS lymphocytes."
# Load model
tokenizer = AutoTokenizer.from_pretrained("fran-martinez/scibert_scivocab_cased_ner_jnlpba")
model = AutoModelForTokenClassification.from_pretrained("fran-martinez/scibert_scivocab_cased_ner_jnlpba")
# Get input for BERT
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
# Predict
with torch.no_grad():
outputs = model(input_ids)
# From the output let's take the first element of the tuple.
# Then, let's get rid of [CLS] and [SEP] tokens (first and last)
predictions = outputs[0].argmax(axis=-1)[0][1:-1]
# Map label class indexes to string labels.
for token, pred in zip(tokenizer.tokenize(text), predictions):
print(token, '->', model.config.id2label[pred.numpy().item()])
"""
Output:
---------------------------
mouse -> O
thymus -> O
was -> O
used -> O
as -> O
a -> O
source -> O
of -> O
glucocorticoid -> B-protein
receptor -> I-protein
from -> O
normal -> B-cell_type
cs -> I-cell_type
lymphocytes -> I-cell_type
. -> O
"""
````
|
Starry/HELLORUKAS | 727596aa2a695ede32aad385438ad2306b164ff3 | 2022-03-20T18:35:57.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Starry | null | Starry/HELLORUKAS | 343 | null | transformers | 2,729 | ---
tags:
- conversational
---
# DialoGPT model |
naver-clova-ix/donut-base-finetuned-cord-v2 | 4849e637cf6142b243c47a17d342387e90de82bc | 2022-07-19T02:45:59.000Z | [
"pytorch",
"donut",
"transformers",
"license:mit"
] | null | false | naver-clova-ix | null | naver-clova-ix/donut-base-finetuned-cord-v2 | 343 | 1 | transformers | 2,730 | ---
license: mit
---
|
CAMeL-Lab/bert-base-arabic-camelbert-da | 231698eab9ebf0ae7b518a64277b81b2fe829f2d | 2021-09-14T14:29:21.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-da | 342 | 5 | transformers | 2,731 | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
This model card describes **CAMeLBERT-DA** (`bert-base-arabic-camelbert-da`), a model pre-trained on the DA (dialectal Arabic) dataset.
||Model|Variant|Size|#Word|
|-|-|:-:|-:|-:|
||`bert-base-arabic-camelbert-mix`|CA,DA,MSA|167GB|17.3B|
||`bert-base-arabic-camelbert-ca`|CA|6GB|847M|
|✔|`bert-base-arabic-camelbert-da`|DA|54GB|5.8B|
||`bert-base-arabic-camelbert-msa`|MSA|107GB|12.6B|
||`bert-base-arabic-camelbert-msa-half`|MSA|53GB|6.3B|
||`bert-base-arabic-camelbert-msa-quarter`|MSA|27GB|3.1B|
||`bert-base-arabic-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-arabic-camelbert-msa-sixteenth`|MSA|6GB|746M|
## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code [here](https://github.com/CAMeL-Lab/CAMeLBERT).
#### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='CAMeL-Lab/bert-base-arabic-camelbert-da')
>>> unmasker("الهدف من الحياة هو [MASK] .")
[{'sequence': '[CLS] الهدف من الحياة هو.. [SEP]',
'score': 0.062508225440979,
'token': 18,
'token_str': '.'},
{'sequence': '[CLS] الهدف من الحياة هو الموت. [SEP]',
'score': 0.033172328025102615,
'token': 4295,
'token_str': 'الموت'},
{'sequence': '[CLS] الهدف من الحياة هو الحياة. [SEP]',
'score': 0.029575437307357788,
'token': 3696,
'token_str': 'الحياة'},
{'sequence': '[CLS] الهدف من الحياة هو الرحيل. [SEP]',
'score': 0.02724040113389492,
'token': 11449,
'token_str': 'الرحيل'},
{'sequence': '[CLS] الهدف من الحياة هو الحب. [SEP]',
'score': 0.01564178802073002,
'token': 3088,
'token_str': 'الحب'}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
model = AutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
model = TFAutoModel.from_pretrained('CAMeL-Lab/bert-base-arabic-camelbert-da')
text = "مرحبا يا عالم."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
- DA (dialectal Arabic)
- A collection of dialectal Arabic data described in [our paper](https://arxiv.org/abs/2103.06678).
## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified.
### Preprocessing
- After extracting the raw text from each corpus, we apply the following pre-processing.
- We first remove invalid characters and normalize white spaces using the utilities provided by [the original BERT implementation](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/tokenization.py#L286-L297).
- We also remove lines without any Arabic characters.
- We then remove diacritics and kashida using [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools).
- Finally, we split each line into sentences with a heuristics-based sentence segmenter.
- We train a WordPiece tokenizer on the entire dataset (167 GB text) with a vocabulary size of 30,000 using [HuggingFace's tokenizers](https://github.com/huggingface/tokenizers).
- We do not lowercase letters nor strip accents.
### Pre-training
- The model was trained on a single cloud TPU (`v3-8`) for one million steps in total.
- The first 90,000 steps were trained with a batch size of 1,024 and the rest was trained with a batch size of 256.
- The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%.
- We use whole word masking and a duplicate factor of 10.
- We set max predictions per sequence to 20 for the dataset with max sequence length of 128 tokens and 80 for the dataset with max sequence length of 512 tokens.
- We use a random seed of 12345, masked language model probability of 0.15, and short sequence probability of 0.1.
- The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
- We evaluate our pre-trained language models on five NLP tasks: NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
- We fine-tune and evaluate the models using 12 datasets.
- We used Hugging Face's transformers to fine-tune our CAMeLBERT models.
- We used transformers `v3.1.0` along with PyTorch `v1.5.1`.
- The fine-tuning was done by adding a fully connected linear layer to the last hidden state.
- We use \\(F_{1}\\) score as a metric for all tasks.
- Code used for fine-tuning is available [here](https://github.com/CAMeL-Lab/CAMeLBERT).
### Results
| Task | Dataset | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | --------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| NER | ANERcorp | MSA | 80.8% | 67.9% | 74.1% | 82.4% | 82.0% | 82.1% | 82.6% | 80.8% |
| POS | PATB (MSA) | MSA | 98.1% | 97.8% | 97.7% | 98.3% | 98.2% | 98.3% | 98.2% | 98.2% |
| | ARZTB (EGY) | DA | 93.6% | 92.3% | 92.7% | 93.6% | 93.6% | 93.7% | 93.6% | 93.6% |
| | Gumar (GLF) | DA | 97.3% | 97.7% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% | 97.9% |
| SA | ASTD | MSA | 76.3% | 69.4% | 74.6% | 76.9% | 76.0% | 76.8% | 76.7% | 75.3% |
| | ArSAS | MSA | 92.7% | 89.4% | 91.8% | 93.0% | 92.6% | 92.5% | 92.5% | 92.3% |
| | SemEval | MSA | 69.0% | 58.5% | 68.4% | 72.1% | 70.7% | 72.8% | 71.6% | 71.2% |
| DID | MADAR-26 | DA | 62.9% | 61.9% | 61.8% | 62.6% | 62.0% | 62.8% | 62.0% | 62.2% |
| | MADAR-6 | DA | 92.5% | 91.5% | 92.2% | 91.9% | 91.8% | 92.2% | 92.1% | 92.0% |
| | MADAR-Twitter-5 | MSA | 75.7% | 71.4% | 74.2% | 77.6% | 78.5% | 77.3% | 77.7% | 76.2% |
| | NADI | DA | 24.7% | 17.3% | 20.1% | 24.9% | 24.6% | 24.6% | 24.9% | 23.8% |
| Poetry | APCD | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
### Results (Average)
| | Variant | Mix | CA | DA | MSA | MSA-1/2 | MSA-1/4 | MSA-1/8 | MSA-1/16 |
| -------------------- | ------- | ----- | ----- | ----- | ----- | ------- | ------- | ------- | -------- |
| Variant-wise-average<sup>[[1]](#footnote-1)</sup> | MSA | 82.1% | 75.7% | 80.1% | 83.4% | 83.0% | 83.3% | 83.2% | 82.3% |
| | DA | 74.4% | 72.1% | 72.9% | 74.2% | 74.0% | 74.3% | 74.1% | 73.9% |
| | CA | 79.8% | 80.9% | 79.6% | 79.7% | 79.9% | 80.0% | 79.7% | 79.8% |
| Macro-Average | ALL | 78.7% | 74.7% | 77.1% | 79.2% | 79.0% | 79.2% | 79.1% | 78.6% |
<a name="footnote-1">[1]</a>: Variant-wise-average refers to average over a group of tasks in the same language variant.
## Acknowledgements
This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
EMBO/bio-lm | ad1b251544050545d0b91e294deed3b9ae97c189 | 2022-03-27T15:46:51.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"english",
"dataset:EMBO/biolang",
"transformers",
"language model",
"autotrain_compatible"
] | fill-mask | false | EMBO | null | EMBO/bio-lm | 342 | null | transformers | 2,732 | ---
language:
- english
thumbnail:
tags:
- language model
license:
datasets:
- EMBO/biolang
metrics:
-
---
# bio-lm
## Model description
This model is a [RoBERTa base pre-trained model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang).
## Intended uses & limitations
#### How to use
The intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.
To have a quick check of the model as-is in a fill-mask task:
```python
from transformers import pipeline, RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
text = "Let us try this model to see if it <mask>."
fill_mask = pipeline(
"fill-mask",
model='EMBO/bio-lm',
tokenizer=tokenizer
)
fill_mask(text)
```
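Since the intended use is fine-tuning for token classification, here is a minimal sketch of loading the checkpoint with a token-classification head; the number of labels is a placeholder and not part of the released model:

```python
from transformers import RobertaTokenizerFast, AutoModelForTokenClassification

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", max_len=512)
# a freshly initialized classification head is added on top of the pre-trained encoder
model = AutoModelForTokenClassification.from_pretrained("EMBO/bio-lm", num_labels=3)
# fine-tune `model` on your labeled token-classification dataset as usual
```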
#### Limitations and bias
This model should be fine-tuned on a specific task like token classification.
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained with a masked language modeling task on the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang), which includes 12 million examples from abstracts and figure legends extracted from papers published in the life sciences.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m lm.train /data/json/oapmc_abstracts_figs/ MLM`
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with: 12005390 examples
- Evaluating on: 36713 examples
- Epochs: 3.0
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766
End of training:
```
trainset: 'loss': 0.8653350830078125
validation set: 'eval_loss': 0.8192330598831177, 'eval_recall': 0.8154601116513597
```
## Eval results
Eval on test set:
```
recall: 0.814471959728645
```
|
MMG/xlm-roberta-large-ner-spanish | 340bd3924b6429c76354ada5c73517430a4184e1 | 2021-07-15T07:15:57.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"es",
"dataset:CoNLL-2002",
"transformers",
"autotrain_compatible"
] | token-classification | false | MMG | null | MMG/xlm-roberta-large-ner-spanish | 342 | 3 | transformers | 2,733 | ---
language:
- es
datasets:
- CoNLL-2002
widget:
- text: "Las oficinas de MMG están en Las Rozas."
---
# xlm-roberta-large-ner-spanish
This model is an XLM-RoBERTa-large model fine-tuned for Named Entity Recognition (NER) on the Spanish portion of the CoNLL-2002 dataset. Evaluated on the test subset of this dataset, it reaches an F1-score of 89.17, making it one of the best NER models for Spanish available at the moment.
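
A minimal usage sketch with the token-classification pipeline (the example sentence is the one from the widget above; the aggregation setting is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MMG/xlm-roberta-large-ner-spanish",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Las oficinas de MMG están en Las Rozas."))
```
 |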
Sindhu/rembert-squad2 | 51a7532be77e3a279fc74e4cc891bac955ef1efc | 2022-01-30T18:35:08.000Z | [
"pytorch",
"rembert",
"question-answering",
"multilingual",
"dataset:squad2",
"transformers",
"autotrain_compatible"
] | question-answering | false | Sindhu | null | Sindhu/rembert-squad2 | 342 | 2 | transformers | 2,734 | ---
language:
- multilingual
tags:
- question-answering
datasets:
- squad2
metrics:
- squad2
---
# Rembert Squad2
This model is fine-tuned for the question-answering task on SQuAD2 from the [RemBERT checkpoint](https://huggingface.co/google/rembert).
## Hyperparameters
```
Batch Size: 4
Grad Accumulation Steps = 8
Total epochs = 3
MLM Checkpoint = "rembert"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_ratio = 0.1
doc_stride = 128
```
## Squad 2 Evaluation stats:
Metrics generated from [the official Squad2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)
```json
{
"exact": 84.51107554956624,
"f1": 87.46644042781853,
"total": 11873,
"HasAns_exact": 80.97165991902834,
"HasAns_f1": 86.89086491219469,
"HasAns_total": 5928,
"NoAns_exact": 88.04037005887301,
"NoAns_f1": 88.04037005887301,
"NoAns_total": 5945
}
```
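## Usage
A minimal usage sketch with the question-answering pipeline (the question/context pair is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/rembert-squad2")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This checkpoint fine-tunes RemBERT on the SQuAD2 question answering dataset.",
)
print(result["answer"], result["score"])
```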
For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man) |
philschmid/distilroberta-base-ner-conll2003 | f66a7144917dfade9e6f9c1f4b1f10f7aa26de83 | 2022-06-24T12:40:58.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | philschmid | null | philschmid/distilroberta-base-ner-conll2003 | 342 | 1 | transformers | 2,735 | ---
license: apache-2.0
tags:
- token-classification
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilroberta-base-ner-conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
metrics:
- name: Precision
type: precision
value: 0.9492923423001218
- name: Recall
type: recall
value: 0.9565545901020023
- name: F1
type: f1
value: 0.9529096297690173
- name: Accuracy
type: accuracy
value: 0.9883096560400111
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.9883249976987512
verified: true
- name: Precision
type: precision
value: 0.9906910190038265
verified: true
- name: Recall
type: recall
value: 0.9916635820847483
verified: true
- name: F1
type: f1
value: 0.9911770619696786
verified: true
- name: loss
type: loss
value: 0.05638007074594498
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-ner-conll2003
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the conll2003 dataset.
eval F1-Score: **95.29** (CoNLL-03)
test F1-Score: **90.74** (CoNLL-03)
eval F1-Score: **95.29** (CoNLL++ / CoNLL-03 corrected)
test F1-Score: **92.23** (CoNLL++ / CoNLL-03 corrected)
## Model Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-conll2003")
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-conll2003")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "My name is Philipp and live in Germany"
nlp(example)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.9902376275441704e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6.0
- mixed_precision_training: Native AMP
### Training results
#### CoNLL2003
It achieves the following results on the evaluation set:
- Loss: 0.0583
- Precision: 0.9493
- Recall: 0.9566
- F1: 0.9529
- Accuracy: 0.9883
It achieves the following results on the test set:
- Loss: 0.2025
- Precision: 0.8999
- Recall: 0.915
- F1: 0.9074
- Accuracy: 0.9741
#### CoNLL++ / CoNLL2003 corrected
It achieves the following results on the evaluation set:
- Loss: 0.0567
- Precision: 0.9493
- Recall: 0.9566
- F1: 0.9529
- Accuracy: 0.9883
It achieves the following results on the test set:
- Loss: 0.1359
- Precision: 0.92
- Recall: 0.9245
- F1: 0.9223
- Accuracy: 0.9785
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.2
|
responsibility-framing/predict-perception-bert-cause-object | a29a2738f5496a707bd512450be76c34b76bead2 | 2022-03-10T16:04:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-cause-object | 342 | null | transformers | 2,736 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-cause-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-object
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4120
- Rmse: 1.0345
- Rmse Cause::a Causata da un oggetto (es. una pistola): 1.0345
- Mae: 0.6181
- Mae Cause::a Causata da un oggetto (es. una pistola): 0.6181
- R2: 0.3837
- R2 Cause::a Causata da un oggetto (es. una pistola): 0.3837
- Cos: 0.9130
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.8986
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un oggetto (es. una pistola) | Mae | Mae Cause::a Causata da un oggetto (es. una pistola) | R2 | R2 Cause::a Causata da un oggetto (es. una pistola) | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------:|:------:|:----------------------------------------------------:|:-------:|:---------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0824 | 1.0 | 15 | 0.6651 | 1.3143 | 1.3143 | 1.0930 | 1.0930 | 0.0052 | 0.0052 | 0.3043 | 0.0 | 0.5 | 0.4393 | nan |
| 0.9574 | 2.0 | 30 | 0.7088 | 1.3568 | 1.3568 | 1.1945 | 1.1945 | -0.0601 | -0.0601 | 0.0435 | 0.0 | 0.5 | 0.3380 | nan |
| 0.8151 | 3.0 | 45 | 0.6300 | 1.2791 | 1.2791 | 1.0206 | 1.0206 | 0.0577 | 0.0577 | 0.3043 | 0.0 | 0.5 | 0.3613 | nan |
| 0.6401 | 4.0 | 60 | 0.4871 | 1.1247 | 1.1247 | 0.7285 | 0.7285 | 0.2715 | 0.2715 | 0.5652 | 0.0 | 0.5 | 0.6424 | nan |
| 0.448 | 5.0 | 75 | 0.5005 | 1.1401 | 1.1401 | 0.7216 | 0.7216 | 0.2514 | 0.2514 | 0.4783 | 0.0 | 0.5 | 0.6077 | nan |
| 0.2893 | 6.0 | 90 | 0.4761 | 1.1119 | 1.1119 | 0.7237 | 0.7237 | 0.2879 | 0.2879 | 0.5652 | 0.0 | 0.5 | 0.6348 | nan |
| 0.174 | 7.0 | 105 | 0.4771 | 1.1131 | 1.1131 | 0.6836 | 0.6836 | 0.2865 | 0.2865 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.1383 | 8.0 | 120 | 0.4313 | 1.0583 | 1.0583 | 0.6462 | 0.6462 | 0.3550 | 0.3550 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.1105 | 9.0 | 135 | 0.4660 | 1.1001 | 1.1001 | 0.6737 | 0.6737 | 0.3030 | 0.3030 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0903 | 10.0 | 150 | 0.4866 | 1.1241 | 1.1241 | 0.7192 | 0.7192 | 0.2723 | 0.2723 | 0.7391 | 0.0 | 0.5 | 0.6833 | nan |
| 0.0571 | 11.0 | 165 | 0.4361 | 1.0642 | 1.0642 | 0.6130 | 0.6130 | 0.3478 | 0.3478 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0623 | 12.0 | 180 | 0.4578 | 1.0904 | 1.0904 | 0.6844 | 0.6844 | 0.3152 | 0.3152 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.0526 | 13.0 | 195 | 0.4605 | 1.0936 | 1.0936 | 0.6697 | 0.6697 | 0.3112 | 0.3112 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.0472 | 14.0 | 210 | 0.4440 | 1.0738 | 1.0738 | 0.6589 | 0.6589 | 0.3360 | 0.3360 | 0.7391 | 0.0 | 0.5 | 0.7327 | nan |
| 0.0492 | 15.0 | 225 | 0.4593 | 1.0922 | 1.0922 | 0.6812 | 0.6812 | 0.3130 | 0.3130 | 0.7391 | 0.0 | 0.5 | 0.6833 | nan |
| 0.0389 | 16.0 | 240 | 0.4195 | 1.0437 | 1.0437 | 0.6252 | 0.6252 | 0.3726 | 0.3726 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0396 | 17.0 | 255 | 0.4087 | 1.0302 | 1.0302 | 0.6119 | 0.6119 | 0.3888 | 0.3888 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0328 | 18.0 | 270 | 0.4274 | 1.0535 | 1.0535 | 0.6457 | 0.6457 | 0.3608 | 0.3608 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0345 | 19.0 | 285 | 0.4306 | 1.0574 | 1.0574 | 0.6576 | 0.6576 | 0.3560 | 0.3560 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0328 | 20.0 | 300 | 0.4067 | 1.0277 | 1.0277 | 0.6160 | 0.6160 | 0.3918 | 0.3918 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0344 | 21.0 | 315 | 0.4056 | 1.0263 | 1.0263 | 0.5948 | 0.5948 | 0.3934 | 0.3934 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0312 | 22.0 | 330 | 0.4236 | 1.0488 | 1.0488 | 0.6277 | 0.6277 | 0.3665 | 0.3665 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0241 | 23.0 | 345 | 0.4272 | 1.0533 | 1.0533 | 0.6444 | 0.6444 | 0.3610 | 0.3610 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0302 | 24.0 | 360 | 0.4046 | 1.0250 | 1.0250 | 0.6030 | 0.6030 | 0.3949 | 0.3949 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0244 | 25.0 | 375 | 0.4194 | 1.0436 | 1.0436 | 0.6320 | 0.6320 | 0.3728 | 0.3728 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0259 | 26.0 | 390 | 0.4025 | 1.0224 | 1.0224 | 0.6009 | 0.6009 | 0.3980 | 0.3980 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0265 | 27.0 | 405 | 0.4103 | 1.0323 | 1.0323 | 0.6180 | 0.6180 | 0.3863 | 0.3863 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0184 | 28.0 | 420 | 0.4059 | 1.0268 | 1.0268 | 0.6046 | 0.6046 | 0.3929 | 0.3929 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0257 | 29.0 | 435 | 0.4088 | 1.0304 | 1.0304 | 0.6122 | 0.6122 | 0.3885 | 0.3885 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0262 | 30.0 | 450 | 0.4120 | 1.0345 | 1.0345 | 0.6181 | 0.6181 | 0.3837 | 0.3837 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-bert-cause-concept | 5805980e8d7423d75caa5b084ef47b806afc5047 | 2022-03-10T16:08:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-cause-concept | 342 | null | transformers | 2,737 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-cause-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-concept
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4044
- Rmse: 0.6076
- Rmse Cause::a Causata da un concetto astratto (es. gelosia): 0.6076
- Mae: 0.4548
- Mae Cause::a Causata da un concetto astratto (es. gelosia): 0.4548
- R2: 0.5463
- R2 Cause::a Causata da un concetto astratto (es. gelosia): 0.5463
- Cos: 0.2174
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3931
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un concetto astratto (es. gelosia) | Mae | Mae Cause::a Causata da un concetto astratto (es. gelosia) | R2 | R2 Cause::a Causata da un concetto astratto (es. gelosia) | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------------:|:------:|:----------------------------------------------------------:|:-------:|:---------------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.08 | 1.0 | 15 | 0.9520 | 0.9323 | 0.9323 | 0.6560 | 0.6560 | -0.0680 | -0.0680 | 0.0435 | 0.0 | 0.5 | 0.3188 | nan |
| 0.9974 | 2.0 | 30 | 0.8621 | 0.8872 | 0.8872 | 0.5962 | 0.5962 | 0.0328 | 0.0328 | 0.1304 | 0.0 | 0.5 | 0.4066 | nan |
| 0.9337 | 3.0 | 45 | 0.9223 | 0.9176 | 0.9176 | 0.6608 | 0.6608 | -0.0347 | -0.0347 | 0.2174 | 0.0 | 0.5 | 0.3632 | nan |
| 0.966 | 4.0 | 60 | 0.8273 | 0.8691 | 0.8691 | 0.5874 | 0.5874 | 0.0719 | 0.0719 | 0.2174 | 0.0 | 0.5 | 0.3754 | nan |
| 0.8683 | 5.0 | 75 | 0.8741 | 0.8933 | 0.8933 | 0.6136 | 0.6136 | 0.0193 | 0.0193 | 0.2174 | 0.0 | 0.5 | 0.3529 | nan |
| 0.8522 | 6.0 | 90 | 0.7781 | 0.8428 | 0.8428 | 0.5732 | 0.5732 | 0.1271 | 0.1271 | 0.2174 | 0.0 | 0.5 | 0.4152 | nan |
| 0.7968 | 7.0 | 105 | 0.7257 | 0.8139 | 0.8139 | 0.5519 | 0.5519 | 0.1859 | 0.1859 | 0.2174 | 0.0 | 0.5 | 0.4152 | nan |
| 0.7166 | 8.0 | 120 | 0.7122 | 0.8064 | 0.8064 | 0.5792 | 0.5792 | 0.2010 | 0.2010 | 0.1304 | 0.0 | 0.5 | 0.3955 | nan |
| 0.6246 | 9.0 | 135 | 0.6771 | 0.7862 | 0.7862 | 0.5701 | 0.5701 | 0.2403 | 0.2403 | 0.0435 | 0.0 | 0.5 | 0.3955 | nan |
| 0.5205 | 10.0 | 150 | 0.6704 | 0.7823 | 0.7823 | 0.5735 | 0.5735 | 0.2479 | 0.2479 | 0.3913 | 0.0 | 0.5 | 0.4847 | nan |
| 0.4182 | 11.0 | 165 | 0.6852 | 0.7909 | 0.7909 | 0.5987 | 0.5987 | 0.2313 | 0.2313 | 0.3913 | 0.0 | 0.5 | 0.4847 | nan |
| 0.3984 | 12.0 | 180 | 0.6106 | 0.7466 | 0.7466 | 0.5696 | 0.5696 | 0.3150 | 0.3150 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.3138 | 13.0 | 195 | 0.5867 | 0.7318 | 0.7318 | 0.5209 | 0.5209 | 0.3418 | 0.3418 | 0.2174 | 0.0 | 0.5 | 0.3119 | nan |
| 0.2323 | 14.0 | 210 | 0.5120 | 0.6837 | 0.6837 | 0.5007 | 0.5007 | 0.4256 | 0.4256 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.2149 | 15.0 | 225 | 0.4789 | 0.6612 | 0.6612 | 0.4883 | 0.4883 | 0.4627 | 0.4627 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.1753 | 16.0 | 240 | 0.4526 | 0.6428 | 0.6428 | 0.4775 | 0.4775 | 0.4922 | 0.4922 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.1478 | 17.0 | 255 | 0.4383 | 0.6325 | 0.6325 | 0.4616 | 0.4616 | 0.5083 | 0.5083 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.1289 | 18.0 | 270 | 0.4141 | 0.6148 | 0.6148 | 0.4478 | 0.4478 | 0.5355 | 0.5355 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.1035 | 19.0 | 285 | 0.3952 | 0.6007 | 0.6007 | 0.4407 | 0.4407 | 0.5566 | 0.5566 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.1087 | 20.0 | 300 | 0.4217 | 0.6205 | 0.6205 | 0.4505 | 0.4505 | 0.5269 | 0.5269 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.1005 | 21.0 | 315 | 0.4065 | 0.6091 | 0.6091 | 0.4508 | 0.4508 | 0.5440 | 0.5440 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0868 | 22.0 | 330 | 0.3937 | 0.5995 | 0.5995 | 0.4470 | 0.4470 | 0.5584 | 0.5584 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.0808 | 23.0 | 345 | 0.4132 | 0.6142 | 0.6142 | 0.4617 | 0.4617 | 0.5364 | 0.5364 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0737 | 24.0 | 360 | 0.4214 | 0.6203 | 0.6203 | 0.4659 | 0.4659 | 0.5272 | 0.5272 | 0.3043 | 0.0 | 0.5 | 0.4066 | nan |
| 0.0711 | 25.0 | 375 | 0.3863 | 0.5939 | 0.5939 | 0.4470 | 0.4470 | 0.5666 | 0.5666 | 0.3043 | 0.0 | 0.5 | 0.3849 | nan |
| 0.066 | 26.0 | 390 | 0.4353 | 0.6304 | 0.6304 | 0.4760 | 0.4760 | 0.5117 | 0.5117 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0681 | 27.0 | 405 | 0.4078 | 0.6101 | 0.6101 | 0.4612 | 0.4612 | 0.5426 | 0.5426 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0543 | 28.0 | 420 | 0.4118 | 0.6132 | 0.6132 | 0.4616 | 0.4616 | 0.5380 | 0.5380 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.069 | 29.0 | 435 | 0.4041 | 0.6074 | 0.6074 | 0.4551 | 0.4551 | 0.5466 | 0.5466 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
| 0.0604 | 30.0 | 450 | 0.4044 | 0.6076 | 0.6076 | 0.4548 | 0.4548 | 0.5463 | 0.5463 | 0.2174 | 0.0 | 0.5 | 0.3931 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-blame-none | 0b7dbf7921070f0c3046fb444b46bdcbd7d1ee6c | 2022-03-15T22:52:50.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-blame-none | 342 | null | transformers | 2,738 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-none
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8941
- Rmse: 1.1259
- Rmse Blame::a Nessuno: 1.1259
- Mae: 0.8559
- Mae Blame::a Nessuno: 0.8559
- R2: 0.2847
- R2 Blame::a Nessuno: 0.2847
- Cos: 0.3043
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3537
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Nessuno | Mae | Mae Blame::a Nessuno | R2 | R2 Blame::a Nessuno | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------:|:------:|:--------------------:|:-------:|:-------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.042 | 1.0 | 15 | 1.2746 | 1.3443 | 1.3443 | 1.1788 | 1.1788 | -0.0197 | -0.0197 | 0.0435 | 0.0 | 0.5 | 0.2970 | nan |
| 0.9994 | 2.0 | 30 | 1.3264 | 1.3714 | 1.3714 | 1.1967 | 1.1967 | -0.0612 | -0.0612 | -0.0435 | 0.0 | 0.5 | 0.2961 | nan |
| 0.9123 | 3.0 | 45 | 1.2511 | 1.3319 | 1.3319 | 1.0932 | 1.0932 | -0.0009 | -0.0009 | 0.1304 | 0.0 | 0.5 | 0.2681 | nan |
| 0.741 | 4.0 | 60 | 1.0204 | 1.2028 | 1.2028 | 0.9818 | 0.9818 | 0.1836 | 0.1836 | 0.3043 | 0.0 | 0.5 | 0.3686 | nan |
| 0.6337 | 5.0 | 75 | 0.8607 | 1.1047 | 1.1047 | 0.8145 | 0.8145 | 0.3115 | 0.3115 | 0.3913 | 0.0 | 0.5 | 0.4044 | nan |
| 0.4974 | 6.0 | 90 | 0.8574 | 1.1026 | 1.1026 | 0.8095 | 0.8095 | 0.3140 | 0.3140 | 0.3913 | 0.0 | 0.5 | 0.4044 | nan |
| 0.4929 | 7.0 | 105 | 0.8548 | 1.1009 | 1.1009 | 0.8560 | 0.8560 | 0.3161 | 0.3161 | 0.3043 | 0.0 | 0.5 | 0.3686 | nan |
| 0.4378 | 8.0 | 120 | 0.6974 | 0.9944 | 0.9944 | 0.7503 | 0.7503 | 0.4421 | 0.4421 | 0.3043 | 0.0 | 0.5 | 0.3686 | nan |
| 0.3999 | 9.0 | 135 | 0.7955 | 1.0620 | 1.0620 | 0.7907 | 0.7907 | 0.3636 | 0.3636 | 0.3913 | 0.0 | 0.5 | 0.4044 | nan |
| 0.3715 | 10.0 | 150 | 0.8954 | 1.1267 | 1.1267 | 0.8036 | 0.8036 | 0.2837 | 0.2837 | 0.4783 | 0.0 | 0.5 | 0.4058 | nan |
| 0.3551 | 11.0 | 165 | 0.8449 | 1.0945 | 1.0945 | 0.8748 | 0.8748 | 0.3241 | 0.3241 | 0.3913 | 0.0 | 0.5 | 0.3931 | nan |
| 0.3428 | 12.0 | 180 | 0.7960 | 1.0624 | 1.0624 | 0.8000 | 0.8000 | 0.3632 | 0.3632 | 0.3913 | 0.0 | 0.5 | 0.4044 | nan |
| 0.2923 | 13.0 | 195 | 0.9027 | 1.1313 | 1.1313 | 0.8441 | 0.8441 | 0.2778 | 0.2778 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.2236 | 14.0 | 210 | 0.8914 | 1.1242 | 1.1242 | 0.8998 | 0.8998 | 0.2869 | 0.2869 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.2553 | 15.0 | 225 | 0.9184 | 1.1411 | 1.1411 | 0.8633 | 0.8633 | 0.2652 | 0.2652 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.2064 | 16.0 | 240 | 0.9284 | 1.1473 | 1.1473 | 0.8919 | 0.8919 | 0.2573 | 0.2573 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1972 | 17.0 | 255 | 0.9495 | 1.1602 | 1.1602 | 0.8768 | 0.8768 | 0.2404 | 0.2404 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1622 | 18.0 | 270 | 0.9850 | 1.1818 | 1.1818 | 0.9303 | 0.9303 | 0.2120 | 0.2120 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.1685 | 19.0 | 285 | 0.9603 | 1.1669 | 1.1669 | 0.8679 | 0.8679 | 0.2317 | 0.2317 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1773 | 20.0 | 300 | 0.9269 | 1.1464 | 1.1464 | 0.8391 | 0.8391 | 0.2585 | 0.2585 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1716 | 21.0 | 315 | 0.8936 | 1.1256 | 1.1256 | 0.8357 | 0.8357 | 0.2851 | 0.2851 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.161 | 22.0 | 330 | 0.8894 | 1.1230 | 1.1230 | 0.8593 | 0.8593 | 0.2884 | 0.2884 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1297 | 23.0 | 345 | 0.8997 | 1.1294 | 1.1294 | 0.8568 | 0.8568 | 0.2802 | 0.2802 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.15 | 24.0 | 360 | 0.8748 | 1.1137 | 1.1137 | 0.8541 | 0.8541 | 0.3002 | 0.3002 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.1149 | 25.0 | 375 | 0.9264 | 1.1461 | 1.1461 | 0.8682 | 0.8682 | 0.2588 | 0.2588 | 0.3913 | 0.0 | 0.5 | 0.3901 | nan |
| 0.1354 | 26.0 | 390 | 0.8829 | 1.1188 | 1.1188 | 0.8608 | 0.8608 | 0.2937 | 0.2937 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.1321 | 27.0 | 405 | 0.9137 | 1.1382 | 1.1382 | 0.8656 | 0.8656 | 0.2691 | 0.2691 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1154 | 28.0 | 420 | 0.8774 | 1.1154 | 1.1154 | 0.8488 | 0.8488 | 0.2980 | 0.2980 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.1112 | 29.0 | 435 | 0.8985 | 1.1287 | 1.1287 | 0.8562 | 0.8562 | 0.2812 | 0.2812 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1525 | 30.0 | 450 | 0.8941 | 1.1259 | 1.1259 | 0.8559 | 0.8559 | 0.2847 | 0.2847 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
NovelAI/genji-jp | 57d1fd45064798dd38faa9c6cf119f1a040f9526 | 2021-11-08T01:01:27.000Z | [
"pytorch",
"gptj",
"text-generation",
"jp",
"en",
"arxiv:2104.09864",
"transformers",
"causal-lm",
"license:apache-2.0"
] | text-generation | false | NovelAI | null | NovelAI/genji-jp | 341 | 3 | transformers | 2,739 | ---
language:
- jp
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
---
# Genji-JP 6B
Please check our blog post for more details, samples, evaluations and more:
[Blogpost](https://blog.novelai.net/data-efficient-language-transfer-with-gpt-j-45daedaaf35a)
## Model Description
Genji-JP 6B is a model based on EleutherAI's GPT-J 6B, fine-tuned on our Japanese storytelling dataset. This particular model is trained on Japanese web novels.
| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
`*` each layer consists of one feedforward block and one self attention block
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was pretrained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After pre-training, it was fine-tuned on our Japanese storytelling dataset. Check our blog post for more details.
### How to use
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-jp", torch_dtype=torch.float16, low_cpu_mem_usage=True).eval().cuda()
text = '''あらすじ:あなたは異世界に転生してしまいました。勇者となって、仲間を作り、異世界を冒険しよう!
***
転生すると、ある能力を手に入れていた。それは、'''
tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, temperature=1, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id)
last_tokens = generated_tokens[0]
generated_text = tokenizer.decode(last_tokens).replace("�", "")
print("Generation:\n" + generated_text)
```
When run, produces output like this:
```
Generation:
あらすじ:あなたは異世界に転生してしまいました。勇者となって、仲間を作り、異世界を冒険しよう!
***
転生すると、ある能力を手に入れていた。それは、『予知』だ。過去から未来のことを、誰も知らない出来事も含めて見通すことが出来る。
悪魔の欠片と呼ばれる小さな結晶を取り込んで、使役することが出来る。人を惹きつけ、堕落させる。何より、俺は男なんて居なかったし、女に興味もない。……そんなクズの片棒を担ぎ上げる奴が多くなると思うと、ちょっと苦しい。
だが、一部の人間には協力者を得ることが出来る。目立たない街にある寺の中で、常に家に引きこもっている老人。そんなヤツの魂をコントロールすることが出来るのだ。便利な能力だ。しかし、裏切り者は大勢いる。気を抜けば、狂う。だから注意が必要だ。
――「やってやるよ」
アーロンは不敵に笑った。この
```
## Acknowledgements
This project was possible because of the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/)
Thanks [EleutherAI](https://eleuther.ai/) for pretraining the GPT-J 6B model.
Thanks to everyone who contributed to this project!
- [Finetune](https://github.com/finetuneanon)
- [Aero](https://github.com/AeroScripts)
- [Kurumuz](https://github.com/kurumuz) |
superb/hubert-base-superb-ks | d7e0efe9c25fe2e695402102e2fd7c77b00206f5 | 2021-11-04T16:03:26.000Z | [
"pytorch",
"hubert",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/hubert-base-superb-ks | 341 | 1 | transformers | 2,740 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- hubert
- audio-classification
license: apache-2.0
widget:
- example_title: Speech Commands "down"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_down.wav
- example_title: Speech Commands "go"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_go.wav
---
# Hubert-Base for Keyword Spotting
## Model description
This is a ported version of [S3PRL's Hubert for the SUPERB Keyword Spotting task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/speech_commands).
The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of
words. The task is usually performed on-device for fast response time. Thus, accuracy, model size, and
inference time are all crucial. SUPERB uses the widely used
[Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task.
The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include the
false positive.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ks-keyword-spotting).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
classifier = pipeline("audio-classification", model="superb/hubert-base-superb-ks")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
from torchaudio.sox_effects import apply_effects_file
effects = [["channels", "1"], ["rate", "16000"], ["gain", "-3.0"]]
def map_to_array(example):
speech, _ = apply_effects_file(example["file"], effects)
example["speech"] = speech.squeeze(0).numpy()
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-ks")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-ks")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9630` | `0.9672` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
responsibility-framing/predict-perception-bert-blame-none | 9010ca86bff987e01ba6e9545b1cb496556b9339 | 2022-03-10T15:59:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-blame-none | 341 | null | transformers | 2,741 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-blame-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-blame-none
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8646
- Rmse: 1.1072
- Rmse Blame::a Nessuno: 1.1072
- Mae: 0.8721
- Mae Blame::a Nessuno: 0.8721
- R2: 0.3083
- R2 Blame::a Nessuno: 0.3083
- Cos: 0.5652
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5070
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
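
For reference, a minimal sketch of how these settings could map onto a `transformers` `TrainingArguments` object is shown below. This is illustrative only — it is not the original training script, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above (not the original script).
training_args = TrainingArguments(
    output_dir="predict-perception-bert-blame-none",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```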
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Nessuno | Mae | Mae Blame::a Nessuno | R2 | R2 Blame::a Nessuno | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------:|:------:|:--------------------:|:-------:|:-------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.007 | 1.0 | 15 | 1.2585 | 1.3358 | 1.3358 | 1.1752 | 1.1752 | -0.0068 | -0.0068 | -0.0435 | 0.0 | 0.5 | 0.2970 | nan |
| 0.927 | 2.0 | 30 | 1.1310 | 1.2663 | 1.2663 | 1.0633 | 1.0633 | 0.0952 | 0.0952 | 0.4783 | 0.0 | 0.5 | 0.4012 | nan |
| 0.8376 | 3.0 | 45 | 1.0603 | 1.2261 | 1.2261 | 1.0574 | 1.0574 | 0.1518 | 0.1518 | 0.1304 | 0.0 | 0.5 | 0.2970 | nan |
| 0.7154 | 4.0 | 60 | 0.8347 | 1.0879 | 1.0879 | 0.8854 | 0.8854 | 0.3323 | 0.3323 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.5766 | 5.0 | 75 | 0.7426 | 1.0261 | 1.0261 | 0.8340 | 0.8340 | 0.4059 | 0.4059 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.4632 | 6.0 | 90 | 0.6671 | 0.9725 | 0.9725 | 0.7932 | 0.7932 | 0.4663 | 0.4663 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.3854 | 7.0 | 105 | 0.6447 | 0.9561 | 0.9561 | 0.7424 | 0.7424 | 0.4842 | 0.4842 | 0.6522 | 0.0 | 0.5 | 0.4307 | nan |
| 0.3154 | 8.0 | 120 | 0.7198 | 1.0102 | 1.0102 | 0.8113 | 0.8113 | 0.4241 | 0.4241 | 0.6522 | 0.0 | 0.5 | 0.4307 | nan |
| 0.2637 | 9.0 | 135 | 0.7221 | 1.0118 | 1.0118 | 0.8319 | 0.8319 | 0.4223 | 0.4223 | 0.5652 | 0.0 | 0.5 | 0.4150 | nan |
| 0.1962 | 10.0 | 150 | 0.6999 | 0.9962 | 0.9962 | 0.7945 | 0.7945 | 0.4401 | 0.4401 | 0.4783 | 0.0 | 0.5 | 0.4056 | nan |
| 0.1784 | 11.0 | 165 | 0.7335 | 1.0198 | 1.0198 | 0.7969 | 0.7969 | 0.4132 | 0.4132 | 0.5652 | 0.0 | 0.5 | 0.4150 | nan |
| 0.1531 | 12.0 | 180 | 0.8277 | 1.0833 | 1.0833 | 0.8839 | 0.8839 | 0.3378 | 0.3378 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.1425 | 13.0 | 195 | 0.8644 | 1.1070 | 1.1070 | 0.8726 | 0.8726 | 0.3085 | 0.3085 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0921 | 14.0 | 210 | 0.8874 | 1.1217 | 1.1217 | 0.9024 | 0.9024 | 0.2900 | 0.2900 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0913 | 15.0 | 225 | 0.8663 | 1.1083 | 1.1083 | 0.8914 | 0.8914 | 0.3070 | 0.3070 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.08 | 16.0 | 240 | 0.8678 | 1.1093 | 1.1093 | 0.8762 | 0.8762 | 0.3057 | 0.3057 | 0.6522 | 0.0 | 0.5 | 0.5931 | nan |
| 0.0725 | 17.0 | 255 | 0.8497 | 1.0976 | 1.0976 | 0.8868 | 0.8868 | 0.3202 | 0.3202 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0696 | 18.0 | 270 | 0.8533 | 1.1000 | 1.1000 | 0.8796 | 0.8796 | 0.3173 | 0.3173 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0632 | 19.0 | 285 | 0.8563 | 1.1018 | 1.1018 | 0.8768 | 0.8768 | 0.3150 | 0.3150 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0511 | 20.0 | 300 | 0.8433 | 1.0935 | 1.0935 | 0.8684 | 0.8684 | 0.3254 | 0.3254 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0517 | 21.0 | 315 | 0.8449 | 1.0945 | 1.0945 | 0.8758 | 0.8758 | 0.3240 | 0.3240 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0556 | 22.0 | 330 | 0.8305 | 1.0851 | 1.0851 | 0.8469 | 0.8469 | 0.3356 | 0.3356 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0457 | 23.0 | 345 | 0.8369 | 1.0893 | 1.0893 | 0.8555 | 0.8555 | 0.3305 | 0.3305 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0496 | 24.0 | 360 | 0.8441 | 1.0940 | 1.0940 | 0.8648 | 0.8648 | 0.3247 | 0.3247 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0467 | 25.0 | 375 | 0.8470 | 1.0959 | 1.0959 | 0.8633 | 0.8633 | 0.3224 | 0.3224 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0446 | 26.0 | 390 | 0.8562 | 1.1018 | 1.1018 | 0.8708 | 0.8708 | 0.3151 | 0.3151 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0476 | 27.0 | 405 | 0.8600 | 1.1042 | 1.1042 | 0.8714 | 0.8714 | 0.3120 | 0.3120 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.042 | 28.0 | 420 | 0.8657 | 1.1079 | 1.1079 | 0.8763 | 0.8763 | 0.3074 | 0.3074 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0431 | 29.0 | 435 | 0.8654 | 1.1077 | 1.1077 | 0.8734 | 0.8734 | 0.3077 | 0.3077 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0423 | 30.0 | 450 | 0.8646 | 1.1072 | 1.1072 | 0.8721 | 0.8721 | 0.3083 | 0.3083 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
StevenLimcorn/indonesian-roberta-base-emotion-classifier | e8a9cb967bd7e5f41396c4dac6d1fc2dfa636cbf | 2021-08-25T14:33:16.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"id",
"dataset:indonlu",
"transformers",
"license:mit"
] | text-classification | false | StevenLimcorn | null | StevenLimcorn/indonesian-roberta-base-emotion-classifier | 340 | 2 | transformers | 2,742 | ---
language: id
tags:
- roberta
license: mit
datasets:
- indonlu
widget:
- text: "Hal-hal baik akan datang."
---
# Indo RoBERTa Emotion Classifier
# Indo RoBERTa Emotion Classifier
Indo RoBERTa Emotion Classifier is an emotion classifier based on the [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model. It was trained on the [IndoNLU EmoT](https://huggingface.co/datasets/indonlu) dataset: the pretrained [Indo-roberta](https://huggingface.co/flax-community/indonesian-roberta-base) model was transfer-learned into an emotion classifier. Based on the [IndoNLU benchmark](https://www.indobenchmark.com/), the model achieves an F1-macro of 72.05%, accuracy of 71.81%, precision of 72.47%, and recall of 71.94%.
## Model
The model was trained for 7 epochs with a learning rate of 2e-5, achieving the metrics shown below.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
|-------|---------------|-----------------|----------|----------|-----------|----------|
| 1 | 1.300700 | 1.005149 | 0.622727 | 0.601846 | 0.640845 | 0.611144 |
| 2 | 0.806300 | 0.841953 | 0.686364 | 0.694096 | 0.701984 | 0.696657 |
| 3 | 0.591900 | 0.796794 | 0.686364 | 0.696573 | 0.707520 | 0.691671 |
| 4 | 0.441200 | 0.782094 | 0.722727 | 0.724359 | 0.725985 | 0.730229 |
| 5 | 0.334700 | 0.809931 | 0.711364 | 0.720550 | 0.718318 | 0.724608 |
| 6 | 0.268400 | 0.812771 | 0.718182 | 0.724192 | 0.721222 | 0.729195 |
| 7 | 0.226000 | 0.828461 | 0.725000 | 0.733625 | 0.731709 | 0.735800 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "StevenLimcorn/indonesian-roberta-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Hal-hal baik akan datang.")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `EmoT` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base Emotion Classifier was trained and evaluated by [Steven Limcorn](https://github.com/stevenlimcorn). All computation and development are done on Google Colaboratory using their free GPU access. |
m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition | a71bf01ccb1cfc182c37550938d78c958f18a5eb | 2021-07-27T06:12:46.000Z | [
"pytorch",
"wav2vec2",
"fa",
"dataset:ShEMO",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"speech-emotion-recognition",
"license:apache-2.0"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition | 340 | 3 | transformers | 2,743 | ---
language: fa
datasets:
- ShEMO
tags:
- audio
- automatic-speech-recognition
- speech
- speech-emotion-recognition
license: apache-2.0
---
# Emotion Recognition in Persian (Farsi - fa) Speech using Wav2Vec 2.0
## How to use
### Requirements
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```
### Prediction
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor
import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/wav2vec2-xlsr-persian-speech-emotion-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
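# NOTE: Wav2Vec2ForSpeechClassification is not part of `transformers` itself; it is the
# custom classification head defined in the author's soxan repository
# (https://github.com/m3hrdadfi/soxan) and must be importable for the next line to run.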
model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```
```python
def speech_file_to_array_fn(path, sampling_rate):
speech_array, _sampling_rate = torchaudio.load(path)
resampler = torchaudio.transforms.Resample(_sampling_rate)
speech = resampler(speech_array).squeeze().numpy()
return speech
def predict(path, sampling_rate):
speech = speech_file_to_array_fn(path, sampling_rate)
inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
inputs = {key: inputs[key].to(device) for key in inputs}
with torch.no_grad():
logits = model(**inputs).logits
scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
return outputs
```
```python
path = "/path/to/sadness.wav"
outputs = predict(path, sampling_rate)
```
```bash
[
{'Label': 'Anger', 'Score': '0.0%'},
{'Label': 'Fear', 'Score': '0.0%'},
{'Label': 'Happiness', 'Score': '0.0%'},
{'Label': 'Neutral', 'Score': '0.0%'},
{'Label': 'Sadness', 'Score': '99.9%'},
{'Label': 'Surprise', 'Score': '0.0%'}
]
```
## Evaluation
The following table summarizes the scores obtained by the model overall and per class.
| Emotions | precision | recall | f1-score | accuracy |
|:---------:|:---------:|:------:|:--------:|:--------:|
| Anger | 0.95 | 0.95 | 0.95 | |
| Fear | 0.33 | 0.17 | 0.22 | |
| Happiness | 0.69 | 0.69 | 0.69 | |
| Neutral | 0.91 | 0.94 | 0.93 | |
| Sadness | 0.92 | 0.85 | 0.88 | |
| Surprise | 0.81 | 0.88 | 0.84 | |
| | | | Overall | 0.90 |
## Questions?
Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues). |
nvidia/segformer-b3-finetuned-ade-512-512 | 2eaea9d7ab761a33872c47a5fe614cb65d3df1f3 | 2022-07-20T09:53:44.000Z | [
"pytorch",
"tf",
"segformer",
"transformers"
] | null | false | nvidia | null | nvidia/segformer-b3-finetuned-ade-512-512 | 340 | null | transformers | 2,744 | Entry not found |
zentos/DialoGPT-small-spongebob | a47a09e82d250a24a83034ef0b8f379468b08903 | 2021-09-08T22:24:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | zentos | null | zentos/DialoGPT-small-spongebob | 340 | null | transformers | 2,745 | ---
tags:
- conversational
---
# Sponge Bob DialoGPT Model |
responsibility-framing/predict-perception-bert-cause-none | a8de22856a1c4f8ef719e64156db49448920a1e8 | 2022-03-10T16:10:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-cause-none | 340 | null | transformers | 2,746 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-cause-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-none
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6269
- Rmse: 1.2763
- Rmse Cause::a Spontanea, priva di un agente scatenante: 1.2763
- Mae: 1.0431
- Mae Cause::a Spontanea, priva di un agente scatenante: 1.0431
- R2: -1.4329
- R2 Cause::a Spontanea, priva di un agente scatenante: -1.4329
- Cos: -0.3913
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3371
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Spontanea, priva di un agente scatenante | Mae | Mae Cause::a Spontanea, priva di un agente scatenante | R2 | R2 Cause::a Spontanea, priva di un agente scatenante | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------------:|:------:|:-----------------------------------------------------:|:-------:|:----------------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 0.994 | 1.0 | 15 | 0.7156 | 0.8465 | 0.8465 | 0.7809 | 0.7809 | -0.0701 | -0.0701 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 0.9757 | 2.0 | 30 | 0.7096 | 0.8429 | 0.8429 | 0.7666 | 0.7666 | -0.0611 | -0.0611 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 1.0086 | 3.0 | 45 | 0.7779 | 0.8825 | 0.8825 | 0.7981 | 0.7981 | -0.1632 | -0.1632 | -0.0435 | 0.0 | 0.5 | 0.2899 | nan |
| 0.9127 | 4.0 | 60 | 0.8158 | 0.9038 | 0.9038 | 0.8171 | 0.8171 | -0.2199 | -0.2199 | -0.2174 | 0.0 | 0.5 | 0.2975 | nan |
| 0.8555 | 5.0 | 75 | 0.7691 | 0.8775 | 0.8775 | 0.8121 | 0.8121 | -0.1501 | -0.1501 | -0.2174 | 0.0 | 0.5 | 0.3299 | nan |
| 0.8702 | 6.0 | 90 | 0.7818 | 0.8848 | 0.8848 | 0.7781 | 0.7781 | -0.1691 | -0.1691 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 0.76 | 7.0 | 105 | 0.8377 | 0.9158 | 0.9158 | 0.7985 | 0.7985 | -0.2526 | -0.2526 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 0.6997 | 8.0 | 120 | 0.9065 | 0.9527 | 0.9527 | 0.8370 | 0.8370 | -0.3555 | -0.3555 | -0.2174 | 0.0 | 0.5 | 0.3147 | nan |
| 0.5963 | 9.0 | 135 | 1.0611 | 1.0308 | 1.0308 | 0.8396 | 0.8396 | -0.5867 | -0.5867 | -0.0435 | 0.0 | 0.5 | 0.2645 | nan |
| 0.5413 | 10.0 | 150 | 1.1724 | 1.0835 | 1.0835 | 0.8649 | 0.8649 | -0.7532 | -0.7532 | -0.0435 | 0.0 | 0.5 | 0.2645 | nan |
| 0.4994 | 11.0 | 165 | 1.1471 | 1.0717 | 1.0717 | 0.8857 | 0.8857 | -0.7154 | -0.7154 | -0.2174 | 0.0 | 0.5 | 0.3271 | nan |
| 0.4208 | 12.0 | 180 | 1.2136 | 1.1024 | 1.1024 | 0.9392 | 0.9392 | -0.8148 | -0.8148 | -0.2174 | 0.0 | 0.5 | 0.3169 | nan |
| 0.316 | 13.0 | 195 | 1.3499 | 1.1626 | 1.1626 | 0.9395 | 0.9395 | -1.0187 | -1.0187 | -0.2174 | 0.0 | 0.5 | 0.3271 | nan |
| 0.2893 | 14.0 | 210 | 1.4229 | 1.1937 | 1.1937 | 0.9608 | 0.9608 | -1.1278 | -1.1278 | -0.3043 | 0.0 | 0.5 | 0.3269 | nan |
| 0.235 | 15.0 | 225 | 1.4699 | 1.2132 | 1.2132 | 0.9785 | 0.9785 | -1.1981 | -1.1981 | -0.0435 | 0.0 | 0.5 | 0.2865 | nan |
| 0.2397 | 16.0 | 240 | 1.5492 | 1.2455 | 1.2455 | 1.0005 | 1.0005 | -1.3167 | -1.3167 | -0.0435 | 0.0 | 0.5 | 0.2655 | nan |
| 0.1973 | 17.0 | 255 | 1.5541 | 1.2474 | 1.2474 | 1.0165 | 1.0165 | -1.3239 | -1.3239 | -0.0435 | 0.0 | 0.5 | 0.2655 | nan |
| 0.1793 | 18.0 | 270 | 1.4966 | 1.2242 | 1.2242 | 1.0058 | 1.0058 | -1.2380 | -1.2380 | -0.3043 | 0.0 | 0.5 | 0.3437 | nan |
| 0.16 | 19.0 | 285 | 1.4977 | 1.2246 | 1.2246 | 1.0140 | 1.0140 | -1.2396 | -1.2396 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.1501 | 20.0 | 300 | 1.5751 | 1.2558 | 1.2558 | 1.0254 | 1.0254 | -1.3553 | -1.3553 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.1342 | 21.0 | 315 | 1.7011 | 1.3051 | 1.3051 | 1.0681 | 1.0681 | -1.5438 | -1.5438 | -0.2174 | 0.0 | 0.5 | 0.2715 | nan |
| 0.137 | 22.0 | 330 | 1.5557 | 1.2481 | 1.2481 | 1.0393 | 1.0393 | -1.3263 | -1.3263 | -0.3043 | 0.0 | 0.5 | 0.3437 | nan |
| 0.11 | 23.0 | 345 | 1.5475 | 1.2448 | 1.2448 | 1.0320 | 1.0320 | -1.3141 | -1.3141 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.1106 | 24.0 | 360 | 1.6006 | 1.2660 | 1.2660 | 1.0452 | 1.0452 | -1.3936 | -1.3936 | -0.3913 | 0.0 | 0.5 | 0.3297 | nan |
| 0.1013 | 25.0 | 375 | 1.5907 | 1.2621 | 1.2621 | 1.0368 | 1.0368 | -1.3787 | -1.3787 | -0.3043 | 0.0 | 0.5 | 0.2929 | nan |
| 0.0863 | 26.0 | 390 | 1.6436 | 1.2829 | 1.2829 | 1.0496 | 1.0496 | -1.4578 | -1.4578 | -0.3043 | 0.0 | 0.5 | 0.2929 | nan |
| 0.0929 | 27.0 | 405 | 1.6000 | 1.2658 | 1.2658 | 1.0341 | 1.0341 | -1.3927 | -1.3927 | -0.3043 | 0.0 | 0.5 | 0.3245 | nan |
| 0.0829 | 28.0 | 420 | 1.6277 | 1.2767 | 1.2767 | 1.0422 | 1.0422 | -1.4341 | -1.4341 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.0884 | 29.0 | 435 | 1.6324 | 1.2785 | 1.2785 | 1.0436 | 1.0436 | -1.4411 | -1.4411 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
| 0.0896 | 30.0 | 450 | 1.6269 | 1.2763 | 1.2763 | 1.0431 | 1.0431 | -1.4329 | -1.4329 | -0.3913 | 0.0 | 0.5 | 0.3371 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ibm/qcpg-sentences | d4deefe3a028ded254d8946d444ab1d1c684689f | 2022-05-18T10:58:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ibm | null | ibm/qcpg-sentences | 340 | null | transformers | 2,747 | Entry not found |
Rostlab/prot_bert_bfd_localization | b31c50abeea9ac246cb7376412d68c2de29c72e1 | 2021-05-18T22:05:26.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Rostlab | null | Rostlab/prot_bert_bfd_localization | 339 | null | transformers | 2,748 | Entry not found |
responsibility-framing/predict-perception-bert-blame-assassin | 3ea9f6411849b8839ba252941db7c09e016e8d3c | 2022-03-10T15:44:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-blame-assassin | 339 | null | transformers | 2,749 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-blame-assassin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-blame-assassin
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5128
- Rmse: 1.0287
- Rmse Blame::a L'assassino: 1.0287
- Mae: 0.8883
- Mae Blame::a L'assassino: 0.8883
- R2: 0.5883
- R2 Blame::a L'assassino: 0.5883
- Cos: 0.6522
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5795
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a L'assassino | Mae | Mae Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------------------------:|:------:|:------------------------:|:------:|:-----------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0184 | 1.0 | 15 | 1.2219 | 1.5879 | 1.5879 | 1.4308 | 1.4308 | 0.0191 | 0.0191 | 0.3913 | 0.0 | 0.5 | 0.3781 | nan |
| 0.9214 | 2.0 | 30 | 1.0927 | 1.5017 | 1.5017 | 1.3634 | 1.3634 | 0.1227 | 0.1227 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan |
| 0.7809 | 3.0 | 45 | 0.8206 | 1.3013 | 1.3013 | 1.1808 | 1.1808 | 0.3412 | 0.3412 | 0.4783 | 0.0 | 0.5 | 0.3819 | nan |
| 0.6593 | 4.0 | 60 | 0.5894 | 1.1029 | 1.1029 | 1.0145 | 1.0145 | 0.5268 | 0.5268 | 0.7391 | 0.0 | 0.5 | 0.6408 | nan |
| 0.4672 | 5.0 | 75 | 0.4759 | 0.9910 | 0.9910 | 0.8868 | 0.8868 | 0.6180 | 0.6180 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.3356 | 6.0 | 90 | 0.4220 | 0.9332 | 0.9332 | 0.8083 | 0.8083 | 0.6612 | 0.6612 | 0.6522 | 0.0 | 0.5 | 0.4249 | nan |
| 0.2782 | 7.0 | 105 | 0.4477 | 0.9612 | 0.9612 | 0.8046 | 0.8046 | 0.6406 | 0.6406 | 0.6522 | 0.0 | 0.5 | 0.6101 | nan |
| 0.2075 | 8.0 | 120 | 0.4389 | 0.9518 | 0.9518 | 0.8050 | 0.8050 | 0.6476 | 0.6476 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.1725 | 9.0 | 135 | 0.4832 | 0.9985 | 0.9985 | 0.8356 | 0.8356 | 0.6121 | 0.6121 | 0.7391 | 0.0 | 0.5 | 0.6616 | nan |
| 0.1642 | 10.0 | 150 | 0.4368 | 0.9494 | 0.9494 | 0.8060 | 0.8060 | 0.6493 | 0.6493 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.1172 | 11.0 | 165 | 0.4538 | 0.9677 | 0.9677 | 0.8174 | 0.8174 | 0.6357 | 0.6357 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.104 | 12.0 | 180 | 0.4672 | 0.9819 | 0.9819 | 0.8384 | 0.8384 | 0.6249 | 0.6249 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0822 | 13.0 | 195 | 0.4401 | 0.9530 | 0.9530 | 0.8107 | 0.8107 | 0.6467 | 0.6467 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0755 | 14.0 | 210 | 0.4464 | 0.9598 | 0.9598 | 0.8251 | 0.8251 | 0.6416 | 0.6416 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0801 | 15.0 | 225 | 0.4834 | 0.9988 | 0.9988 | 0.8604 | 0.8604 | 0.6119 | 0.6119 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.053 | 16.0 | 240 | 0.4846 | 1.0001 | 1.0001 | 0.8651 | 0.8651 | 0.6109 | 0.6109 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0573 | 17.0 | 255 | 0.4970 | 1.0128 | 1.0128 | 0.8743 | 0.8743 | 0.6010 | 0.6010 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0571 | 18.0 | 270 | 0.4803 | 0.9956 | 0.9956 | 0.8503 | 0.8503 | 0.6144 | 0.6144 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0483 | 19.0 | 285 | 0.4936 | 1.0093 | 1.0093 | 0.8740 | 0.8740 | 0.6037 | 0.6037 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0414 | 20.0 | 300 | 0.5138 | 1.0297 | 1.0297 | 0.8943 | 0.8943 | 0.5875 | 0.5875 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0513 | 21.0 | 315 | 0.5240 | 1.0399 | 1.0399 | 0.9050 | 0.9050 | 0.5793 | 0.5793 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0499 | 22.0 | 330 | 0.5275 | 1.0434 | 1.0434 | 0.9048 | 0.9048 | 0.5765 | 0.5765 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0423 | 23.0 | 345 | 0.5350 | 1.0508 | 1.0508 | 0.8872 | 0.8872 | 0.5705 | 0.5705 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0447 | 24.0 | 360 | 0.4963 | 1.0120 | 1.0120 | 0.8754 | 0.8754 | 0.6016 | 0.6016 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0364 | 25.0 | 375 | 0.5009 | 1.0167 | 1.0167 | 0.8809 | 0.8809 | 0.5979 | 0.5979 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0412 | 26.0 | 390 | 0.5060 | 1.0219 | 1.0219 | 0.8781 | 0.8781 | 0.5938 | 0.5938 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0297 | 27.0 | 405 | 0.5027 | 1.0185 | 1.0185 | 0.8838 | 0.8838 | 0.5964 | 0.5964 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0416 | 28.0 | 420 | 0.5071 | 1.0230 | 1.0230 | 0.8867 | 0.8867 | 0.5929 | 0.5929 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0327 | 29.0 | 435 | 0.5124 | 1.0283 | 1.0283 | 0.8883 | 0.8883 | 0.5887 | 0.5887 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0383 | 30.0 | 450 | 0.5128 | 1.0287 | 1.0287 | 0.8883 | 0.8883 | 0.5883 | 0.5883 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-bert-blame-victim | a73e482897190756414bb0a897c405bf9e5aacf7 | 2022-03-10T15:48:51.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-blame-victim | 339 | null | transformers | 2,750 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-blame-victim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-blame-victim
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5075
- Rmse: 0.4599
- Rmse Blame::a La vittima: 0.4599
- Mae: 0.3607
- Mae Blame::a La vittima: 0.3607
- R2: -0.1848
- R2 Blame::a La vittima: -0.1848
- Cos: 0.2174
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.2924
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a La vittima | Mae | Mae Blame::a La vittima | R2 | R2 Blame::a La vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0264 | 1.0 | 15 | 0.4334 | 0.4250 | 0.4250 | 0.3666 | 0.3666 | -0.0119 | -0.0119 | 0.1304 | 0.0 | 0.5 | 0.2703 | nan |
| 0.9814 | 2.0 | 30 | 0.4505 | 0.4333 | 0.4333 | 0.3744 | 0.3744 | -0.0517 | -0.0517 | 0.2174 | 0.0 | 0.5 | 0.2751 | nan |
| 0.9283 | 3.0 | 45 | 0.4349 | 0.4257 | 0.4257 | 0.3627 | 0.3627 | -0.0152 | -0.0152 | 0.1304 | 0.0 | 0.5 | 0.2779 | nan |
| 0.8904 | 4.0 | 60 | 0.4662 | 0.4408 | 0.4408 | 0.3773 | 0.3773 | -0.0884 | -0.0884 | -0.0435 | 0.0 | 0.5 | 0.2681 | nan |
| 0.836 | 5.0 | 75 | 0.4188 | 0.4177 | 0.4177 | 0.3609 | 0.3609 | 0.0223 | 0.0223 | 0.2174 | 0.0 | 0.5 | 0.3051 | nan |
| 0.8293 | 6.0 | 90 | 0.4142 | 0.4155 | 0.4155 | 0.3512 | 0.3512 | 0.0330 | 0.0330 | 0.2174 | 0.0 | 0.5 | 0.3220 | nan |
| 0.7629 | 7.0 | 105 | 0.3837 | 0.3999 | 0.3999 | 0.3387 | 0.3387 | 0.1041 | 0.1041 | 0.2174 | 0.0 | 0.5 | 0.3051 | nan |
| 0.7266 | 8.0 | 120 | 0.3664 | 0.3907 | 0.3907 | 0.3250 | 0.3250 | 0.1446 | 0.1446 | 0.3043 | 0.0 | 0.5 | 0.3409 | nan |
| 0.6121 | 9.0 | 135 | 0.3718 | 0.3936 | 0.3936 | 0.3312 | 0.3312 | 0.1320 | 0.1320 | 0.3043 | 0.0 | 0.5 | 0.3983 | nan |
| 0.5694 | 10.0 | 150 | 0.3679 | 0.3915 | 0.3915 | 0.3197 | 0.3197 | 0.1411 | 0.1411 | 0.3913 | 0.0 | 0.5 | 0.3518 | nan |
| 0.4647 | 11.0 | 165 | 0.3868 | 0.4015 | 0.4015 | 0.3340 | 0.3340 | 0.0970 | 0.0970 | 0.2174 | 0.0 | 0.5 | 0.3285 | nan |
| 0.4212 | 12.0 | 180 | 0.3717 | 0.3936 | 0.3936 | 0.3188 | 0.3188 | 0.1322 | 0.1322 | 0.3913 | 0.0 | 0.5 | 0.3518 | nan |
| 0.3605 | 13.0 | 195 | 0.3437 | 0.3784 | 0.3784 | 0.3066 | 0.3066 | 0.1976 | 0.1976 | 0.3043 | 0.0 | 0.5 | 0.3423 | nan |
| 0.2759 | 14.0 | 210 | 0.3892 | 0.4027 | 0.4027 | 0.3230 | 0.3230 | 0.0914 | 0.0914 | 0.3913 | 0.0 | 0.5 | 0.3518 | nan |
| 0.2868 | 15.0 | 225 | 0.3720 | 0.3937 | 0.3937 | 0.3218 | 0.3218 | 0.1315 | 0.1315 | 0.3913 | 0.0 | 0.5 | 0.3440 | nan |
| 0.2467 | 16.0 | 240 | 0.3881 | 0.4022 | 0.4022 | 0.3291 | 0.3291 | 0.0939 | 0.0939 | 0.3043 | 0.0 | 0.5 | 0.3363 | nan |
| 0.2013 | 17.0 | 255 | 0.4121 | 0.4144 | 0.4144 | 0.3373 | 0.3373 | 0.0380 | 0.0380 | 0.3043 | 0.0 | 0.5 | 0.3363 | nan |
| 0.1966 | 18.0 | 270 | 0.4808 | 0.4476 | 0.4476 | 0.3506 | 0.3506 | -0.1224 | -0.1224 | 0.3913 | 0.0 | 0.5 | 0.3214 | nan |
| 0.177 | 19.0 | 285 | 0.4263 | 0.4215 | 0.4215 | 0.3398 | 0.3398 | 0.0046 | 0.0046 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan |
| 0.1589 | 20.0 | 300 | 0.4274 | 0.4220 | 0.4220 | 0.3363 | 0.3363 | 0.0022 | 0.0022 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan |
| 0.1488 | 21.0 | 315 | 0.4548 | 0.4353 | 0.4353 | 0.3431 | 0.3431 | -0.0618 | -0.0618 | 0.3043 | 0.0 | 0.5 | 0.2924 | nan |
| 0.1428 | 22.0 | 330 | 0.4405 | 0.4285 | 0.4285 | 0.3417 | 0.3417 | -0.0285 | -0.0285 | 0.3043 | 0.0 | 0.5 | 0.3363 | nan |
| 0.1294 | 23.0 | 345 | 0.4955 | 0.4544 | 0.4544 | 0.3565 | 0.3565 | -0.1568 | -0.1568 | 0.3913 | 0.0 | 0.5 | 0.3440 | nan |
| 0.1291 | 24.0 | 360 | 0.4861 | 0.4501 | 0.4501 | 0.3529 | 0.3529 | -0.1348 | -0.1348 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan |
| 0.1187 | 25.0 | 375 | 0.4752 | 0.4450 | 0.4450 | 0.3518 | 0.3518 | -0.1095 | -0.1095 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan |
| 0.1141 | 26.0 | 390 | 0.5131 | 0.4624 | 0.4624 | 0.3598 | 0.3598 | -0.1978 | -0.1978 | 0.3043 | 0.0 | 0.5 | 0.2924 | nan |
| 0.1094 | 27.0 | 405 | 0.4863 | 0.4502 | 0.4502 | 0.3547 | 0.3547 | -0.1353 | -0.1353 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan |
| 0.0925 | 28.0 | 420 | 0.4900 | 0.4519 | 0.4519 | 0.3564 | 0.3564 | -0.1439 | -0.1439 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan |
| 0.108 | 29.0 | 435 | 0.5019 | 0.4573 | 0.4573 | 0.3590 | 0.3590 | -0.1719 | -0.1719 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan |
| 0.1054 | 30.0 | 450 | 0.5075 | 0.4599 | 0.4599 | 0.3607 | 0.3607 | -0.1848 | -0.1848 | 0.2174 | 0.0 | 0.5 | 0.2924 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-bert-focus-object | 4d12536579489e2152a762b8038809581d8a7526 | 2022-03-10T16:21:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-focus-object | 339 | null | transformers | 2,751 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-focus-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-focus-object
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2271
- Rmse: 0.5965
- Rmse Focus::a Su un oggetto: 0.5965
- Mae: 0.4372
- Mae Focus::a Su un oggetto: 0.4372
- R2: 0.4957
- R2 Focus::a Su un oggetto: 0.4957
- Cos: 0.6522
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.6622
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un oggetto | Mae | Mae Focus::a Su un oggetto | R2 | R2 Focus::a Su un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:------:|:-------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0371 | 1.0 | 15 | 0.4358 | 0.8263 | 0.8263 | 0.7132 | 0.7132 | 0.0323 | 0.0323 | 0.3043 | 0.0 | 0.5 | 0.3510 | nan |
| 0.9574 | 2.0 | 30 | 0.4420 | 0.8321 | 0.8321 | 0.7175 | 0.7175 | 0.0186 | 0.0186 | 0.3043 | 0.0 | 0.5 | 0.4627 | nan |
| 0.9137 | 3.0 | 45 | 0.4208 | 0.8119 | 0.8119 | 0.6955 | 0.6955 | 0.0657 | 0.0657 | 0.3913 | 0.0 | 0.5 | 0.3928 | nan |
| 0.8465 | 4.0 | 60 | 0.3356 | 0.7251 | 0.7251 | 0.6237 | 0.6237 | 0.2548 | 0.2548 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.6864 | 5.0 | 75 | 0.2876 | 0.6712 | 0.6712 | 0.5624 | 0.5624 | 0.3616 | 0.3616 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.5804 | 6.0 | 90 | 0.3148 | 0.7022 | 0.7022 | 0.5577 | 0.5577 | 0.3011 | 0.3011 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.4983 | 7.0 | 105 | 0.4068 | 0.7983 | 0.7983 | 0.6606 | 0.6606 | 0.0968 | 0.0968 | 0.3913 | 0.0 | 0.5 | 0.4519 | nan |
| 0.3584 | 8.0 | 120 | 0.2567 | 0.6342 | 0.6342 | 0.4883 | 0.4883 | 0.4300 | 0.4300 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2771 | 9.0 | 135 | 0.2130 | 0.5777 | 0.5777 | 0.4193 | 0.4193 | 0.5270 | 0.5270 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.2135 | 10.0 | 150 | 0.2522 | 0.6285 | 0.6285 | 0.4572 | 0.4572 | 0.4401 | 0.4401 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1654 | 11.0 | 165 | 0.2662 | 0.6457 | 0.6457 | 0.4603 | 0.4603 | 0.4090 | 0.4090 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1554 | 12.0 | 180 | 0.2459 | 0.6207 | 0.6207 | 0.4778 | 0.4778 | 0.4540 | 0.4540 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1195 | 13.0 | 195 | 0.2385 | 0.6113 | 0.6113 | 0.4618 | 0.4618 | 0.4704 | 0.4704 | 0.5652 | 0.0 | 0.5 | 0.5693 | nan |
| 0.1046 | 14.0 | 210 | 0.2296 | 0.5997 | 0.5997 | 0.4544 | 0.4544 | 0.4903 | 0.4903 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.089 | 15.0 | 225 | 0.2520 | 0.6283 | 0.6283 | 0.4974 | 0.4974 | 0.4404 | 0.4404 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.083 | 16.0 | 240 | 0.2297 | 0.5998 | 0.5998 | 0.4635 | 0.4635 | 0.4901 | 0.4901 | 0.5652 | 0.0 | 0.5 | 0.5610 | nan |
| 0.0701 | 17.0 | 255 | 0.2207 | 0.5879 | 0.5879 | 0.4442 | 0.4442 | 0.5101 | 0.5101 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0585 | 18.0 | 270 | 0.2397 | 0.6128 | 0.6128 | 0.4617 | 0.4617 | 0.4678 | 0.4678 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0652 | 19.0 | 285 | 0.2284 | 0.5981 | 0.5981 | 0.4449 | 0.4449 | 0.4929 | 0.4929 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.059 | 20.0 | 300 | 0.2491 | 0.6247 | 0.6247 | 0.4599 | 0.4599 | 0.4469 | 0.4469 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0464 | 21.0 | 315 | 0.2306 | 0.6010 | 0.6010 | 0.4373 | 0.4373 | 0.4880 | 0.4880 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0529 | 22.0 | 330 | 0.2370 | 0.6093 | 0.6093 | 0.4480 | 0.4480 | 0.4738 | 0.4738 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0555 | 23.0 | 345 | 0.2361 | 0.6082 | 0.6082 | 0.4474 | 0.4474 | 0.4757 | 0.4757 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0447 | 24.0 | 360 | 0.2283 | 0.5980 | 0.5980 | 0.4399 | 0.4399 | 0.4932 | 0.4932 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.046 | 25.0 | 375 | 0.2259 | 0.5948 | 0.5948 | 0.4413 | 0.4413 | 0.4985 | 0.4985 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0379 | 26.0 | 390 | 0.2263 | 0.5953 | 0.5953 | 0.4402 | 0.4402 | 0.4977 | 0.4977 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0438 | 27.0 | 405 | 0.2270 | 0.5963 | 0.5963 | 0.4378 | 0.4378 | 0.4961 | 0.4961 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0354 | 28.0 | 420 | 0.2211 | 0.5886 | 0.5886 | 0.4379 | 0.4379 | 0.5090 | 0.5090 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0363 | 29.0 | 435 | 0.2269 | 0.5962 | 0.5962 | 0.4362 | 0.4362 | 0.4961 | 0.4961 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0451 | 30.0 | 450 | 0.2271 | 0.5965 | 0.5965 | 0.4372 | 0.4372 | 0.4957 | 0.4957 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-blame-object | 6700d00ae2216915a77bcfbf253917599a7998ff | 2022-03-15T22:42:55.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-blame-object | 339 | null | transformers | 2,752 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-object
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7219
- Rmse: 0.6215
- Rmse Blame::a Un oggetto: 0.6215
- Mae: 0.4130
- Mae Blame::a Un oggetto: 0.4130
- R2: 0.1200
- R2 Blame::a Un oggetto: 0.1200
- Cos: 0.3043
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.4335
- Rsa: nan
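
A minimal inference sketch is shown below. It is an assumption-laden illustration rather than documented usage: it presumes the checkpoint exposes a standard sequence-classification head with a single regression output (as the RMSE/MAE/R2 metrics above suggest), and the input sentence is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "responsibility-framing/predict-perception-xlmr-blame-object"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Placeholder Italian sentence; replace with the text whose perception should be scored.
inputs = tokenizer("Un esempio di frase da valutare.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```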
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un oggetto | Mae | Mae Blame::a Un oggetto | R2 | R2 Blame::a Un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0279 | 1.0 | 15 | 0.8483 | 0.6737 | 0.6737 | 0.4761 | 0.4761 | -0.0341 | -0.0341 | -0.3043 | 0.0 | 0.5 | 0.5507 | nan |
| 1.0676 | 2.0 | 30 | 0.7749 | 0.6439 | 0.6439 | 0.4291 | 0.4291 | 0.0554 | 0.0554 | 0.0435 | 0.0 | 0.5 | 0.2614 | nan |
| 0.9563 | 3.0 | 45 | 0.7765 | 0.6446 | 0.6446 | 0.4349 | 0.4349 | 0.0535 | 0.0535 | -0.0435 | 0.0 | 0.5 | 0.4515 | nan |
| 0.9622 | 4.0 | 60 | 0.7443 | 0.6311 | 0.6311 | 0.4061 | 0.4061 | 0.0927 | 0.0927 | 0.1304 | 0.0 | 0.5 | 0.2933 | nan |
| 0.948 | 5.0 | 75 | 0.8071 | 0.6571 | 0.6571 | 0.3817 | 0.3817 | 0.0162 | 0.0162 | 0.3043 | 0.0 | 0.5 | 0.4207 | nan |
| 0.9532 | 6.0 | 90 | 0.8007 | 0.6546 | 0.6546 | 0.4585 | 0.4585 | 0.0239 | 0.0239 | -0.0435 | 0.0 | 0.5 | 0.5507 | nan |
| 0.9101 | 7.0 | 105 | 0.7126 | 0.6175 | 0.6175 | 0.3649 | 0.3649 | 0.1313 | 0.1313 | 0.4783 | 0.0 | 0.5 | 0.6012 | nan |
| 0.8369 | 8.0 | 120 | 0.7194 | 0.6204 | 0.6204 | 0.3896 | 0.3896 | 0.1231 | 0.1231 | 0.3913 | 0.0 | 0.5 | 0.3494 | nan |
| 0.8062 | 9.0 | 135 | 0.7157 | 0.6188 | 0.6188 | 0.4192 | 0.4192 | 0.1275 | 0.1275 | 0.0435 | 0.0 | 0.5 | 0.3182 | nan |
| 0.7344 | 10.0 | 150 | 0.7161 | 0.6190 | 0.6190 | 0.3612 | 0.3612 | 0.1270 | 0.1270 | 0.3043 | 0.0 | 0.5 | 0.6035 | nan |
| 0.7439 | 11.0 | 165 | 0.5894 | 0.5616 | 0.5616 | 0.3723 | 0.3723 | 0.2816 | 0.2816 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.6241 | 12.0 | 180 | 0.7087 | 0.6158 | 0.6158 | 0.3972 | 0.3972 | 0.1361 | 0.1361 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.6123 | 13.0 | 195 | 0.6318 | 0.5814 | 0.5814 | 0.3673 | 0.3673 | 0.2298 | 0.2298 | 0.3913 | 0.0 | 0.5 | 0.4413 | nan |
| 0.5364 | 14.0 | 210 | 0.6504 | 0.5899 | 0.5899 | 0.3674 | 0.3674 | 0.2072 | 0.2072 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.5586 | 15.0 | 225 | 0.7151 | 0.6186 | 0.6186 | 0.3850 | 0.3850 | 0.1283 | 0.1283 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.5133 | 16.0 | 240 | 0.5572 | 0.5460 | 0.5460 | 0.3540 | 0.3540 | 0.3208 | 0.3208 | 0.4783 | 0.0 | 0.5 | 0.5314 | nan |
| 0.4193 | 17.0 | 255 | 0.6047 | 0.5688 | 0.5688 | 0.3710 | 0.3710 | 0.2629 | 0.2629 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3504 | 18.0 | 270 | 0.6103 | 0.5714 | 0.5714 | 0.3687 | 0.3687 | 0.2561 | 0.2561 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3328 | 19.0 | 285 | 0.6181 | 0.5751 | 0.5751 | 0.3915 | 0.3915 | 0.2466 | 0.2466 | 0.4783 | 0.0 | 0.5 | 0.5314 | nan |
| 0.3276 | 20.0 | 300 | 0.6334 | 0.5822 | 0.5822 | 0.3612 | 0.3612 | 0.2279 | 0.2279 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3271 | 21.0 | 315 | 0.6200 | 0.5760 | 0.5760 | 0.3827 | 0.3827 | 0.2442 | 0.2442 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.3139 | 22.0 | 330 | 0.6332 | 0.5821 | 0.5821 | 0.3723 | 0.3723 | 0.2281 | 0.2281 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2872 | 23.0 | 345 | 0.6694 | 0.5985 | 0.5985 | 0.3966 | 0.3966 | 0.1840 | 0.1840 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3617 | 24.0 | 360 | 0.7022 | 0.6130 | 0.6130 | 0.4061 | 0.4061 | 0.1440 | 0.1440 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3227 | 25.0 | 375 | 0.7364 | 0.6277 | 0.6277 | 0.4205 | 0.4205 | 0.1024 | 0.1024 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.256 | 26.0 | 390 | 0.6938 | 0.6093 | 0.6093 | 0.3833 | 0.3833 | 0.1543 | 0.1543 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2605 | 27.0 | 405 | 0.7221 | 0.6216 | 0.6216 | 0.4036 | 0.4036 | 0.1198 | 0.1198 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.2558 | 28.0 | 420 | 0.6959 | 0.6102 | 0.6102 | 0.3859 | 0.3859 | 0.1518 | 0.1518 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2403 | 29.0 | 435 | 0.7152 | 0.6186 | 0.6186 | 0.4088 | 0.4088 | 0.1281 | 0.1281 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3263 | 30.0 | 450 | 0.7219 | 0.6215 | 0.6215 | 0.4130 | 0.4130 | 0.1200 | 0.1200 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-cause-concept | ff201f4a385b0e7e2ce6ca07e89af4d863508a88 | 2022-03-15T23:38:55.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-cause-concept | 339 | null | transformers | 2,753 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-cause-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-cause-concept
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3933
- Rmse: 0.5992
- Rmse Cause::a Causata da un concetto astratto (es. gelosia): 0.5992
- Mae: 0.4566
- Mae Cause::a Causata da un concetto astratto (es. gelosia): 0.4566
- R2: 0.5588
- R2 Cause::a Causata da un concetto astratto (es. gelosia): 0.5588
- Cos: 0.3043
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.4340
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un concetto astratto (es. gelosia) | Mae | Mae Cause::a Causata da un concetto astratto (es. gelosia) | R2 | R2 Cause::a Causata da un concetto astratto (es. gelosia) | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------------:|:------:|:----------------------------------------------------------:|:-------:|:---------------------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0114 | 1.0 | 15 | 0.9088 | 0.9109 | 0.9109 | 0.6455 | 0.6455 | -0.0195 | -0.0195 | -0.0435 | 0.0 | 0.5 | 0.4027 | nan |
| 1.0 | 2.0 | 30 | 0.8833 | 0.8980 | 0.8980 | 0.6104 | 0.6104 | 0.0090 | 0.0090 | 0.2174 | 0.0 | 0.5 | 0.3681 | nan |
| 0.9533 | 3.0 | 45 | 0.8453 | 0.8785 | 0.8785 | 0.6072 | 0.6072 | 0.0517 | 0.0517 | 0.1304 | 0.0 | 0.5 | 0.3748 | nan |
| 0.9113 | 4.0 | 60 | 0.7797 | 0.8437 | 0.8437 | 0.6024 | 0.6024 | 0.1253 | 0.1253 | 0.0435 | 0.0 | 0.5 | 0.3028 | nan |
| 0.8312 | 5.0 | 75 | 0.5756 | 0.7249 | 0.7249 | 0.5128 | 0.5128 | 0.3542 | 0.3542 | 0.4783 | 0.0 | 0.5 | 0.4572 | nan |
| 0.7224 | 6.0 | 90 | 0.4977 | 0.6741 | 0.6741 | 0.5114 | 0.5114 | 0.4416 | 0.4416 | 0.2174 | 0.0 | 0.5 | 0.4009 | nan |
| 0.5789 | 7.0 | 105 | 0.6338 | 0.7607 | 0.7607 | 0.5059 | 0.5059 | 0.2889 | 0.2889 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.4978 | 8.0 | 120 | 0.3342 | 0.5524 | 0.5524 | 0.4298 | 0.4298 | 0.6250 | 0.6250 | 0.2174 | 0.0 | 0.5 | 0.4274 | nan |
| 0.4572 | 9.0 | 135 | 0.3210 | 0.5413 | 0.5413 | 0.4343 | 0.4343 | 0.6399 | 0.6399 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.3346 | 10.0 | 150 | 0.3456 | 0.5617 | 0.5617 | 0.4198 | 0.4198 | 0.6123 | 0.6123 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.3046 | 11.0 | 165 | 0.3840 | 0.5921 | 0.5921 | 0.4312 | 0.4312 | 0.5692 | 0.5692 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.3035 | 12.0 | 180 | 0.3929 | 0.5989 | 0.5989 | 0.4147 | 0.4147 | 0.5592 | 0.5592 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.2199 | 13.0 | 195 | 0.3165 | 0.5376 | 0.5376 | 0.4065 | 0.4065 | 0.6449 | 0.6449 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.2376 | 14.0 | 210 | 0.3108 | 0.5326 | 0.5326 | 0.3937 | 0.3937 | 0.6514 | 0.6514 | 0.3913 | 0.0 | 0.5 | 0.4286 | nan |
| 0.1639 | 15.0 | 225 | 0.3645 | 0.5769 | 0.5769 | 0.4094 | 0.4094 | 0.5911 | 0.5911 | 0.3913 | 0.0 | 0.5 | 0.4286 | nan |
| 0.1884 | 16.0 | 240 | 0.3762 | 0.5860 | 0.5860 | 0.4398 | 0.4398 | 0.5779 | 0.5779 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.1767 | 17.0 | 255 | 0.3805 | 0.5894 | 0.5894 | 0.4540 | 0.4540 | 0.5732 | 0.5732 | 0.2174 | 0.0 | 0.5 | 0.4298 | nan |
| 0.1329 | 18.0 | 270 | 0.3555 | 0.5697 | 0.5697 | 0.4281 | 0.4281 | 0.6011 | 0.6011 | 0.2174 | 0.0 | 0.5 | 0.4298 | nan |
| 0.1834 | 19.0 | 285 | 0.4337 | 0.6292 | 0.6292 | 0.4402 | 0.4402 | 0.5135 | 0.5135 | 0.3913 | 0.0 | 0.5 | 0.4286 | nan |
| 0.1538 | 20.0 | 300 | 0.3554 | 0.5696 | 0.5696 | 0.4236 | 0.4236 | 0.6013 | 0.6013 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.1459 | 21.0 | 315 | 0.3592 | 0.5726 | 0.5726 | 0.4348 | 0.4348 | 0.5971 | 0.5971 | 0.3043 | 0.0 | 0.5 | 0.4066 | nan |
| 0.1038 | 22.0 | 330 | 0.3732 | 0.5837 | 0.5837 | 0.4382 | 0.4382 | 0.5813 | 0.5813 | 0.3913 | 0.0 | 0.5 | 0.4664 | nan |
| 0.1432 | 23.0 | 345 | 0.3635 | 0.5760 | 0.5760 | 0.4394 | 0.4394 | 0.5922 | 0.5922 | 0.3913 | 0.0 | 0.5 | 0.4664 | nan |
| 0.1354 | 24.0 | 360 | 0.4359 | 0.6308 | 0.6308 | 0.4793 | 0.4793 | 0.5110 | 0.5110 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.1404 | 25.0 | 375 | 0.3919 | 0.5982 | 0.5982 | 0.4650 | 0.4650 | 0.5603 | 0.5603 | 0.3913 | 0.0 | 0.5 | 0.4664 | nan |
| 0.103 | 26.0 | 390 | 0.4223 | 0.6209 | 0.6209 | 0.4691 | 0.4691 | 0.5263 | 0.5263 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.1733 | 27.0 | 405 | 0.3972 | 0.6021 | 0.6021 | 0.4591 | 0.4591 | 0.5544 | 0.5544 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.1019 | 28.0 | 420 | 0.3958 | 0.6011 | 0.6011 | 0.4593 | 0.4593 | 0.5559 | 0.5559 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.1076 | 29.0 | 435 | 0.4015 | 0.6054 | 0.6054 | 0.4589 | 0.4589 | 0.5496 | 0.5496 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
| 0.0999 | 30.0 | 450 | 0.3933 | 0.5992 | 0.5992 | 0.4566 | 0.4566 | 0.5588 | 0.5588 | 0.3043 | 0.0 | 0.5 | 0.4340 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
danyaljj/gpt2_question_answering_squad2 | 631c9eb1862b1218724a63c70b9facbc2542108d | 2021-06-17T17:49:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | danyaljj | null | danyaljj/gpt2_question_answering_squad2 | 338 | null | transformers | 2,754 | Sample usage:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("danyaljj/gpt2_question_answering_squad2")
input_ids = tokenizer.encode("There are two apples on the counter. Q: How many apples? A:", return_tensors="pt")
outputs = model.generate(input_ids)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Which should produce this:
```
Generated: There are two apples on the counter. Q: How many apples? A: two
``` |
flax-community/gpt2-medium-persian | 5810babdec1f4c68888f2d80a7c2ab6e8aeb6fe0 | 2021-07-16T13:01:08.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"fa",
"transformers"
] | text-generation | false | flax-community | null | flax-community/gpt2-medium-persian | 338 | null | transformers | 2,755 | ---
language: fa
tags:
- text-generation
widget:
- text: "در یک اتفاق شگفت انگیز، پژوهشگران"
- text: "گرفتگی بینی در کودکان و بهخصوص نوزادان باعث میشود"
- text: "امیدواریم نوروز امسال سالی"
---
# GPT2 Medium 4 Persian
> This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-gpt2-from-scratch-in-persian/7560), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
## Team Members
- [Mehrdad Farahani](https://huggingface.co/m3hrdadfi)
- [Saied Alimoradi](https://discuss.huggingface.co/u/saied)
- [M. Reza Zerehpoosh](https://huggingface.co/ironcladgeek)
- [Hooman Sedghamiz](https://discuss.huggingface.co/u/hooman650)
- [Mazeyar Moeini Feizabadi](https://discuss.huggingface.co/u/mazy1998)
## Dataset
We used the [Oscar](https://huggingface.co/datasets/oscar) dataset, a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus.
## How To Use
You can use this model directly with a pipeline for text generation.
```python
from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel
tokenizer = AutoTokenizer.from_pretrained('flax-community/gpt2-medium-persian')
model = GPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-persian')
generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100})
generated_text = generator('در یک اتفاق شگفت انگیز، پژوهشگران')
```
When using TensorFlow, import TFGPT2LMHeadModel instead of GPT2LMHeadModel (see the sketch below).
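
A minimal TensorFlow sketch, assuming the TF weights in this repo load directly; it mirrors the PyTorch example above rather than reproducing an official snippet:

```python
from transformers import AutoTokenizer, TFGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained('flax-community/gpt2-medium-persian')
model = TFGPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-persian')

# Same Persian prompt as the PyTorch example above.
inputs = tokenizer('در یک اتفاق شگفت انگیز، پژوهشگران', return_tensors='tf')
outputs = model.generate(**inputs, max_length=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```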
## Demo
... SOON
## Evaluation
... SOON |
huggingtweets/_holyweather | 3fdac1ef6a2efe8ea9bcaabd6c911564bdb93e53 | 2021-05-21T17:05:00.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/_holyweather | 338 | null | transformers | 2,756 | ---
language: en
thumbnail: https://www.huggingtweets.com/_holyweather/1616723668078/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374991670681333762/u08Y3tfI_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">💎holyweather🌞 🤖 AI Bot </div>
<div style="font-size: 15px">@_holyweather bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@_holyweather's tweets](https://twitter.com/_holyweather).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 118 |
| Short tweets | 436 |
| Tweets kept | 2694 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qxgxfui/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_holyweather's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mw1gjki) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mw1gjki/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_holyweather')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
junnyu/ChineseBERT-base | a25dde763381455083c42f923e21ac4f336de317 | 2022-03-12T03:05:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2106.16038",
"transformers",
"glycebert",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/ChineseBERT-base | 338 | null | transformers | 2,757 | ---
language: zh
tags:
- glycebert
inference: False
---
# https://github.com/JunnYu/ChineseBert_pytorch
# ChineseBert_pytorch
This project mainly provides a custom `ChineseBertTokenizerFast` implementation in `tokenization_chinesebert_fast.py`, so the tokenizer can be loaded directly from huggingface.co.
```python
from chinesebert import ChineseBertTokenizerFast

pretrained_tokenizer_name = "junnyu/ChineseBERT-base"
tokenizer = ChineseBertTokenizerFast.from_pretrained(pretrained_tokenizer_name)
```
# Paper
**[ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://arxiv.org/pdf/2106.16038.pdf)**
*Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu and Jiwei Li*
# Install
```bash
pip install chinesebert
or
pip install git+https://github.com/JunnYu/ChineseBert_pytorch.git
```
# Usage
```python
import torch
from chinesebert import ChineseBertForMaskedLM, ChineseBertTokenizerFast, ChineseBertConfig
pretrained_model_name = "junnyu/ChineseBERT-base"
tokenizer = ChineseBertTokenizerFast.from_pretrained(pretrained_model_name)
chinese_bert = ChineseBertForMaskedLM.from_pretrained(pretrained_model_name)
text = "北京是[MASK]国的首都。"
inputs = tokenizer(text, return_tensors="pt")
print(inputs)
maskpos = 4
with torch.no_grad():
o = chinese_bert(**inputs)
value, index = o.logits.softmax(-1)[0, maskpos].topk(10)
pred_tokens = tokenizer.convert_ids_to_tokens(index.tolist())
pred_values = value.tolist()
outputs = []
for t, p in zip(pred_tokens, pred_values):
outputs.append(f"{t}|{round(p,4)}")
print(outputs)
# base ['中|0.711', '我|0.2488', '祖|0.016', '法|0.0057', '美|0.0048', '全|0.0042', '韩|0.0015', '英|0.0011', '两|0.0008', '王|0.0006']
# large ['中|0.8341', '我|0.1479', '祖|0.0157', '全|0.0007', '国|0.0005', '帝|0.0001', '该|0.0001', '法|0.0001', '一|0.0001', '咱|0.0001']
```
# Reference
https://github.com/ShannonAI/ChineseBert
|
responsibility-framing/predict-perception-bert-blame-concept | 4c220a1ce5a014008cab969c0b2462d66871c639 | 2022-03-10T15:54:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-blame-concept | 338 | null | transformers | 2,758 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-blame-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-blame-concept
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7359
- Rmse: 0.6962
- Rmse Blame::a Un concetto astratto o un'emozione: 0.6962
- Mae: 0.5010
- Mae Blame::a Un concetto astratto o un'emozione: 0.5010
- R2: 0.3974
- R2 Blame::a Un concetto astratto o un'emozione: 0.3974
- Cos: 0.3913
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5507
- Rsa: nan
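A minimal inference sketch, assuming the checkpoint exposes a single-output sequence-classification head used as a regressor (which the RMSE/MAE/R² metrics above suggest); the placeholder text should be replaced with an Italian sentence to score:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "responsibility-framing/predict-perception-bert-blame-concept"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # placeholder: an Italian sentence describing the event to rate
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # assumed shape (1, 1): one regression output
print(logits.squeeze().item())
```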
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un concetto astratto o un'emozione | Mae | Mae Blame::a Un concetto astratto o un'emozione | R2 | R2 Blame::a Un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------:|:------:|:-----------------------------------------------:|:-------:|:----------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0979 | 1.0 | 15 | 1.2387 | 0.9033 | 0.9033 | 0.6603 | 0.6603 | -0.0144 | -0.0144 | 0.0435 | 0.0 | 0.5 | 0.3432 | nan |
| 1.0172 | 2.0 | 30 | 1.1498 | 0.8703 | 0.8703 | 0.5964 | 0.5964 | 0.0584 | 0.0584 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.9879 | 3.0 | 45 | 1.2139 | 0.8942 | 0.8942 | 0.6197 | 0.6197 | 0.0060 | 0.0060 | 0.2174 | 0.0 | 0.5 | 0.4582 | nan |
| 0.9723 | 4.0 | 60 | 1.1152 | 0.8571 | 0.8571 | 0.5982 | 0.5982 | 0.0867 | 0.0867 | 0.2174 | 0.0 | 0.5 | 0.3921 | nan |
| 0.9584 | 5.0 | 75 | 1.0607 | 0.8358 | 0.8358 | 0.5959 | 0.5959 | 0.1314 | 0.1314 | 0.0435 | 0.0 | 0.5 | 0.4165 | nan |
| 0.9023 | 6.0 | 90 | 1.0031 | 0.8128 | 0.8128 | 0.5827 | 0.5827 | 0.1786 | 0.1786 | -0.0435 | 0.0 | 0.5 | 0.3862 | nan |
| 0.8745 | 7.0 | 105 | 0.9715 | 0.7999 | 0.7999 | 0.5796 | 0.5796 | 0.2044 | 0.2044 | 0.3043 | 0.0 | 0.5 | 0.3665 | nan |
| 0.8082 | 8.0 | 120 | 0.8984 | 0.7692 | 0.7692 | 0.5699 | 0.5699 | 0.2643 | 0.2643 | 0.1304 | 0.0 | 0.5 | 0.3390 | nan |
| 0.7475 | 9.0 | 135 | 0.8532 | 0.7497 | 0.7497 | 0.5849 | 0.5849 | 0.3013 | 0.3013 | 0.0435 | 0.0 | 0.5 | 0.3100 | nan |
| 0.6599 | 10.0 | 150 | 0.8737 | 0.7586 | 0.7586 | 0.5822 | 0.5822 | 0.2846 | 0.2846 | 0.3043 | 0.0 | 0.5 | 0.3830 | nan |
| 0.5867 | 11.0 | 165 | 0.8159 | 0.7331 | 0.7331 | 0.5752 | 0.5752 | 0.3318 | 0.3318 | 0.2174 | 0.0 | 0.5 | 0.4439 | nan |
| 0.5081 | 12.0 | 180 | 0.8367 | 0.7424 | 0.7424 | 0.6071 | 0.6071 | 0.3148 | 0.3148 | 0.0435 | 0.0 | 0.5 | 0.3561 | nan |
| 0.4801 | 13.0 | 195 | 0.8353 | 0.7417 | 0.7417 | 0.5567 | 0.5567 | 0.3160 | 0.3160 | 0.3913 | 0.0 | 0.5 | 0.5850 | nan |
| 0.3714 | 14.0 | 210 | 0.8050 | 0.7282 | 0.7282 | 0.5824 | 0.5824 | 0.3408 | 0.3408 | 0.1304 | 0.0 | 0.5 | 0.3975 | nan |
| 0.3306 | 15.0 | 225 | 0.7833 | 0.7183 | 0.7183 | 0.5570 | 0.5570 | 0.3585 | 0.3585 | 0.2174 | 0.0 | 0.5 | 0.4604 | nan |
| 0.2674 | 16.0 | 240 | 0.8148 | 0.7326 | 0.7326 | 0.5475 | 0.5475 | 0.3328 | 0.3328 | 0.3043 | 0.0 | 0.5 | 0.4891 | nan |
| 0.2129 | 17.0 | 255 | 0.8715 | 0.7576 | 0.7576 | 0.5537 | 0.5537 | 0.2863 | 0.2863 | 0.4783 | 0.0 | 0.5 | 0.5017 | nan |
| 0.1924 | 18.0 | 270 | 0.7944 | 0.7234 | 0.7234 | 0.5276 | 0.5276 | 0.3495 | 0.3495 | 0.4783 | 0.0 | 0.5 | 0.5797 | nan |
| 0.1984 | 19.0 | 285 | 0.7885 | 0.7207 | 0.7207 | 0.5208 | 0.5208 | 0.3543 | 0.3543 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.1623 | 20.0 | 300 | 0.7682 | 0.7113 | 0.7113 | 0.5132 | 0.5132 | 0.3709 | 0.3709 | 0.4783 | 0.0 | 0.5 | 0.5797 | nan |
| 0.1409 | 21.0 | 315 | 0.7653 | 0.7100 | 0.7100 | 0.5215 | 0.5215 | 0.3733 | 0.3733 | 0.3043 | 0.0 | 0.5 | 0.5415 | nan |
| 0.1386 | 22.0 | 330 | 0.7688 | 0.7116 | 0.7116 | 0.5124 | 0.5124 | 0.3704 | 0.3704 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.123 | 23.0 | 345 | 0.7756 | 0.7148 | 0.7148 | 0.5144 | 0.5144 | 0.3648 | 0.3648 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.1175 | 24.0 | 360 | 0.7423 | 0.6993 | 0.6993 | 0.5015 | 0.5015 | 0.3921 | 0.3921 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.1188 | 25.0 | 375 | 0.7255 | 0.6913 | 0.6913 | 0.5063 | 0.5063 | 0.4059 | 0.4059 | 0.2174 | 0.0 | 0.5 | 0.4604 | nan |
| 0.1155 | 26.0 | 390 | 0.7635 | 0.7091 | 0.7091 | 0.5083 | 0.5083 | 0.3748 | 0.3748 | 0.4783 | 0.0 | 0.5 | 0.5797 | nan |
| 0.0981 | 27.0 | 405 | 0.7128 | 0.6852 | 0.6852 | 0.5020 | 0.5020 | 0.4163 | 0.4163 | 0.3043 | 0.0 | 0.5 | 0.5415 | nan |
| 0.1109 | 28.0 | 420 | 0.7430 | 0.6996 | 0.6996 | 0.5023 | 0.5023 | 0.3915 | 0.3915 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.1081 | 29.0 | 435 | 0.7367 | 0.6966 | 0.6966 | 0.5007 | 0.5007 | 0.3967 | 0.3967 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
| 0.0953 | 30.0 | 450 | 0.7359 | 0.6962 | 0.6962 | 0.5010 | 0.5010 | 0.3974 | 0.3974 | 0.3913 | 0.0 | 0.5 | 0.5507 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-bert-focus-assassin | 5ecde3fcef2d5225231b7e1933a3835f9e044696 | 2022-03-10T16:13:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-focus-assassin | 338 | null | transformers | 2,759 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-focus-assassin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-focus-assassin
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2964
- Rmse: 0.8992
- Rmse Focus::a Sull'assassino: 0.8992
- Mae: 0.7331
- Mae Focus::a Sull'assassino: 0.7331
- R2: 0.6500
- R2 Focus::a Sull'assassino: 0.6500
- Cos: 0.7391
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.6131
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sull'assassino | Mae | Mae Focus::a Sull'assassino | R2 | R2 Focus::a Sull'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------:|:------:|:---------------------------:|:-------:|:--------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0674 | 1.0 | 15 | 0.9851 | 1.6393 | 1.6393 | 1.5316 | 1.5316 | -0.1633 | -0.1633 | 0.1304 | 0.0 | 0.5 | 0.2457 | nan |
| 1.0099 | 2.0 | 30 | 0.8921 | 1.5601 | 1.5601 | 1.4317 | 1.4317 | -0.0535 | -0.0535 | 0.5652 | 0.0 | 0.5 | 0.4734 | nan |
| 0.9295 | 3.0 | 45 | 0.7345 | 1.4155 | 1.4155 | 1.3113 | 1.3113 | 0.1327 | 0.1327 | 0.5652 | 0.0 | 0.5 | 0.3596 | nan |
| 0.8485 | 4.0 | 60 | 0.7282 | 1.4094 | 1.4094 | 1.2678 | 1.2678 | 0.1401 | 0.1401 | 0.7391 | 0.0 | 0.5 | 0.5367 | nan |
| 0.7551 | 5.0 | 75 | 0.5966 | 1.2758 | 1.2758 | 1.1144 | 1.1144 | 0.2955 | 0.2955 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan |
| 0.5563 | 6.0 | 90 | 0.4578 | 1.1175 | 1.1175 | 0.9105 | 0.9105 | 0.4594 | 0.4594 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan |
| 0.4048 | 7.0 | 105 | 0.3539 | 0.9826 | 0.9826 | 0.7770 | 0.7770 | 0.5821 | 0.5821 | 0.6522 | 0.0 | 0.5 | 0.5522 | nan |
| 0.3319 | 8.0 | 120 | 0.2938 | 0.8953 | 0.8953 | 0.7110 | 0.7110 | 0.6530 | 0.6530 | 0.6522 | 0.0 | 0.5 | 0.6021 | nan |
| 0.2224 | 9.0 | 135 | 0.3455 | 0.9708 | 0.9708 | 0.7607 | 0.7607 | 0.5921 | 0.5921 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan |
| 0.1794 | 10.0 | 150 | 0.2719 | 0.8612 | 0.8612 | 0.6768 | 0.6768 | 0.6790 | 0.6790 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1553 | 11.0 | 165 | 0.2855 | 0.8826 | 0.8826 | 0.7053 | 0.7053 | 0.6628 | 0.6628 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1008 | 12.0 | 180 | 0.3000 | 0.9046 | 0.9046 | 0.7255 | 0.7255 | 0.6458 | 0.6458 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.1121 | 13.0 | 195 | 0.2817 | 0.8766 | 0.8766 | 0.7236 | 0.7236 | 0.6674 | 0.6674 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.08 | 14.0 | 210 | 0.3504 | 0.9777 | 0.9777 | 0.7631 | 0.7631 | 0.5863 | 0.5863 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0802 | 15.0 | 225 | 0.3031 | 0.9094 | 0.9094 | 0.7565 | 0.7565 | 0.6420 | 0.6420 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0685 | 16.0 | 240 | 0.3041 | 0.9109 | 0.9109 | 0.7409 | 0.7409 | 0.6408 | 0.6408 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0592 | 17.0 | 255 | 0.3496 | 0.9767 | 0.9767 | 0.7812 | 0.7812 | 0.5871 | 0.5871 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0625 | 18.0 | 270 | 0.3260 | 0.9430 | 0.9430 | 0.7757 | 0.7757 | 0.6151 | 0.6151 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0589 | 19.0 | 285 | 0.3118 | 0.9222 | 0.9222 | 0.7442 | 0.7442 | 0.6318 | 0.6318 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0518 | 20.0 | 300 | 0.3062 | 0.9140 | 0.9140 | 0.7459 | 0.7459 | 0.6384 | 0.6384 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0456 | 21.0 | 315 | 0.3200 | 0.9344 | 0.9344 | 0.7592 | 0.7592 | 0.6221 | 0.6221 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0477 | 22.0 | 330 | 0.3132 | 0.9244 | 0.9244 | 0.7532 | 0.7532 | 0.6301 | 0.6301 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0448 | 23.0 | 345 | 0.3006 | 0.9056 | 0.9056 | 0.7321 | 0.7321 | 0.6450 | 0.6450 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.0494 | 24.0 | 360 | 0.2985 | 0.9024 | 0.9024 | 0.7463 | 0.7463 | 0.6475 | 0.6475 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0369 | 25.0 | 375 | 0.3039 | 0.9105 | 0.9105 | 0.7359 | 0.7359 | 0.6412 | 0.6412 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0456 | 26.0 | 390 | 0.2989 | 0.9030 | 0.9030 | 0.7210 | 0.7210 | 0.6471 | 0.6471 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.044 | 27.0 | 405 | 0.2997 | 0.9042 | 0.9042 | 0.7418 | 0.7418 | 0.6461 | 0.6461 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0352 | 28.0 | 420 | 0.2970 | 0.9001 | 0.9001 | 0.7346 | 0.7346 | 0.6493 | 0.6493 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0429 | 29.0 | 435 | 0.2970 | 0.9001 | 0.9001 | 0.7281 | 0.7281 | 0.6493 | 0.6493 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0378 | 30.0 | 450 | 0.2964 | 0.8992 | 0.8992 | 0.7331 | 0.7331 | 0.6500 | 0.6500 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-bert-focus-victim | 266e4dae74ce684b66d1c33767e12c08af74f0df | 2022-03-10T16:18:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-bert-focus-victim | 338 | null | transformers | 2,760 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bert-focus-victim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-focus-victim
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2466
- Rmse: 0.6201
- Rmse Focus::a Sulla vittima: 0.6201
- Mae: 0.4936
- Mae Focus::a Sulla vittima: 0.4936
- R2: 0.7293
- R2 Focus::a Sulla vittima: 0.7293
- Cos: 0.8261
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.8155
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sulla vittima | Mae | Mae Focus::a Sulla vittima | R2 | R2 Focus::a Sulla vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0247 | 1.0 | 15 | 1.0286 | 1.2665 | 1.2665 | 1.0280 | 1.0280 | -0.1292 | -0.1292 | 0.1304 | 0.0 | 0.5 | 0.3685 | nan |
| 0.9912 | 2.0 | 30 | 1.0039 | 1.2512 | 1.2512 | 1.0347 | 1.0347 | -0.1020 | -0.1020 | 0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.9147 | 3.0 | 45 | 0.9338 | 1.2067 | 1.2067 | 0.9770 | 0.9770 | -0.0251 | -0.0251 | 0.1304 | 0.0 | 0.5 | 0.3685 | nan |
| 0.8194 | 4.0 | 60 | 0.7641 | 1.0916 | 1.0916 | 0.8476 | 0.8476 | 0.1612 | 0.1612 | 0.4783 | 0.0 | 0.5 | 0.5284 | nan |
| 0.6636 | 5.0 | 75 | 0.6618 | 1.0159 | 1.0159 | 0.8012 | 0.8012 | 0.2735 | 0.2735 | 0.6522 | 0.0 | 0.5 | 0.4741 | nan |
| 0.523 | 6.0 | 90 | 0.5176 | 0.8984 | 0.8984 | 0.7044 | 0.7044 | 0.4318 | 0.4318 | 0.6522 | 0.0 | 0.5 | 0.4741 | nan |
| 0.402 | 7.0 | 105 | 0.3804 | 0.7702 | 0.7702 | 0.6042 | 0.6042 | 0.5824 | 0.5824 | 0.6522 | 0.0 | 0.5 | 0.5395 | nan |
| 0.3401 | 8.0 | 120 | 0.3594 | 0.7487 | 0.7487 | 0.5703 | 0.5703 | 0.6054 | 0.6054 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.2615 | 9.0 | 135 | 0.3429 | 0.7312 | 0.7312 | 0.6049 | 0.6049 | 0.6236 | 0.6236 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1928 | 10.0 | 150 | 0.2889 | 0.6712 | 0.6712 | 0.5487 | 0.5487 | 0.6828 | 0.6828 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1703 | 11.0 | 165 | 0.2675 | 0.6458 | 0.6458 | 0.5188 | 0.5188 | 0.7064 | 0.7064 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1209 | 12.0 | 180 | 0.2826 | 0.6639 | 0.6639 | 0.5475 | 0.5475 | 0.6897 | 0.6897 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1428 | 13.0 | 195 | 0.2978 | 0.6815 | 0.6815 | 0.5777 | 0.5777 | 0.6731 | 0.6731 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.1038 | 14.0 | 210 | 0.2924 | 0.6753 | 0.6753 | 0.5865 | 0.5865 | 0.6790 | 0.6790 | 0.6522 | 0.0 | 0.5 | 0.2760 | nan |
| 0.0951 | 15.0 | 225 | 0.2905 | 0.6731 | 0.6731 | 0.5750 | 0.5750 | 0.6811 | 0.6811 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.0809 | 16.0 | 240 | 0.2676 | 0.6460 | 0.6460 | 0.5552 | 0.5552 | 0.7062 | 0.7062 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.0811 | 17.0 | 255 | 0.2770 | 0.6572 | 0.6572 | 0.5543 | 0.5543 | 0.6959 | 0.6959 | 0.7391 | 0.0 | 0.5 | 0.6920 | nan |
| 0.0703 | 18.0 | 270 | 0.2634 | 0.6409 | 0.6409 | 0.5251 | 0.5251 | 0.7108 | 0.7108 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0595 | 19.0 | 285 | 0.2638 | 0.6413 | 0.6413 | 0.5196 | 0.5196 | 0.7104 | 0.7104 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0651 | 20.0 | 300 | 0.2520 | 0.6268 | 0.6268 | 0.4970 | 0.4970 | 0.7234 | 0.7234 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0637 | 21.0 | 315 | 0.2668 | 0.6451 | 0.6451 | 0.4965 | 0.4965 | 0.7071 | 0.7071 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0582 | 22.0 | 330 | 0.2455 | 0.6188 | 0.6188 | 0.4759 | 0.4759 | 0.7305 | 0.7305 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0616 | 23.0 | 345 | 0.2509 | 0.6255 | 0.6255 | 0.5084 | 0.5084 | 0.7246 | 0.7246 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0492 | 24.0 | 360 | 0.2510 | 0.6256 | 0.6256 | 0.4985 | 0.4985 | 0.7244 | 0.7244 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0504 | 25.0 | 375 | 0.2512 | 0.6259 | 0.6259 | 0.4849 | 0.4849 | 0.7242 | 0.7242 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0501 | 26.0 | 390 | 0.2585 | 0.6350 | 0.6350 | 0.5140 | 0.5140 | 0.7162 | 0.7162 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0411 | 27.0 | 405 | 0.2544 | 0.6299 | 0.6299 | 0.5148 | 0.5148 | 0.7207 | 0.7207 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.044 | 28.0 | 420 | 0.2466 | 0.6201 | 0.6201 | 0.4964 | 0.4964 | 0.7293 | 0.7293 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.042 | 29.0 | 435 | 0.2466 | 0.6201 | 0.6201 | 0.4836 | 0.4836 | 0.7293 | 0.7293 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.0446 | 30.0 | 450 | 0.2466 | 0.6201 | 0.6201 | 0.4936 | 0.4936 | 0.7293 | 0.7293 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-blame-concept | e4d9e2ddf0a4f0f3c5418c8eded335ab4292fbd3 | 2022-03-15T22:48:25.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-blame-concept | 338 | null | transformers | 2,761 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-concept
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9414
- Rmse: 0.7875
- Rmse Blame::a Un concetto astratto o un'emozione: 0.7875
- Mae: 0.6165
- Mae Blame::a Un concetto astratto o un'emozione: 0.6165
- R2: 0.2291
- R2 Blame::a Un concetto astratto o un'emozione: 0.2291
- Cos: 0.1304
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3509
- Rsa: nan
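A minimal usage sketch via the high-level pipeline API, assuming the checkpoint stores a single-output (regression-style) classification head, consistent with the RMSE/MAE/R² metrics above, and a `transformers` version that supports `function_to_apply="none"` in the text-classification pipeline:
```python
from transformers import pipeline

regressor = pipeline(
    "text-classification",
    model="responsibility-framing/predict-perception-xlmr-blame-concept",
)

# function_to_apply="none" returns the raw regression score instead of a sigmoid probability
print(regressor("...", function_to_apply="none"))  # replace "..." with an Italian sentence
```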
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un concetto astratto o un'emozione | Mae | Mae Blame::a Un concetto astratto o un'emozione | R2 | R2 Blame::a Un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------:|:------:|:-----------------------------------------------:|:------:|:----------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0549 | 1.0 | 15 | 1.2093 | 0.8925 | 0.8925 | 0.6659 | 0.6659 | 0.0097 | 0.0097 | -0.3043 | 0.0 | 0.5 | 0.4013 | nan |
| 1.0085 | 2.0 | 30 | 1.2199 | 0.8964 | 0.8964 | 0.6494 | 0.6494 | 0.0010 | 0.0010 | -0.1304 | 0.0 | 0.5 | 0.4515 | nan |
| 1.0131 | 3.0 | 45 | 1.1798 | 0.8815 | 0.8815 | 0.6412 | 0.6412 | 0.0339 | 0.0339 | -0.2174 | 0.0 | 0.5 | 0.2402 | nan |
| 0.9931 | 4.0 | 60 | 1.1726 | 0.8788 | 0.8788 | 0.6370 | 0.6370 | 0.0397 | 0.0397 | -0.1304 | 0.0 | 0.5 | 0.2911 | nan |
| 0.9668 | 5.0 | 75 | 1.1194 | 0.8587 | 0.8587 | 0.5925 | 0.5925 | 0.0833 | 0.0833 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.8759 | 6.0 | 90 | 1.0776 | 0.8425 | 0.8425 | 0.6265 | 0.6265 | 0.1175 | 0.1175 | 0.3043 | 0.0 | 0.5 | 0.4190 | nan |
| 0.8787 | 7.0 | 105 | 1.0513 | 0.8321 | 0.8321 | 0.6087 | 0.6087 | 0.1391 | 0.1391 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.7637 | 8.0 | 120 | 1.0537 | 0.8331 | 0.8331 | 0.6265 | 0.6265 | 0.1372 | 0.1372 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.6568 | 9.0 | 135 | 0.9104 | 0.7744 | 0.7744 | 0.5887 | 0.5887 | 0.2544 | 0.2544 | 0.3043 | 0.0 | 0.5 | 0.3680 | nan |
| 0.6354 | 10.0 | 150 | 0.9055 | 0.7723 | 0.7723 | 0.6222 | 0.6222 | 0.2585 | 0.2585 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.5107 | 11.0 | 165 | 1.0173 | 0.8186 | 0.8186 | 0.6168 | 0.6168 | 0.1669 | 0.1669 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.4598 | 12.0 | 180 | 0.9155 | 0.7765 | 0.7765 | 0.6284 | 0.6284 | 0.2503 | 0.2503 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.3815 | 13.0 | 195 | 0.9255 | 0.7808 | 0.7808 | 0.6140 | 0.6140 | 0.2421 | 0.2421 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.3303 | 14.0 | 210 | 0.8506 | 0.7485 | 0.7485 | 0.6076 | 0.6076 | 0.3035 | 0.3035 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2799 | 15.0 | 225 | 1.0272 | 0.8226 | 0.8226 | 0.6699 | 0.6699 | 0.1588 | 0.1588 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2998 | 16.0 | 240 | 0.9969 | 0.8103 | 0.8103 | 0.6461 | 0.6461 | 0.1836 | 0.1836 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.3131 | 17.0 | 255 | 0.9066 | 0.7727 | 0.7727 | 0.5849 | 0.5849 | 0.2576 | 0.2576 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.2234 | 18.0 | 270 | 0.8741 | 0.7588 | 0.7588 | 0.5953 | 0.5953 | 0.2842 | 0.2842 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.2481 | 19.0 | 285 | 1.0022 | 0.8125 | 0.8125 | 0.6549 | 0.6549 | 0.1793 | 0.1793 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2333 | 20.0 | 300 | 0.9238 | 0.7801 | 0.7801 | 0.6180 | 0.6180 | 0.2435 | 0.2435 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2407 | 21.0 | 315 | 0.9868 | 0.8062 | 0.8062 | 0.6457 | 0.6457 | 0.1919 | 0.1919 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2122 | 22.0 | 330 | 0.9514 | 0.7916 | 0.7916 | 0.6204 | 0.6204 | 0.2209 | 0.2209 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2162 | 23.0 | 345 | 0.9227 | 0.7796 | 0.7796 | 0.6053 | 0.6053 | 0.2444 | 0.2444 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.1739 | 24.0 | 360 | 0.9147 | 0.7762 | 0.7762 | 0.5979 | 0.5979 | 0.2510 | 0.2510 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.2084 | 25.0 | 375 | 0.9645 | 0.7970 | 0.7970 | 0.6296 | 0.6296 | 0.2102 | 0.2102 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.1702 | 26.0 | 390 | 0.9587 | 0.7946 | 0.7946 | 0.6279 | 0.6279 | 0.2149 | 0.2149 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2146 | 27.0 | 405 | 0.9519 | 0.7918 | 0.7918 | 0.6273 | 0.6273 | 0.2205 | 0.2205 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.1645 | 28.0 | 420 | 0.9398 | 0.7868 | 0.7868 | 0.6181 | 0.6181 | 0.2304 | 0.2304 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2052 | 29.0 | 435 | 0.9492 | 0.7907 | 0.7907 | 0.6228 | 0.6228 | 0.2227 | 0.2227 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.147 | 30.0 | 450 | 0.9414 | 0.7875 | 0.7875 | 0.6165 | 0.6165 | 0.2291 | 0.2291 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-cause-object | db802aff3b742db628b57e62c49cc1610005b981 | 2022-03-15T23:03:02.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-cause-object | 338 | null | transformers | 2,762 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-cause-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-cause-object
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3069
- Rmse: 0.8927
- Rmse Cause::a Causata da un oggetto (es. una pistola): 0.8927
- Mae: 0.5854
- Mae Cause::a Causata da un oggetto (es. una pistola): 0.5854
- R2: 0.5410
- R2 Cause::a Causata da un oggetto (es. una pistola): 0.5410
- Cos: 0.4783
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.6177
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un oggetto (es. una pistola) | Mae | Mae Cause::a Causata da un oggetto (es. una pistola) | R2 | R2 Cause::a Causata da un oggetto (es. una pistola) | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------:|:------:|:----------------------------------------------------:|:-------:|:---------------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0329 | 1.0 | 15 | 0.8168 | 1.4564 | 1.4564 | 1.2947 | 1.2947 | -0.2216 | -0.2216 | -0.5652 | 0.0 | 0.5 | 0.5993 | nan |
| 1.0096 | 2.0 | 30 | 0.7432 | 1.3893 | 1.3893 | 1.1883 | 1.1883 | -0.1116 | -0.1116 | -0.3913 | 0.0 | 0.5 | 0.6499 | nan |
| 0.9323 | 3.0 | 45 | 0.6879 | 1.3366 | 1.3366 | 1.1054 | 1.1054 | -0.0289 | -0.0289 | -0.1304 | 0.0 | 0.5 | 0.5471 | nan |
| 0.8636 | 4.0 | 60 | 0.6378 | 1.2870 | 1.2870 | 1.0477 | 1.0477 | 0.0461 | 0.0461 | 0.2174 | 0.0 | 0.5 | 0.3007 | nan |
| 0.8041 | 5.0 | 75 | 0.5494 | 1.1945 | 1.1945 | 0.9499 | 0.9499 | 0.1783 | 0.1783 | 0.6522 | 0.0 | 0.5 | 0.6695 | nan |
| 0.7413 | 6.0 | 90 | 0.5526 | 1.1980 | 1.1980 | 0.9503 | 0.9503 | 0.1735 | 0.1735 | 0.5652 | 0.0 | 0.5 | 0.3898 | nan |
| 0.6397 | 7.0 | 105 | 0.4726 | 1.1078 | 1.1078 | 0.7826 | 0.7826 | 0.2932 | 0.2932 | 0.5652 | 0.0 | 0.5 | 0.3257 | nan |
| 0.5556 | 8.0 | 120 | 0.7728 | 1.4167 | 1.4167 | 1.1528 | 1.1528 | -0.1558 | -0.1558 | 0.1304 | 0.0 | 0.5 | 0.4027 | nan |
| 0.4972 | 9.0 | 135 | 0.4375 | 1.0659 | 1.0659 | 0.7577 | 0.7577 | 0.3457 | 0.3457 | 0.5652 | 0.0 | 0.5 | 0.5683 | nan |
| 0.3691 | 10.0 | 150 | 0.4990 | 1.1383 | 1.1383 | 0.8272 | 0.8272 | 0.2537 | 0.2537 | 0.4783 | 0.0 | 0.5 | 0.4781 | nan |
| 0.3381 | 11.0 | 165 | 0.4401 | 1.0690 | 1.0690 | 0.7319 | 0.7319 | 0.3418 | 0.3418 | 0.5652 | 0.0 | 0.5 | 0.5683 | nan |
| 0.2966 | 12.0 | 180 | 0.4794 | 1.1158 | 1.1158 | 0.7835 | 0.7835 | 0.2830 | 0.2830 | 0.5652 | 0.0 | 0.5 | 0.5683 | nan |
| 0.2324 | 13.0 | 195 | 0.4013 | 1.0208 | 1.0208 | 0.6873 | 0.6873 | 0.3998 | 0.3998 | 0.4783 | 0.0 | 0.5 | 0.5796 | nan |
| 0.1848 | 14.0 | 210 | 0.4305 | 1.0574 | 1.0574 | 0.7372 | 0.7372 | 0.3561 | 0.3561 | 0.4783 | 0.0 | 0.5 | 0.5796 | nan |
| 0.1621 | 15.0 | 225 | 0.3652 | 0.9738 | 0.9738 | 0.6164 | 0.6164 | 0.4538 | 0.4538 | 0.4783 | 0.0 | 0.5 | 0.6177 | nan |
| 0.1762 | 16.0 | 240 | 0.3335 | 0.9307 | 0.9307 | 0.6458 | 0.6458 | 0.5012 | 0.5012 | 0.4783 | 0.0 | 0.5 | 0.5796 | nan |
| 0.1404 | 17.0 | 255 | 0.3420 | 0.9424 | 0.9424 | 0.6599 | 0.6599 | 0.4886 | 0.4886 | 0.3913 | 0.0 | 0.5 | 0.5831 | nan |
| 0.1379 | 18.0 | 270 | 0.2853 | 0.8608 | 0.8608 | 0.6063 | 0.6063 | 0.5733 | 0.5733 | 0.3913 | 0.0 | 0.5 | 0.5831 | nan |
| 0.1322 | 19.0 | 285 | 0.3261 | 0.9203 | 0.9203 | 0.6548 | 0.6548 | 0.5123 | 0.5123 | 0.4783 | 0.0 | 0.5 | 0.5796 | nan |
| 0.1067 | 20.0 | 300 | 0.3328 | 0.9296 | 0.9296 | 0.5535 | 0.5535 | 0.5023 | 0.5023 | 0.6522 | 0.0 | 0.5 | 0.6695 | nan |
| 0.1038 | 21.0 | 315 | 0.3066 | 0.8924 | 0.8924 | 0.6266 | 0.6266 | 0.5414 | 0.5414 | 0.4783 | 0.0 | 0.5 | 0.5796 | nan |
| 0.094 | 22.0 | 330 | 0.2924 | 0.8714 | 0.8714 | 0.5792 | 0.5792 | 0.5626 | 0.5626 | 0.4783 | 0.0 | 0.5 | 0.6177 | nan |
| 0.1078 | 23.0 | 345 | 0.3161 | 0.9060 | 0.9060 | 0.6022 | 0.6022 | 0.5272 | 0.5272 | 0.3913 | 0.0 | 0.5 | 0.5831 | nan |
| 0.0976 | 24.0 | 360 | 0.3118 | 0.8998 | 0.8998 | 0.6011 | 0.6011 | 0.5337 | 0.5337 | 0.3913 | 0.0 | 0.5 | 0.5831 | nan |
| 0.0911 | 25.0 | 375 | 0.3123 | 0.9005 | 0.9005 | 0.5811 | 0.5811 | 0.5330 | 0.5330 | 0.4783 | 0.0 | 0.5 | 0.6177 | nan |
| 0.1039 | 26.0 | 390 | 0.3122 | 0.9005 | 0.9005 | 0.5956 | 0.5956 | 0.5330 | 0.5330 | 0.4783 | 0.0 | 0.5 | 0.6177 | nan |
| 0.0775 | 27.0 | 405 | 0.3191 | 0.9103 | 0.9103 | 0.6124 | 0.6124 | 0.5228 | 0.5228 | 0.3913 | 0.0 | 0.5 | 0.5831 | nan |
| 0.0789 | 28.0 | 420 | 0.3135 | 0.9023 | 0.9023 | 0.5825 | 0.5825 | 0.5311 | 0.5311 | 0.4783 | 0.0 | 0.5 | 0.6177 | nan |
| 0.0778 | 29.0 | 435 | 0.3075 | 0.8936 | 0.8936 | 0.5837 | 0.5837 | 0.5401 | 0.5401 | 0.4783 | 0.0 | 0.5 | 0.6177 | nan |
| 0.082 | 30.0 | 450 | 0.3069 | 0.8927 | 0.8927 | 0.5854 | 0.5854 | 0.5410 | 0.5410 | 0.4783 | 0.0 | 0.5 | 0.6177 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-focus-assassin | 076712f7401ffc0f87cf5f2ce9cf9e7620e777a9 | 2022-03-15T23:13:17.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-focus-assassin | 338 | null | transformers | 2,763 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-assassin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-assassin
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3264
- Rmse: 0.9437
- Rmse Focus::a Sull'assassino: 0.9437
- Mae: 0.7093
- Mae Focus::a Sull'assassino: 0.7093
- R2: 0.6145
- R2 Focus::a Sull'assassino: 0.6145
- Cos: 0.7391
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.6131
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sull'assassino | Mae | Mae Focus::a Sull'assassino | R2 | R2 Focus::a Sull'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------:|:------:|:---------------------------:|:-------:|:--------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0403 | 1.0 | 15 | 1.1576 | 1.7771 | 1.7771 | 1.6028 | 1.6028 | -0.3670 | -0.3670 | -0.2174 | 0.0 | 0.5 | 0.2379 | nan |
| 0.9818 | 2.0 | 30 | 0.8916 | 1.5596 | 1.5596 | 1.4136 | 1.4136 | -0.0529 | -0.0529 | 0.3913 | 0.0 | 0.5 | 0.3793 | nan |
| 0.9276 | 3.0 | 45 | 0.9277 | 1.5909 | 1.5909 | 1.4560 | 1.4560 | -0.0955 | -0.0955 | 0.3913 | 0.0 | 0.5 | 0.3742 | nan |
| 0.8395 | 4.0 | 60 | 0.7958 | 1.4734 | 1.4734 | 1.3032 | 1.3032 | 0.0603 | 0.0603 | 0.5652 | 0.0 | 0.5 | 0.4598 | nan |
| 0.7587 | 5.0 | 75 | 0.4647 | 1.1259 | 1.1259 | 0.9316 | 0.9316 | 0.4513 | 0.4513 | 0.6522 | 0.0 | 0.5 | 0.5087 | nan |
| 0.696 | 6.0 | 90 | 0.5368 | 1.2101 | 1.2101 | 1.0847 | 1.0847 | 0.3661 | 0.3661 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan |
| 0.548 | 7.0 | 105 | 0.3110 | 0.9211 | 0.9211 | 0.7896 | 0.7896 | 0.6328 | 0.6328 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.4371 | 8.0 | 120 | 0.3392 | 0.9619 | 0.9619 | 0.8132 | 0.8132 | 0.5995 | 0.5995 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.355 | 9.0 | 135 | 0.3938 | 1.0366 | 1.0366 | 0.8153 | 0.8153 | 0.5349 | 0.5349 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2919 | 10.0 | 150 | 0.3484 | 0.9749 | 0.9749 | 0.7487 | 0.7487 | 0.5886 | 0.5886 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2595 | 11.0 | 165 | 0.2812 | 0.8759 | 0.8759 | 0.6265 | 0.6265 | 0.6679 | 0.6679 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2368 | 12.0 | 180 | 0.2534 | 0.8314 | 0.8314 | 0.6402 | 0.6402 | 0.7008 | 0.7008 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.227 | 13.0 | 195 | 0.2878 | 0.8861 | 0.8861 | 0.6769 | 0.6769 | 0.6601 | 0.6601 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1979 | 14.0 | 210 | 0.2405 | 0.8100 | 0.8100 | 0.6113 | 0.6113 | 0.7160 | 0.7160 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1622 | 15.0 | 225 | 0.2575 | 0.8382 | 0.8382 | 0.6017 | 0.6017 | 0.6959 | 0.6959 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1575 | 16.0 | 240 | 0.2945 | 0.8963 | 0.8963 | 0.6741 | 0.6741 | 0.6523 | 0.6523 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1479 | 17.0 | 255 | 0.3563 | 0.9859 | 0.9859 | 0.7367 | 0.7367 | 0.5792 | 0.5792 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1269 | 18.0 | 270 | 0.2806 | 0.8750 | 0.8750 | 0.6665 | 0.6665 | 0.6686 | 0.6686 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1257 | 19.0 | 285 | 0.3267 | 0.9441 | 0.9441 | 0.6739 | 0.6739 | 0.6142 | 0.6142 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.134 | 20.0 | 300 | 0.3780 | 1.0155 | 1.0155 | 0.7331 | 0.7331 | 0.5536 | 0.5536 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan |
| 0.1171 | 21.0 | 315 | 0.3890 | 1.0301 | 1.0301 | 0.7444 | 0.7444 | 0.5406 | 0.5406 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0934 | 22.0 | 330 | 0.3131 | 0.9242 | 0.9242 | 0.6923 | 0.6923 | 0.6303 | 0.6303 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1112 | 23.0 | 345 | 0.2912 | 0.8913 | 0.8913 | 0.6610 | 0.6610 | 0.6561 | 0.6561 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1038 | 24.0 | 360 | 0.3109 | 0.9209 | 0.9209 | 0.7019 | 0.7019 | 0.6329 | 0.6329 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.085 | 25.0 | 375 | 0.3469 | 0.9728 | 0.9728 | 0.7383 | 0.7383 | 0.5904 | 0.5904 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0843 | 26.0 | 390 | 0.3017 | 0.9073 | 0.9073 | 0.6848 | 0.6848 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.093 | 27.0 | 405 | 0.3269 | 0.9443 | 0.9443 | 0.7042 | 0.7042 | 0.6140 | 0.6140 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0846 | 28.0 | 420 | 0.3161 | 0.9286 | 0.9286 | 0.6937 | 0.6937 | 0.6267 | 0.6267 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0764 | 29.0 | 435 | 0.3244 | 0.9408 | 0.9408 | 0.7079 | 0.7079 | 0.6169 | 0.6169 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0697 | 30.0 | 450 | 0.3264 | 0.9437 | 0.9437 | 0.7093 | 0.7093 | 0.6145 | 0.6145 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-focus-victim | 5cee482f552e9e64cd276c7c359402156be04d05 | 2022-03-15T23:18:48.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-focus-victim | 338 | null | transformers | 2,764 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-victim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-victim
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2546
- Rmse: 0.6301
- Rmse Focus::a Sulla vittima: 0.6301
- Mae: 0.5441
- Mae Focus::a Sulla vittima: 0.5441
- R2: 0.7205
- R2 Focus::a Sulla vittima: 0.7205
- Cos: 0.8261
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.7802
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sulla vittima | Mae | Mae Focus::a Sulla vittima | R2 | R2 Focus::a Sulla vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0607 | 1.0 | 15 | 0.9261 | 1.2017 | 1.2017 | 0.9557 | 0.9557 | -0.0166 | -0.0166 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 1.0107 | 2.0 | 30 | 0.9481 | 1.2159 | 1.2159 | 0.9861 | 0.9861 | -0.0408 | -0.0408 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 0.9921 | 3.0 | 45 | 0.9068 | 1.1892 | 1.1892 | 0.9548 | 0.9548 | 0.0045 | 0.0045 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 0.7769 | 4.0 | 60 | 0.5014 | 0.8842 | 0.8842 | 0.7121 | 0.7121 | 0.4496 | 0.4496 | 0.7391 | 0.0 | 0.5 | 0.6232 | nan |
| 0.5763 | 5.0 | 75 | 0.4019 | 0.7917 | 0.7917 | 0.6737 | 0.6737 | 0.5588 | 0.5588 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.4378 | 6.0 | 90 | 0.3594 | 0.7486 | 0.7486 | 0.5957 | 0.5957 | 0.6055 | 0.6055 | 0.7391 | 0.0 | 0.5 | 0.4442 | nan |
| 0.3595 | 7.0 | 105 | 0.3452 | 0.7337 | 0.7337 | 0.6333 | 0.6333 | 0.6210 | 0.6210 | 0.5652 | 0.0 | 0.5 | 0.2649 | nan |
| 0.3192 | 8.0 | 120 | 0.3275 | 0.7147 | 0.7147 | 0.6205 | 0.6205 | 0.6405 | 0.6405 | 0.7391 | 0.0 | 0.5 | 0.6561 | nan |
| 0.2482 | 9.0 | 135 | 0.2978 | 0.6815 | 0.6815 | 0.5754 | 0.5754 | 0.6731 | 0.6731 | 0.7391 | 0.0 | 0.5 | 0.6715 | nan |
| 0.2416 | 10.0 | 150 | 0.3018 | 0.6860 | 0.6860 | 0.5954 | 0.5954 | 0.6687 | 0.6687 | 0.5652 | 0.0 | 0.5 | 0.2553 | nan |
| 0.2292 | 11.0 | 165 | 0.2764 | 0.6565 | 0.6565 | 0.5522 | 0.5522 | 0.6966 | 0.6966 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.1752 | 12.0 | 180 | 0.3070 | 0.6920 | 0.6920 | 0.5680 | 0.5680 | 0.6629 | 0.6629 | 0.7391 | 0.0 | 0.5 | 0.6715 | nan |
| 0.1956 | 13.0 | 195 | 0.2923 | 0.6752 | 0.6752 | 0.5499 | 0.5499 | 0.6791 | 0.6791 | 0.8261 | 0.0 | 0.5 | 0.7843 | nan |
| 0.1424 | 14.0 | 210 | 0.3163 | 0.7023 | 0.7023 | 0.6060 | 0.6060 | 0.6528 | 0.6528 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.152 | 15.0 | 225 | 0.2436 | 0.6164 | 0.6164 | 0.5127 | 0.5127 | 0.7326 | 0.7326 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.1277 | 16.0 | 240 | 0.2471 | 0.6208 | 0.6208 | 0.5367 | 0.5367 | 0.7287 | 0.7287 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1269 | 17.0 | 255 | 0.2573 | 0.6334 | 0.6334 | 0.5329 | 0.5329 | 0.7175 | 0.7175 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1058 | 18.0 | 270 | 0.2538 | 0.6291 | 0.6291 | 0.5530 | 0.5530 | 0.7214 | 0.7214 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.107 | 19.0 | 285 | 0.2568 | 0.6328 | 0.6328 | 0.5464 | 0.5464 | 0.7181 | 0.7181 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1185 | 20.0 | 300 | 0.2452 | 0.6183 | 0.6183 | 0.5317 | 0.5317 | 0.7309 | 0.7309 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.1029 | 21.0 | 315 | 0.2419 | 0.6142 | 0.6142 | 0.5415 | 0.5415 | 0.7344 | 0.7344 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.0908 | 22.0 | 330 | 0.2462 | 0.6196 | 0.6196 | 0.5261 | 0.5261 | 0.7297 | 0.7297 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0901 | 23.0 | 345 | 0.2528 | 0.6279 | 0.6279 | 0.5330 | 0.5330 | 0.7225 | 0.7225 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0979 | 24.0 | 360 | 0.2800 | 0.6607 | 0.6607 | 0.5682 | 0.5682 | 0.6927 | 0.6927 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.0992 | 25.0 | 375 | 0.2502 | 0.6246 | 0.6246 | 0.5517 | 0.5517 | 0.7254 | 0.7254 | 0.6522 | 0.0 | 0.5 | 0.2372 | nan |
| 0.0846 | 26.0 | 390 | 0.2570 | 0.6331 | 0.6331 | 0.5524 | 0.5524 | 0.7178 | 0.7178 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0717 | 27.0 | 405 | 0.2562 | 0.6321 | 0.6321 | 0.5456 | 0.5456 | 0.7187 | 0.7187 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0739 | 28.0 | 420 | 0.2570 | 0.6330 | 0.6330 | 0.5471 | 0.5471 | 0.7179 | 0.7179 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0828 | 29.0 | 435 | 0.2553 | 0.6309 | 0.6309 | 0.5446 | 0.5446 | 0.7198 | 0.7198 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.086 | 30.0 | 450 | 0.2546 | 0.6301 | 0.6301 | 0.5441 | 0.5441 | 0.7205 | 0.7205 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-focus-object | b72d193b4a253f8156ae1c8d2657b948575d7839 | 2022-03-15T23:23:19.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-focus-object | 338 | null | transformers | 2,765 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-object
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1927
- Rmse: 0.5495
- Rmse Focus::a Su un oggetto: 0.5495
- Mae: 0.4174
- Mae Focus::a Su un oggetto: 0.4174
- R2: 0.5721
- R2 Focus::a Su un oggetto: 0.5721
- Cos: 0.5652
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5518
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un oggetto | Mae | Mae Focus::a Su un oggetto | R2 | R2 Focus::a Su un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0316 | 1.0 | 15 | 0.6428 | 1.0035 | 1.0035 | 0.8806 | 0.8806 | -0.4272 | -0.4272 | -0.4783 | 0.0 | 0.5 | 0.5302 | nan |
| 1.0005 | 2.0 | 30 | 0.4564 | 0.8456 | 0.8456 | 0.7078 | 0.7078 | -0.0134 | -0.0134 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.9519 | 3.0 | 45 | 0.4151 | 0.8063 | 0.8063 | 0.6797 | 0.6797 | 0.0784 | 0.0784 | 0.1304 | 0.0 | 0.5 | 0.4888 | nan |
| 0.92 | 4.0 | 60 | 0.3982 | 0.7898 | 0.7898 | 0.6516 | 0.6516 | 0.1159 | 0.1159 | 0.2174 | 0.0 | 0.5 | 0.5036 | nan |
| 0.8454 | 5.0 | 75 | 0.2739 | 0.6550 | 0.6550 | 0.5292 | 0.5292 | 0.3919 | 0.3919 | 0.6522 | 0.0 | 0.5 | 0.4160 | nan |
| 0.7247 | 6.0 | 90 | 0.2413 | 0.6148 | 0.6148 | 0.5347 | 0.5347 | 0.4642 | 0.4642 | 0.4783 | 0.0 | 0.5 | 0.3453 | nan |
| 0.6055 | 7.0 | 105 | 0.3109 | 0.6978 | 0.6978 | 0.6115 | 0.6115 | 0.3098 | 0.3098 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.5411 | 8.0 | 120 | 0.3932 | 0.7848 | 0.7848 | 0.6712 | 0.6712 | 0.1271 | 0.1271 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.4784 | 9.0 | 135 | 0.1316 | 0.4540 | 0.4540 | 0.3750 | 0.3750 | 0.7079 | 0.7079 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.4039 | 10.0 | 150 | 0.2219 | 0.5896 | 0.5896 | 0.4954 | 0.4954 | 0.5074 | 0.5074 | 0.5652 | 0.0 | 0.5 | 0.4838 | nan |
| 0.3415 | 11.0 | 165 | 0.1935 | 0.5505 | 0.5505 | 0.4443 | 0.4443 | 0.5704 | 0.5704 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3369 | 12.0 | 180 | 0.2118 | 0.5761 | 0.5761 | 0.4554 | 0.4554 | 0.5296 | 0.5296 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3083 | 13.0 | 195 | 0.1928 | 0.5496 | 0.5496 | 0.4368 | 0.4368 | 0.5718 | 0.5718 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2678 | 14.0 | 210 | 0.2205 | 0.5877 | 0.5877 | 0.4472 | 0.4472 | 0.5105 | 0.5105 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2199 | 15.0 | 225 | 0.2118 | 0.5760 | 0.5760 | 0.4689 | 0.4689 | 0.5297 | 0.5297 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2238 | 16.0 | 240 | 0.2461 | 0.6209 | 0.6209 | 0.5047 | 0.5047 | 0.4537 | 0.4537 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2233 | 17.0 | 255 | 0.2307 | 0.6011 | 0.6011 | 0.4618 | 0.4618 | 0.4879 | 0.4879 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1903 | 18.0 | 270 | 0.2207 | 0.5880 | 0.5880 | 0.4432 | 0.4432 | 0.5100 | 0.5100 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1714 | 19.0 | 285 | 0.2146 | 0.5798 | 0.5798 | 0.4368 | 0.4368 | 0.5236 | 0.5236 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1759 | 20.0 | 300 | 0.1745 | 0.5228 | 0.5228 | 0.4152 | 0.4152 | 0.6126 | 0.6126 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1505 | 21.0 | 315 | 0.1944 | 0.5519 | 0.5519 | 0.4170 | 0.4170 | 0.5684 | 0.5684 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1467 | 22.0 | 330 | 0.1802 | 0.5313 | 0.5313 | 0.3910 | 0.3910 | 0.5999 | 0.5999 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1441 | 23.0 | 345 | 0.2360 | 0.6081 | 0.6081 | 0.4755 | 0.4755 | 0.4760 | 0.4760 | 0.4783 | 0.0 | 0.5 | 0.4938 | nan |
| 0.1553 | 24.0 | 360 | 0.2129 | 0.5774 | 0.5774 | 0.4539 | 0.4539 | 0.5274 | 0.5274 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1163 | 25.0 | 375 | 0.1780 | 0.5281 | 0.5281 | 0.3952 | 0.3952 | 0.6048 | 0.6048 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1266 | 26.0 | 390 | 0.2163 | 0.5821 | 0.5821 | 0.4569 | 0.4569 | 0.5198 | 0.5198 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1416 | 27.0 | 405 | 0.1829 | 0.5352 | 0.5352 | 0.4082 | 0.4082 | 0.5939 | 0.5939 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1576 | 28.0 | 420 | 0.1930 | 0.5498 | 0.5498 | 0.4126 | 0.4126 | 0.5716 | 0.5716 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.118 | 29.0 | 435 | 0.2070 | 0.5694 | 0.5694 | 0.4378 | 0.4378 | 0.5405 | 0.5405 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1179 | 30.0 | 450 | 0.1927 | 0.5495 | 0.5495 | 0.4174 | 0.4174 | 0.5721 | 0.5721 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-focus-concept | 827ebfcdcd6554bd8f121fe801625ad175d726e8 | 2022-03-15T23:28:40.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-focus-concept | 338 | null | transformers | 2,766 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-concept
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8296
- Rmse: 1.0302
- Rmse Focus::a Su un concetto astratto o un'emozione: 1.0302
- Mae: 0.7515
- Mae Focus::a Su un concetto astratto o un'emozione: 0.7515
- R2: 0.1804
- R2 Focus::a Su un concetto astratto o un'emozione: 0.1804
- Cos: 0.4783
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3415
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un concetto astratto o un'emozione | Mae | Mae Focus::a Su un concetto astratto o un'emozione | R2 | R2 Focus::a Su un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------------------------------:|:------:|:--------------------------------------------------:|:-------:|:-------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0355 | 1.0 | 15 | 0.9822 | 1.1209 | 1.1209 | 0.9649 | 0.9649 | 0.0296 | 0.0296 | 0.2174 | 0.0 | 0.5 | 0.3706 | nan |
| 1.0083 | 2.0 | 30 | 1.1378 | 1.2065 | 1.2065 | 0.9954 | 0.9954 | -0.1241 | -0.1241 | 0.2174 | 0.0 | 0.5 | 0.3309 | nan |
| 0.9823 | 3.0 | 45 | 0.9669 | 1.1121 | 1.1121 | 0.9315 | 0.9315 | 0.0448 | 0.0448 | 0.3043 | 0.0 | 0.5 | 0.3810 | nan |
| 0.9468 | 4.0 | 60 | 0.8856 | 1.0644 | 1.0644 | 0.8584 | 0.8584 | 0.1251 | 0.1251 | 0.3913 | 0.0 | 0.5 | 0.3803 | nan |
| 0.9294 | 5.0 | 75 | 0.8136 | 1.0202 | 1.0202 | 0.8396 | 0.8396 | 0.1963 | 0.1963 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.881 | 6.0 | 90 | 0.7634 | 0.9882 | 0.9882 | 0.8192 | 0.8192 | 0.2458 | 0.2458 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.7589 | 7.0 | 105 | 0.8139 | 1.0204 | 1.0204 | 0.8136 | 0.8136 | 0.1960 | 0.1960 | 0.5652 | 0.0 | 0.5 | 0.4120 | nan |
| 0.7217 | 8.0 | 120 | 0.9105 | 1.0792 | 1.0792 | 0.9394 | 0.9394 | 0.1005 | 0.1005 | 0.3913 | 0.0 | 0.5 | 0.4108 | nan |
| 0.8059 | 9.0 | 135 | 1.0322 | 1.1491 | 1.1491 | 0.9115 | 0.9115 | -0.0197 | -0.0197 | 0.5652 | 0.0 | 0.5 | 0.3738 | nan |
| 0.6483 | 10.0 | 150 | 0.7989 | 1.0109 | 1.0109 | 0.7899 | 0.7899 | 0.2108 | 0.2108 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.5725 | 11.0 | 165 | 0.7175 | 0.9581 | 0.9581 | 0.7011 | 0.7011 | 0.2912 | 0.2912 | 0.5652 | 0.0 | 0.5 | 0.3738 | nan |
| 0.5091 | 12.0 | 180 | 0.8818 | 1.0621 | 1.0621 | 0.8775 | 0.8775 | 0.1289 | 0.1289 | 0.5652 | 0.0 | 0.5 | 0.4063 | nan |
| 0.4526 | 13.0 | 195 | 0.8451 | 1.0398 | 1.0398 | 0.7990 | 0.7990 | 0.1651 | 0.1651 | 0.5652 | 0.0 | 0.5 | 0.4063 | nan |
| 0.361 | 14.0 | 210 | 0.8632 | 1.0508 | 1.0508 | 0.8124 | 0.8124 | 0.1472 | 0.1472 | 0.4783 | 0.0 | 0.5 | 0.3699 | nan |
| 0.3582 | 15.0 | 225 | 0.8461 | 1.0404 | 1.0404 | 0.7923 | 0.7923 | 0.1641 | 0.1641 | 0.3913 | 0.0 | 0.5 | 0.3672 | nan |
| 0.2945 | 16.0 | 240 | 0.9142 | 1.0814 | 1.0814 | 0.8125 | 0.8125 | 0.0968 | 0.0968 | 0.3913 | 0.0 | 0.5 | 0.3672 | nan |
| 0.2891 | 17.0 | 255 | 0.8377 | 1.0352 | 1.0352 | 0.7718 | 0.7718 | 0.1724 | 0.1724 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.2569 | 18.0 | 270 | 0.8106 | 1.0183 | 1.0183 | 0.7481 | 0.7481 | 0.1992 | 0.1992 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.2583 | 19.0 | 285 | 0.8239 | 1.0266 | 1.0266 | 0.7597 | 0.7597 | 0.1861 | 0.1861 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.2217 | 20.0 | 300 | 0.8485 | 1.0419 | 1.0419 | 0.7663 | 0.7663 | 0.1617 | 0.1617 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1927 | 21.0 | 315 | 0.8304 | 1.0307 | 1.0307 | 0.7536 | 0.7536 | 0.1797 | 0.1797 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.176 | 22.0 | 330 | 0.8321 | 1.0317 | 1.0317 | 0.7539 | 0.7539 | 0.1780 | 0.1780 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1639 | 23.0 | 345 | 0.7914 | 1.0062 | 1.0062 | 0.7460 | 0.7460 | 0.2182 | 0.2182 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.177 | 24.0 | 360 | 0.8619 | 1.0500 | 1.0500 | 0.7725 | 0.7725 | 0.1486 | 0.1486 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1473 | 25.0 | 375 | 0.8101 | 1.0180 | 1.0180 | 0.7587 | 0.7587 | 0.1997 | 0.1997 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.181 | 26.0 | 390 | 0.8038 | 1.0141 | 1.0141 | 0.7433 | 0.7433 | 0.2059 | 0.2059 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1679 | 27.0 | 405 | 0.7982 | 1.0105 | 1.0105 | 0.7248 | 0.7248 | 0.2115 | 0.2115 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1529 | 28.0 | 420 | 0.8282 | 1.0293 | 1.0293 | 0.7454 | 0.7454 | 0.1818 | 0.1818 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1822 | 29.0 | 435 | 0.8310 | 1.0311 | 1.0311 | 0.7512 | 0.7512 | 0.1790 | 0.1790 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1442 | 30.0 | 450 | 0.8296 | 1.0302 | 1.0302 | 0.7515 | 0.7515 | 0.1804 | 0.1804 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
voidful/phoneme_byt5_v2 | 34cef46e3ebad280b220d73c2155f445a0b16b78 | 2022-06-04T12:09:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | voidful | null | voidful/phoneme_byt5_v2 | 338 | null | transformers | 2,767 | Entry not found |
HansAnonymous/DialoGPT-medium-rick | afd8cbee4a08b246cbedfe94d2ffd0fd65db7428 | 2021-08-28T23:56:07.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | HansAnonymous | null | HansAnonymous/DialoGPT-medium-rick | 337 | 1 | transformers | 2,768 | ---
tags:
- conversational
---
# Rick from Rick & Morty DialoGPT Model |
Pollawat/mt5-small-thai-qa-qg | dee95725d581dbe7c3e92d3103f71372ab3d0af6 | 2021-04-19T14:52:22.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"thai",
"th",
"dataset:NSC2018",
"dataset:iapp-wiki-qa-dataset",
"dataset:XQuAD",
"transformers",
"question-generation",
"question-answering",
"license:mit",
"autotrain_compatible"
] | question-answering | false | Pollawat | null | Pollawat/mt5-small-thai-qa-qg | 337 | 3 | transformers | 2,769 | ---
tags:
- question-generation
- question-answering
language:
- thai
- th
datasets:
- NSC2018
- iapp-wiki-qa-dataset
- XQuAD
license: mit
---
[Google's mT5](https://github.com/google-research/multilingual-t5)
This is a model for generating questions and answers from Thai texts. It was fine-tuned on the NSC2018 corpus.
```python
from transformers import MT5Tokenizer, MT5ForConditionalGeneration
tokenizer = MT5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qa-qg")
model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qa-qg")
text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน"
input_ids = tokenizer.encode(text, return_tensors='pt')
beam_output = model.generate(
input_ids,
max_length=50,
num_beams=5,
early_stopping=True
)
print(tokenizer.decode(beam_output[0]))
>> <pad> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด <ANS> ฝั่งพระนครและฝั่งธนบุรี</s>
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
>> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด ฝั่งพระนครและฝั่งธนบุรี
``` |
yellowback/gpt-neo-japanese-1.3B | 69add767a2591d8d1d5445077e7656f453da19de | 2021-12-09T08:59:05.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"ja",
"dataset:oscar",
"dataset:cc100",
"dataset:wikipedia",
"transformers",
"text generation",
"causal-lm",
"japanese",
"license:apache-2.0"
] | text-generation | false | yellowback | null | yellowback/gpt-neo-japanese-1.3B | 337 | 1 | transformers | 2,770 | ---
language:
- ja
tags:
- text generation
- pytorch
- causal-lm
- japanese
license: apache-2.0
datasets:
- oscar
- cc100
- wikipedia
---
# GPT-Neo 1.3B pre-trained model for Japanese
## Model Description
GPT2/GPT3-like model trained on a Japanese corpus.
## Training data
- cc100 ja
- oscar ja
- wikipedia ja
## How to use
```python
from transformers import pipeline
>>> generator = pipeline('text-generation', model='yellowback/gpt-neo-japanese-1.3B')
>>> generator("こんばんは、徳川家康です。", do_sample=True, max_length=50, num_return_sequences=3)
[{'generated_text': 'こんばんは、徳川家康です。 世の中を見渡してみても、残念なことだけれども、まぎれもなく「世のなか...\n5月になりました、家康です。 ゴールデンウィークも終ってしまい、世間では'},
{'generated_text': 'こんばんは、徳川家康です。さあ今日は昨晩から降り続いた雨は上がりましたが、まだまだ雨脚は強いですが、晴れるところは晴れて欲しいですね。昨日の夜は仕事だったので、今日の夕'},
{'generated_text': 'こんばんは、徳川家康です。 今回は、『世界史再考──日本史再考』という本を書いたあと、『世界史再考──日本史再考』の6~8章'}]
```
|
responsibility-framing/predict-perception-xlmr-blame-victim | 56f59614188fc99750b29eee3788d944563810dc | 2022-03-15T22:38:23.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-blame-victim | 337 | null | transformers | 2,771 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-victim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-victim
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1098
- Rmse: 0.6801
- Rmse Blame::a La vittima: 0.6801
- Mae: 0.5617
- Mae Blame::a La vittima: 0.5617
- R2: -1.5910
- R2 Blame::a La vittima: -1.5910
- Cos: -0.1304
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3333
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a La vittima | Mae | Mae Blame::a La vittima | R2 | R2 Blame::a La vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0422 | 1.0 | 15 | 0.4952 | 0.4542 | 0.4542 | 0.4095 | 0.4095 | -0.1560 | -0.1560 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.0434 | 2.0 | 30 | 0.4851 | 0.4496 | 0.4496 | 0.4054 | 0.4054 | -0.1324 | -0.1324 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.038 | 3.0 | 45 | 0.4513 | 0.4337 | 0.4337 | 0.3885 | 0.3885 | -0.0536 | -0.0536 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.0151 | 4.0 | 60 | 0.4395 | 0.4280 | 0.4280 | 0.3840 | 0.3840 | -0.0262 | -0.0262 | -0.1304 | 0.0 | 0.5 | 0.2715 | nan |
| 0.9727 | 5.0 | 75 | 0.4490 | 0.4325 | 0.4325 | 0.3811 | 0.3811 | -0.0482 | -0.0482 | 0.2174 | 0.0 | 0.5 | 0.3338 | nan |
| 0.9733 | 6.0 | 90 | 0.4540 | 0.4349 | 0.4349 | 0.3860 | 0.3860 | -0.0598 | -0.0598 | -0.2174 | 0.0 | 0.5 | 0.3248 | nan |
| 0.9396 | 7.0 | 105 | 0.4501 | 0.4331 | 0.4331 | 0.3849 | 0.3849 | -0.0508 | -0.0508 | 0.0435 | 0.0 | 0.5 | 0.2609 | nan |
| 0.8759 | 8.0 | 120 | 0.4597 | 0.4377 | 0.4377 | 0.3849 | 0.3849 | -0.0731 | -0.0731 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan |
| 0.8768 | 9.0 | 135 | 0.4575 | 0.4366 | 0.4366 | 0.3784 | 0.3784 | -0.0680 | -0.0680 | 0.4783 | 0.0 | 0.5 | 0.4615 | nan |
| 0.8312 | 10.0 | 150 | 0.5363 | 0.4727 | 0.4727 | 0.4071 | 0.4071 | -0.2520 | -0.2520 | -0.0435 | 0.0 | 0.5 | 0.2733 | nan |
| 0.7296 | 11.0 | 165 | 0.5291 | 0.4696 | 0.4696 | 0.4057 | 0.4057 | -0.2353 | -0.2353 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan |
| 0.7941 | 12.0 | 180 | 0.5319 | 0.4708 | 0.4708 | 0.4047 | 0.4047 | -0.2417 | -0.2417 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan |
| 0.6486 | 13.0 | 195 | 0.6787 | 0.5318 | 0.5318 | 0.4516 | 0.4516 | -0.5846 | -0.5846 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan |
| 0.6241 | 14.0 | 210 | 1.0146 | 0.6502 | 0.6502 | 0.5580 | 0.5580 | -1.3687 | -1.3687 | -0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.5868 | 15.0 | 225 | 0.7164 | 0.5464 | 0.5464 | 0.4682 | 0.4682 | -0.6725 | -0.6725 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.5305 | 16.0 | 240 | 0.9064 | 0.6146 | 0.6146 | 0.5173 | 0.5173 | -1.1161 | -1.1161 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.495 | 17.0 | 255 | 1.3860 | 0.7600 | 0.7600 | 0.6433 | 0.6433 | -2.2358 | -2.2358 | -0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.566 | 18.0 | 270 | 0.7618 | 0.5634 | 0.5634 | 0.4730 | 0.4730 | -0.7785 | -0.7785 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan |
| 0.4305 | 19.0 | 285 | 0.8849 | 0.6072 | 0.6072 | 0.5048 | 0.5048 | -1.0659 | -1.0659 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.5108 | 20.0 | 300 | 0.7376 | 0.5544 | 0.5544 | 0.4716 | 0.4716 | -0.7220 | -0.7220 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan |
| 0.44 | 21.0 | 315 | 1.1611 | 0.6956 | 0.6956 | 0.5921 | 0.5921 | -1.7108 | -1.7108 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
| 0.395 | 22.0 | 330 | 1.3004 | 0.7361 | 0.7361 | 0.6078 | 0.6078 | -2.0360 | -2.0360 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3945 | 23.0 | 345 | 0.9376 | 0.6251 | 0.6251 | 0.5272 | 0.5272 | -1.1890 | -1.1890 | -0.2174 | 0.0 | 0.5 | 0.3188 | nan |
| 0.3093 | 24.0 | 360 | 1.3586 | 0.7524 | 0.7524 | 0.6219 | 0.6219 | -2.1719 | -2.1719 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.2676 | 25.0 | 375 | 1.2200 | 0.7130 | 0.7130 | 0.5994 | 0.5994 | -1.8484 | -1.8484 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3257 | 26.0 | 390 | 1.2235 | 0.7140 | 0.7140 | 0.5900 | 0.5900 | -1.8564 | -1.8564 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.4004 | 27.0 | 405 | 1.0978 | 0.6763 | 0.6763 | 0.5624 | 0.5624 | -1.5629 | -1.5629 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.283 | 28.0 | 420 | 1.1454 | 0.6909 | 0.6909 | 0.5697 | 0.5697 | -1.6742 | -1.6742 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3326 | 29.0 | 435 | 1.1214 | 0.6836 | 0.6836 | 0.5646 | 0.5646 | -1.6181 | -1.6181 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
| 0.2632 | 30.0 | 450 | 1.1098 | 0.6801 | 0.6801 | 0.5617 | 0.5617 | -1.5910 | -1.5910 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-blame-assassin | 1cbab05ed0503d4326afd8a62b4432c122b2fd34 | 2022-03-15T22:32:51.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-blame-assassin | 337 | null | transformers | 2,772 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-assassin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-assassin
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4439
- Rmse: 0.9571
- Rmse Blame::a L'assassino: 0.9571
- Mae: 0.7260
- Mae Blame::a L'assassino: 0.7260
- R2: 0.6437
- R2 Blame::a L'assassino: 0.6437
- Cos: 0.7391
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.6287
- Rsa: nan
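
Since the card reports regression-style metrics (RMSE, MAE, R²), a minimal inference sketch could look like this, assuming the checkpoint exposes a single-value regression head through `AutoModelForSequenceClassification` (the example sentence and the label scale are purely illustrative, not documented in this card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "responsibility-framing/predict-perception-xlmr-blame-assassin"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input; the training data and label range are not described in this card.
inputs = tokenizer("Un esempio di frase da valutare.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # predicted blame-perception score
print(score)
```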
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a L'assassino | Mae | Mae Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------------------------:|:------:|:------------------------:|:------:|:-----------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0317 | 1.0 | 15 | 1.1311 | 1.5278 | 1.5278 | 1.3893 | 1.3893 | 0.0919 | 0.0919 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan |
| 0.9475 | 2.0 | 30 | 1.0795 | 1.4926 | 1.4926 | 1.3387 | 1.3387 | 0.1334 | 0.1334 | 0.8261 | 0.0 | 0.5 | 0.6184 | nan |
| 0.9146 | 3.0 | 45 | 1.1092 | 1.5130 | 1.5130 | 1.4078 | 1.4078 | 0.1095 | 0.1095 | 0.4783 | 0.0 | 0.5 | 0.3116 | nan |
| 0.9539 | 4.0 | 60 | 1.1734 | 1.5561 | 1.5561 | 1.4238 | 1.4238 | 0.0580 | 0.0580 | 0.3913 | 0.0 | 0.5 | 0.3614 | nan |
| 0.8665 | 5.0 | 75 | 0.8910 | 1.3560 | 1.3560 | 1.2350 | 1.2350 | 0.2847 | 0.2847 | 0.5652 | 0.0 | 0.5 | 0.4136 | nan |
| 0.6564 | 6.0 | 90 | 0.8469 | 1.3220 | 1.3220 | 1.1570 | 1.1570 | 0.3201 | 0.3201 | 0.3913 | 0.0 | 0.5 | 0.3931 | nan |
| 0.5241 | 7.0 | 105 | 0.6429 | 1.1519 | 1.1519 | 0.9757 | 0.9757 | 0.4838 | 0.4838 | 0.5652 | 0.0 | 0.5 | 0.4222 | nan |
| 0.4589 | 8.0 | 120 | 0.5781 | 1.0923 | 1.0923 | 0.8714 | 0.8714 | 0.5359 | 0.5359 | 0.6522 | 0.0 | 0.5 | 0.4641 | nan |
| 0.4043 | 9.0 | 135 | 0.4525 | 0.9664 | 0.9664 | 0.8257 | 0.8257 | 0.6367 | 0.6367 | 0.5652 | 0.0 | 0.5 | 0.4263 | nan |
| 0.3498 | 10.0 | 150 | 0.4490 | 0.9627 | 0.9627 | 0.8272 | 0.8272 | 0.6395 | 0.6395 | 0.6522 | 0.0 | 0.5 | 0.5144 | nan |
| 0.3505 | 11.0 | 165 | 0.3721 | 0.8763 | 0.8763 | 0.7471 | 0.7471 | 0.7013 | 0.7013 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.3426 | 12.0 | 180 | 0.4117 | 0.9218 | 0.9218 | 0.7477 | 0.7477 | 0.6695 | 0.6695 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.3074 | 13.0 | 195 | 0.3761 | 0.8810 | 0.8810 | 0.7109 | 0.7109 | 0.6981 | 0.6981 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2261 | 14.0 | 210 | 0.3818 | 0.8877 | 0.8877 | 0.7042 | 0.7042 | 0.6935 | 0.6935 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2399 | 15.0 | 225 | 0.3893 | 0.8964 | 0.8964 | 0.7108 | 0.7108 | 0.6874 | 0.6874 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2014 | 16.0 | 240 | 0.4606 | 0.9750 | 0.9750 | 0.8046 | 0.8046 | 0.6302 | 0.6302 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1937 | 17.0 | 255 | 0.4549 | 0.9689 | 0.9689 | 0.7679 | 0.7679 | 0.6348 | 0.6348 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1831 | 18.0 | 270 | 0.4113 | 0.9213 | 0.9213 | 0.6746 | 0.6746 | 0.6698 | 0.6698 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1758 | 19.0 | 285 | 0.4154 | 0.9259 | 0.9259 | 0.7053 | 0.7053 | 0.6665 | 0.6665 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1577 | 20.0 | 300 | 0.3970 | 0.9051 | 0.9051 | 0.7163 | 0.7163 | 0.6813 | 0.6813 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1597 | 21.0 | 315 | 0.4199 | 0.9309 | 0.9309 | 0.7270 | 0.7270 | 0.6629 | 0.6629 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1145 | 22.0 | 330 | 0.4250 | 0.9365 | 0.9365 | 0.6971 | 0.6971 | 0.6588 | 0.6588 | 0.8261 | 0.0 | 0.5 | 0.6594 | nan |
| 0.1349 | 23.0 | 345 | 0.4168 | 0.9275 | 0.9275 | 0.7126 | 0.7126 | 0.6654 | 0.6654 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1481 | 24.0 | 360 | 0.4421 | 0.9552 | 0.9552 | 0.7441 | 0.7441 | 0.6451 | 0.6451 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1188 | 25.0 | 375 | 0.4356 | 0.9481 | 0.9481 | 0.7444 | 0.7444 | 0.6503 | 0.6503 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1119 | 26.0 | 390 | 0.4456 | 0.9590 | 0.9590 | 0.7139 | 0.7139 | 0.6422 | 0.6422 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1282 | 27.0 | 405 | 0.4456 | 0.9589 | 0.9589 | 0.7637 | 0.7637 | 0.6423 | 0.6423 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.142 | 28.0 | 420 | 0.4501 | 0.9637 | 0.9637 | 0.7146 | 0.7146 | 0.6387 | 0.6387 | 0.8261 | 0.0 | 0.5 | 0.6594 | nan |
| 0.126 | 29.0 | 435 | 0.4442 | 0.9575 | 0.9575 | 0.7189 | 0.7189 | 0.6433 | 0.6433 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1308 | 30.0 | 450 | 0.4439 | 0.9571 | 0.9571 | 0.7260 | 0.7260 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-cause-none | 24fb5f707b0aea971d7457ae1152b868f6de90b8 | 2022-03-15T23:44:01.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | responsibility-framing | null | responsibility-framing/predict-perception-xlmr-cause-none | 337 | null | transformers | 2,773 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-cause-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-cause-none
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8639
- Rmse: 1.3661
- Rmse Cause::a Spontanea, priva di un agente scatenante: 1.3661
- Mae: 1.0795
- Mae Cause::a Spontanea, priva di un agente scatenante: 1.0795
- R2: -1.7872
- R2 Cause::a Spontanea, priva di un agente scatenante: -1.7872
- Cos: -0.3043
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3501
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Spontanea, priva di un agente scatenante | Mae | Mae Cause::a Spontanea, priva di un agente scatenante | R2 | R2 Cause::a Spontanea, priva di un agente scatenante | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------------:|:------:|:-----------------------------------------------------:|:-------:|:----------------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0626 | 1.0 | 15 | 0.6787 | 0.8244 | 0.8244 | 0.7453 | 0.7453 | -0.0149 | -0.0149 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 1.0186 | 2.0 | 30 | 0.6769 | 0.8233 | 0.8233 | 0.7457 | 0.7457 | -0.0122 | -0.0122 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 1.0346 | 3.0 | 45 | 0.6812 | 0.8259 | 0.8259 | 0.7489 | 0.7489 | -0.0187 | -0.0187 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 0.9481 | 4.0 | 60 | 1.0027 | 1.0020 | 1.0020 | 0.8546 | 0.8546 | -0.4994 | -0.4994 | -0.3043 | 0.0 | 0.5 | 0.2579 | nan |
| 0.8838 | 5.0 | 75 | 0.9352 | 0.9677 | 0.9677 | 0.8463 | 0.8463 | -0.3985 | -0.3985 | -0.2174 | 0.0 | 0.5 | 0.2966 | nan |
| 0.7971 | 6.0 | 90 | 0.9396 | 0.9700 | 0.9700 | 0.8608 | 0.8608 | -0.4050 | -0.4050 | -0.2174 | 0.0 | 0.5 | 0.3156 | nan |
| 0.8182 | 7.0 | 105 | 0.9485 | 0.9746 | 0.9746 | 0.8509 | 0.8509 | -0.4184 | -0.4184 | -0.1304 | 0.0 | 0.5 | 0.2788 | nan |
| 0.696 | 8.0 | 120 | 1.1396 | 1.0682 | 1.0682 | 0.9309 | 0.9309 | -0.7041 | -0.7041 | -0.1304 | 0.0 | 0.5 | 0.2899 | nan |
| 0.6337 | 9.0 | 135 | 1.3064 | 1.1437 | 1.1437 | 0.9612 | 0.9612 | -0.9536 | -0.9536 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.5308 | 10.0 | 150 | 1.2403 | 1.1144 | 1.1144 | 0.9359 | 0.9359 | -0.8547 | -0.8547 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.5226 | 11.0 | 165 | 1.3433 | 1.1597 | 1.1597 | 0.9542 | 0.9542 | -1.0087 | -1.0087 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.474 | 12.0 | 180 | 1.5321 | 1.2386 | 1.2386 | 1.0340 | 1.0340 | -1.2910 | -1.2910 | -0.3043 | 0.0 | 0.5 | 0.3205 | nan |
| 0.3899 | 13.0 | 195 | 1.6322 | 1.2784 | 1.2784 | 1.0083 | 1.0083 | -1.4408 | -1.4408 | -0.3043 | 0.0 | 0.5 | 0.3590 | nan |
| 0.3937 | 14.0 | 210 | 1.7519 | 1.3244 | 1.3244 | 1.0540 | 1.0540 | -1.6197 | -1.6197 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.4128 | 15.0 | 225 | 1.8588 | 1.3643 | 1.3643 | 1.0765 | 1.0765 | -1.7797 | -1.7797 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.3424 | 16.0 | 240 | 1.7211 | 1.3128 | 1.3128 | 1.0217 | 1.0217 | -1.5737 | -1.5737 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.3307 | 17.0 | 255 | 1.7802 | 1.3351 | 1.3351 | 1.0790 | 1.0790 | -1.6621 | -1.6621 | -0.3043 | 0.0 | 0.5 | 0.3205 | nan |
| 0.2972 | 18.0 | 270 | 1.5272 | 1.2366 | 1.2366 | 0.9945 | 0.9945 | -1.2837 | -1.2837 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2862 | 19.0 | 285 | 1.7213 | 1.3128 | 1.3128 | 1.0574 | 1.0574 | -1.5740 | -1.5740 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2844 | 20.0 | 300 | 1.8999 | 1.3793 | 1.3793 | 1.0930 | 1.0930 | -1.8411 | -1.8411 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2404 | 21.0 | 315 | 1.9806 | 1.4082 | 1.4082 | 1.1221 | 1.1221 | -1.9617 | -1.9617 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2349 | 22.0 | 330 | 1.8649 | 1.3665 | 1.3665 | 1.0953 | 1.0953 | -1.7888 | -1.7888 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2323 | 23.0 | 345 | 1.8256 | 1.3520 | 1.3520 | 1.0694 | 1.0694 | -1.7299 | -1.7299 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.2217 | 24.0 | 360 | 1.9150 | 1.3847 | 1.3847 | 1.1017 | 1.1017 | -1.8636 | -1.8636 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2262 | 25.0 | 375 | 1.8536 | 1.3624 | 1.3624 | 1.0667 | 1.0667 | -1.7719 | -1.7719 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2052 | 26.0 | 390 | 1.7727 | 1.3323 | 1.3323 | 1.0475 | 1.0475 | -1.6508 | -1.6508 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2121 | 27.0 | 405 | 1.8088 | 1.3458 | 1.3458 | 1.0588 | 1.0588 | -1.7048 | -1.7048 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.1723 | 28.0 | 420 | 1.8283 | 1.3530 | 1.3530 | 1.0628 | 1.0628 | -1.7340 | -1.7340 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.1932 | 29.0 | 435 | 1.8566 | 1.3635 | 1.3635 | 1.0763 | 1.0763 | -1.7764 | -1.7764 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2157 | 30.0 | 450 | 1.8639 | 1.3661 | 1.3661 | 1.0795 | 1.0795 | -1.7872 | -1.7872 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/angelinacho-stillconor-touchofray | db555c5a91cadb50dde96b50c4315792968b4fea | 2022-07-19T19:52:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/angelinacho-stillconor-touchofray | 337 | null | transformers | 2,774 | ---
language: en
thumbnail: http://www.huggingtweets.com/angelinacho-stillconor-touchofray/1658260354212/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/859423506592808961/VurGQ0Hk_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1485398297984389121/DmUfFheN_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1375088662589939717/nd6wgtKM_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">✨ nacho // 조혜미 ✨ & conor & ray</div>
<div style="text-align: center; font-size: 14px;">@angelinacho-stillconor-touchofray</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ✨ nacho // 조혜미 ✨ & conor & ray.
| Data | ✨ nacho // 조혜미 ✨ | conor | ray |
| --- | --- | --- | --- |
| Tweets downloaded | 3210 | 3250 | 3208 |
| Retweets | 575 | 100 | 1737 |
| Short tweets | 307 | 443 | 246 |
| Tweets kept | 2328 | 2707 | 1225 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3q995qld/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @angelinacho-stillconor-touchofray's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37ez663h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37ez663h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/angelinacho-stillconor-touchofray')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
conniezyj/DialoGPT-small-snape | d1ad809cb74a47d89535ef08c356ee40f51898a8 | 2021-09-04T06:17:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | conniezyj | null | conniezyj/DialoGPT-small-snape | 336 | null | transformers | 2,775 | ---
tags:
- conversational
---
# Snape DialoGPT Model
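
A minimal chat-loop sketch (not part of the original card), following the standard DialoGPT usage pattern:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("conniezyj/DialoGPT-small-snape")
model = AutoModelForCausalLM.from_pretrained("conniezyj/DialoGPT-small-snape")

chat_history_ids = None
for step in range(3):
    # Encode the user input and append the end-of-sequence token
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_input_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_input_ids], dim=-1)
    # Generate a reply while keeping the conversation history in context
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Snape:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```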
|
facebook/xlm-roberta-xxl | cf077058541d380b377eddd9a4f4c0137e1f6065 | 2022-01-28T16:32:37.000Z | [
"pytorch",
"xlm-roberta-xl",
"fill-mask",
"multilingual",
"arxiv:2105.00572",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | facebook | null | facebook/xlm-roberta-xxl | 336 | 1 | transformers | 2,776 | ---
language: multilingual
license: mit
---
# XLM-RoBERTa-XL (xxlarge-sized model)
XLM-RoBERTa-XL model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
Disclaimer: The team releasing XLM-RoBERTa-XL did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa-XL is an extra-large multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, or from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa-XL model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta-xl) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
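
For example, a sequence-classification head can be initialized from this checkpoint and then fine-tuned on labeled data (a sketch; the label count below is only an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xxl")
# num_labels=2 is only an example; set it to match your task
model = AutoModelForSequenceClassification.from_pretrained("facebook/xlm-roberta-xxl", num_labels=2)
# The classification head is freshly initialized and should be fine-tuned, e.g. with the Trainer API.
```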
## Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='facebook/xlm-roberta-xxl')
>>> unmasker("Europe is a <mask> continent.")
[{'score': 0.22996895015239716,
'token': 28811,
'token_str': 'European',
'sequence': 'Europe is a European continent.'},
{'score': 0.14307449758052826,
'token': 21334,
'token_str': 'large',
'sequence': 'Europe is a large continent.'},
{'score': 0.12239163368940353,
'token': 19336,
'token_str': 'small',
'sequence': 'Europe is a small continent.'},
{'score': 0.07025063782930374,
'token': 18410,
'token_str': 'vast',
'sequence': 'Europe is a vast continent.'},
{'score': 0.032869212329387665,
'token': 6957,
'token_str': 'big',
'sequence': 'Europe is a big continent.'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('facebook/xlm-roberta-xxl')
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-roberta-xxl")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-00572,
author = {Naman Goyal and
Jingfei Du and
Myle Ott and
Giri Anantharaman and
Alexis Conneau},
title = {Larger-Scale Transformers for Multilingual Masked Language Modeling},
journal = {CoRR},
volume = {abs/2105.00572},
year = {2021},
url = {https://arxiv.org/abs/2105.00572},
eprinttype = {arXiv},
eprint = {2105.00572},
timestamp = {Wed, 12 May 2021 15:54:31 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-00572.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
liam168/trans-opus-mt-en-zh | 88cd74b4297abb5da53dc8ac95362ced458dd242 | 2021-07-16T04:17:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"zh",
"transformers",
"translation",
"autotrain_compatible"
] | translation | false | liam168 | null | liam168/trans-opus-mt-en-zh | 336 | 4 | transformers | 2,777 | ---
language:
- en
- zh
tags:
- translation
widget:
- text: "I like to study Data Science and Machine Learning."
---
# liam168/trans-opus-mt-en-zh
## Model description
* source group: English
* target group: Chinese
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
## How to use
```python
>>> from transformers import AutoModelWithLMHead,AutoTokenizer,pipeline
>>> mode_name = 'liam168/trans-opus-mt-en-zh'
>>> model = AutoModelWithLMHead.from_pretrained(mode_name)
>>> tokenizer = AutoTokenizer.from_pretrained(mode_name)
>>> translation = pipeline("translation_en_to_zh", model=model, tokenizer=tokenizer)
>>> translation('I like to study Data Science and Machine Learning.', max_length=400)
[{'translation_text': '我喜欢学习数据科学和机器学习'}]
```
## Contact
[email protected]
|
sentence-transformers/nli-distilbert-base | e6725b7fc96c36e01905f517049ce2f6c0473de9 | 2022-06-15T23:54:49.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/nli-distilbert-base | 336 | null | sentence-transformers | 2,778 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/nli-distilbert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-distilbert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
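
The embeddings can then be compared directly, for example with cosine similarity (an illustrative follow-up, not from the original card):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/nli-distilbert-base')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two sentence embeddings
```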
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-distilbert-base')
model = AutoModel.from_pretrained('sentence-transformers/nli-distilbert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-distilbert-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
edbeeching/decision-transformer-gym-hopper-expert | e4b82a76587437ed6bb12380330ddb56b855df94 | 2022-06-29T19:12:17.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
] | reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-hopper-expert | 336 | 6 | transformers | 2,779 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on expert trajectories sampled from the Gym Hopper environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on expert trajectories sampled from the Gym Hopper environment.
The following normalization coefficients are required to use this model:
mean = [ 1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432, 0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673]
std = [0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924, 1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027 ]
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
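
As a rough illustration (not from the original card), the coefficients above are applied to raw Hopper observations before they are passed to the model's `states` input:

```python
import numpy as np
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained("edbeeching/decision-transformer-gym-hopper-expert")

state_mean = np.array([1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432,
                       0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673])
state_std = np.array([0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924,
                      1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027])

raw_state = np.zeros(11)  # placeholder for an observation returned by the Hopper environment
normalized_state = (raw_state - state_mean) / state_std
states = torch.from_numpy(normalized_state).float().reshape(1, 1, -1)  # (batch, seq_len, state_dim)
# `states` is then passed to the model together with actions, returns_to_go and timesteps;
# see the linked Colab notebook / example script for the full rollout loop.
```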
|
facebook/wav2vec2-conformer-rel-pos-large-960h-ft | ca7f36f527f234b3cd4f05ecee30361f971e8e33 | 2022-06-15T08:12:40.000Z | [
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-conformer-rel-pos-large-960h-ft | 336 | 2 | transformers | 2,780 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-conformer-rel-pos-large-960h-ft
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.85
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.83
---
# Wav2Vec2-Conformer-Large-960h with Relative Position Embeddings
Wav2Vec2-Conformer with relative position embeddings, pretrained and **fine-tuned on 960 hours of Librispeech** on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
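
If your audio is at a different sampling rate, it can be resampled first, for example with `torchaudio` (a small sketch, assuming `torchaudio` is installed and `speech.wav` is a local file):

```python
import torchaudio

waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)
```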
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-conformer-rel-pos-large-960h-ft** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ConformerForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft")
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.85 | 3.82 | |
IDEA-CCNL/Wenzhong-GPT2-3.5B | cf234d0e3a6d1e123b7a68ac294ab8d519d0f39e | 2022-04-15T09:05:09.000Z | [
"pytorch",
"gpt2",
"text-generation",
"zh",
"transformers",
"license:apache-2.0"
] | text-generation | false | IDEA-CCNL | null | IDEA-CCNL/Wenzhong-GPT2-3.5B | 335 | 2 | transformers | 2,781 | ---
language:
- zh
inference:
parameters:
max_new_tokens: 128
repetition_penalty: 25.0
top_p: 0.9
do_sample: True
license: apache-2.0
---
# Wenzhong-GPT2-3.5B model (Chinese), one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
As is well known, unidirectional language models built on a decoder structure, such as GPT, have strong generation ability. **The 3.5-billion-parameter Wenzhong-GPT2-3.5B model was trained on 100GB of common Chinese data for 28 hours on 32 A100 GPUs,** making it the largest open-source **Chinese GPT2 model**. **It performs well at Chinese text continuation.**
## Usage
### load model
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Wenzhong-GPT2-3.5B')
model = GPT2Model.from_pretrained('IDEA-CCNL/Wenzhong-GPT2-3.5B')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### generation
```python
from transformers import pipeline, set_seed
set_seed(55)
generator = pipeline('text-generation', model='IDEA-CCNL/Wenzhong-GPT2-3.5B')
generator("北京位于", max_length=30, num_return_sequences=1)
```
## Citation
If you find this resource useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
clancystudios/DialoGPT-medium-Morty | c5f3723dc18c41a2cf9dca1b2bf1170337b730a9 | 2022-02-07T12:38:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | clancystudios | null | clancystudios/DialoGPT-medium-Morty | 335 | null | transformers | 2,782 | ---
tags:
- conversational
--- |
facebook/vit-mae-large | 8f4b5ad20e1cb9b9d1a1147fb02c9ccd39d2ea15 | 2022-03-29T17:14:04.000Z | [
"pytorch",
"tf",
"vit_mae",
"pretraining",
"dataset:imagenet-1k",
"arxiv:2111.06377",
"transformers",
"vision",
"license:apache-2.0"
] | null | false | facebook | null | facebook/vit-mae-large | 335 | null | transformers | 2,783 | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (large-sized model) pre-trained with MAE
Vision Transformer (ViT) model pre-trained using the MAE method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).
Disclaimer: The team releasing MAE did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
During pre-training, one randomly masks out a high portion (75%) of the image patches. First, the encoder is used to encode the visual patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visual patches and mask tokens as input and reconstructs raw pixel values for the masked positions.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoFeatureExtractor, ViTMAEForPreTraining
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/vit-mae-large')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-large')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
author = {Kaiming He and
Xinlei Chen and
Saining Xie and
Yanghao Li and
Piotr Doll{\'{a}}r and
Ross B. Girshick},
title = {Masked Autoencoders Are Scalable Vision Learners},
journal = {CoRR},
volume = {abs/2111.06377},
year = {2021},
url = {https://arxiv.org/abs/2111.06377},
eprinttype = {arXiv},
eprint = {2111.06377},
timestamp = {Tue, 16 Nov 2021 12:12:31 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
linydub/bart-large-samsum | 5d32c801b99d8605a10ac38ddcaa6a186d81fcae | 2021-09-17T00:55:29.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"en",
"dataset:samsum",
"transformers",
"summarization",
"azureml",
"azure",
"codecarbon",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | linydub | null | linydub/bart-large-samsum | 335 | 6 | transformers | 2,784 | ---
language:
- en
license: apache-2.0
tags:
- summarization
- azureml
- azure
- codecarbon
- bart
datasets:
- samsum
metrics:
- rouge
model-index:
- name: bart-large-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization"
type: samsum
metrics:
- name: Validation ROGUE-1
type: rouge-1
value: 55.0234
- name: Validation ROGUE-2
type: rouge-2
value: 29.6005
- name: Validation ROGUE-L
type: rouge-L
value: 44.914
- name: Validation ROGUE-Lsum
type: rouge-Lsum
value: 50.464
- name: Test ROGUE-1
type: rouge-1
value: 53.4345
- name: Test ROGUE-2
type: rouge-2
value: 28.7445
- name: Test ROGUE-L
type: rouge-L
value: 44.1848
- name: Test ROGUE-Lsum
type: rouge-Lsum
value: 49.1874
widget:
- text: |
Henry: Hey, is Nate coming over to watch the movie tonight?
Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet?
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend.
Henry: Nice, I'm really looking forward to seeing them again.
---
## `bart-large-samsum`
This model was trained using Microsoft's [`Azure Machine Learning Service`](https://azure.microsoft.com/en-us/services/machine-learning). It was fine-tuned on the [`samsum`](https://huggingface.co/datasets/samsum) corpus from the [`facebook/bart-large`](https://huggingface.co/facebook/bart-large) checkpoint.
## Usage (Inference)
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="linydub/bart-large-samsum")
input_text = '''
Henry: Hey, is Nate coming over to watch the movie tonight?
Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet?
Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend.
Henry: Nice, I'm really looking forward to seeing them again.
'''
summarizer(input_text)
```
## Fine-tune on AzureML
[](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Flinydub%2Fazureml-greenai-txtsum%2Fmain%2F.cloud%2Ftemplate-hub%2Flinydub%2Farm-bart-large-samsum.json) [](http://armviz.io/#/?load=https://raw.githubusercontent.com/linydub/azureml-greenai-txtsum/main/.cloud/template-hub/linydub/arm-bart-large-samsum.json)
More information about the fine-tuning process (including samples and benchmarks):
**[Preview]** https://github.com/linydub/azureml-greenai-txtsum
## Resource Usage
These results were retrieved from [`Azure Monitor Metrics`](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics). All experiments were ran on AzureML low priority compute clusters.
| Key | Value |
| --- | ----- |
| Region | US West 2 |
| AzureML Compute SKU | STANDARD_ND40RS_V2 |
| Compute SKU GPU Device | 8 x NVIDIA V100 32GB (NVLink) |
| Compute Node Count | 1 |
| Run Duration | 6m 48s |
| Compute Cost (Dedicated/LowPriority) | $2.50 / $0.50 USD |
| Average CPU Utilization | 47.9% |
| Average GPU Utilization | 69.8% |
| Average GPU Memory Usage | 25.71 GB |
| Total GPU Energy Usage | 370.84 kJ |
*Compute cost ($) is estimated from the run duration, number of compute nodes utilized, and SKU's price per hour. Updated SKU pricing could be found [here](https://azure.microsoft.com/en-us/pricing/details/machine-learning).
### Carbon Emissions
These results were obtained using [`CodeCarbon`](https://github.com/mlco2/codecarbon). The carbon emissions are estimated from training runtime only (excl. setup and evaluation runtimes).
| Key | Value |
| --- | ----- |
| timestamp | 2021-09-16T23:54:25 |
| duration | 263.2430217266083 |
| emissions | 0.029715544634717518 |
| energy_consumed | 0.09985062041235725 |
| country_name | USA |
| region | Washington |
| cloud_provider | azure |
| cloud_region | westus2 |
## Hyperparameters
- max_source_length: 512
- max_target_length: 90
- fp16: True
- seed: 1
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 1
- learning_rate: 5e-5
- num_train_epochs: 3.0
- weight_decay: 0.1
## Results
| ROUGE | Score |
| ----- | ----- |
| eval_rouge1 | 55.0234 |
| eval_rouge2 | 29.6005 |
| eval_rougeL | 44.914 |
| eval_rougeLsum | 50.464 |
| predict_rouge1 | 53.4345 |
| predict_rouge2 | 28.7445 |
| predict_rougeL | 44.1848 |
| predict_rougeLsum | 49.1874 |
| Metric | Value |
| ------ | ----- |
| epoch | 3.0 |
| eval_gen_len | 30.6027 |
| eval_loss | 1.4327096939086914 |
| eval_runtime | 22.9127 |
| eval_samples | 818 |
| eval_samples_per_second | 35.701 |
| eval_steps_per_second | 0.306 |
| predict_gen_len | 30.4835 |
| predict_loss | 1.4501988887786865 |
| predict_runtime | 26.0269 |
| predict_samples | 819 |
| predict_samples_per_second | 31.467 |
| predict_steps_per_second | 0.269 |
| train_loss | 1.2014821151207233 |
| train_runtime | 263.3678 |
| train_samples | 14732 |
| train_samples_per_second | 167.811 |
| train_steps_per_second | 1.321 |
| total_steps | 348 |
| total_flops | 4.26008990669865e+16 |
|
mrm8488/t5-base-finetuned-squadv2 | 58b740046da740a6321ce1ccc221e4a65fc3e934 | 2020-12-11T21:56:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:squad_v2",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-squadv2 | 335 | 1 | transformers | 2,785 | ---
language: en
datasets:
- squad_v2
---
# T5-base fine-tuned on SQuAD v2
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [SQuAD v2](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓
Dataset ID: ```squad_v2``` from [Huggingface/NLP](https://github.com/huggingface/nlp)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| squad_v2 | train | 130319 |
| squad_v2 | valid | 11873 |
How to load it from [nlp](https://github.com/huggingface/nlp)
```python
train_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('squad_v2', split=nlp.Split.VALIDATION)
```
Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
## Results 📝
| Metric | # Value |
| ------ | --------- |
| **EM** | **77.64** |
| **F1** | **81.32** |
## Model in Action 🚀
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
def get_answer(question, context):
input_text = "question: %s context: %s" % (question, context)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'])
return tokenizer.decode(output[0])
context = "Manuel have created RuPERTa-base with the support of HF-Transformers and Google"
question = "Who has supported Manuel?"
get_answer(question, context)
# output: 'HF-Transformers and Google'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
nvidia/segformer-b1-finetuned-cityscapes-1024-1024 | f084b5ac89d958e98811b18cf5cae9eb9304250d | 2022-07-20T09:54:04.000Z | [
"pytorch",
"tf",
"segformer",
"dataset:cityscapes",
"arxiv:2105.15203",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
] | image-segmentation | false | nvidia | null | nvidia/segformer-b1-finetuned-cityscapes-1024-1024 | 335 | 2 | transformers | 2,786 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://www.researchgate.net/profile/Anurag-Arnab/publication/315881952/figure/fig5/AS:667673876779033@1536197265755/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.jpg
example_title: Road
---
# SegFormer (b1-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image of the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b1-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
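As a small addition that is not part of the original card, the raw logits can be upsampled to the input resolution and turned into a per-pixel class map (a sketch continuing from the variables above):
```python
import torch

# PIL reports (width, height); interpolate expects (height, width)
upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
# Per-pixel Cityscapes class indices, shape (height, width)
segmentation_map = upsampled_logits.argmax(dim=1)[0]
```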
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
trituenhantaoio/bert-base-vietnamese-uncased | b1a91594cd7d15a9e76bf92656ca9b79f8e66505 | 2021-05-20T08:06:49.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | null | false | trituenhantaoio | null | trituenhantaoio/bert-base-vietnamese-uncased | 335 | 2 | transformers | 2,787 | ## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
model = BertForSequenceClassification.from_pretrained("trituenhantaoio/bert-base-vietnamese-uncased")
tokenizer = BertTokenizer.from_pretrained("trituenhantaoio/bert-base-vietnamese-uncased")
```
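Note that the sequence-classification head loaded above is randomly initialized until you fine-tune it on your own task. As an addition not in the original card, a quick sanity check of the pretrained weights is masked-token prediction (the example sentence is illustrative):
```python
from transformers import pipeline

# Fill-mask uses the pretrained MLM head shipped with the checkpoint
fill_mask = pipeline("fill-mask", model="trituenhantaoio/bert-base-vietnamese-uncased")
print(fill_mask("hà nội là thủ đô của [MASK]."))
```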
### References
```
@article{ttnt2020bert,
title={Vietnamese BERT: Pretrained on News and Wiki},
author={trituenhantao.io},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/trituenhantaoio/vn-bert-base-uncased}},
}
```
[trituenhantao.io](https://trituenhantao.io) |
ESPersonnel/DialoGPT-small-got | 467ce93aec63e38b7c93deaec5aa2e677cf0c214 | 2021-08-28T20:16:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | ESPersonnel | null | ESPersonnel/DialoGPT-small-got | 334 | null | transformers | 2,788 | ---
tags:
- conversational
---
# Game of Thrones DialoGPT Model
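The card provides no usage instructions; a minimal interactive chat loop following the standard DialoGPT recipe (the generation settings are illustrative, not taken from the author) could look like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ESPersonnel/DialoGPT-small-got")
model = AutoModelForCausalLM.from_pretrained("ESPersonnel/DialoGPT-small-got")

chat_history_ids = None
for step in range(3):
    # Encode the user input and append the end-of-string token
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    # Prepend the conversation history (if any) so the model sees the full dialogue
    bot_input_ids = (
        torch.cat([chat_history_ids, new_input_ids], dim=-1) if chat_history_ids is not None else new_input_ids
    )
    # Generate a response, capping the total conversation length
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```
|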
huggingtweets/logicaldota2 | 30a432d77bab271f0fad26e8ec29cab36e8c419e | 2021-05-22T12:29:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/logicaldota2 | 334 | null | transformers | 2,789 | ---
language: en
thumbnail: https://www.huggingtweets.com/logicaldota2/1614112538704/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1222935009553723392/JERvOrH1_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">adarsh 🤖 AI Bot </div>
<div style="font-size: 15px">@logicaldota2 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@logicaldota2's tweets](https://twitter.com/logicaldota2).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 309 |
| Retweets | 20 |
| Short tweets | 146 |
| Tweets kept | 143 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2u5hbemi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @logicaldota2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mx6u2xh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mx6u2xh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/logicaldota2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rovai/chatbotmedium4 | abe6a511567c09781921746077d904c68c1494a9 | 2021-12-01T16:55:39.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | rovai | null | rovai/chatbotmedium4 | 334 | null | transformers | 2,790 | ---
tags:
- conversational
---
# chatbot4 |
gloomyworm/DialoGPT-medium-ortho | 1a3ab02c3ae664a88b6e3592251f328956f7e628 | 2022-06-14T23:05:27.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | gloomyworm | null | gloomyworm/DialoGPT-medium-ortho | 334 | null | transformers | 2,791 | ---
tags:
- conversational
---
# Ortho DialoGPT Model |
S34NtheGuy/DialoGPT-small-cursedryno | 77bf69d02edce8c0aa232666b2c1bd134fcd653d | 2021-10-10T21:57:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | S34NtheGuy | null | S34NtheGuy/DialoGPT-small-cursedryno | 333 | null | transformers | 2,792 | ---
tags:
- conversational
---
# DialoGPT chat bot model using discord messages as data |
abhisht/DialoGPT-medium-Emilybot | ff92d16cf0e8cfe52c54f8fc5a39b0e8d4d62025 | 2021-09-29T13:01:33.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | abhisht | null | abhisht/DialoGPT-medium-Emilybot | 333 | 1 | transformers | 2,793 | ---
tags:
- conversational
---
# Emilybot DialoGPT Model |
google/tapas-base-finetuned-tabfact | 39f040cbaef2ce4b065392c9f3a22fc80f0e7f64 | 2021-11-29T13:12:54.000Z | [
"pytorch",
"tf",
"tapas",
"text-classification",
"en",
"dataset:tab_fact",
"arxiv:2010.00571",
"arxiv:2004.02349",
"transformers",
"sequence-classification",
"license:apache-2.0"
] | text-classification | false | google | null | google/tapas-base-finetuned-tabfact | 333 | null | transformers | 2,794 | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS base model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_base`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly training this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
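As a minimal sketch that is not taken from the official documentation (the example table, sentence, and label interpretation are assumptions), inference could look like this:
```python
import pandas as pd
from transformers import TapasTokenizer, TapasForSequenceClassification

model_name = "google/tapas-base-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# TAPAS expects all table cells as strings
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2148000", "3769000"]})
sentence = "Berlin has a larger population than Paris."

inputs = tokenizer(table=table, queries=[sentence], padding="max_length", return_tensors="pt")
outputs = model(**inputs)
# Assumption: index 1 = supported/entailed, index 0 = refuted; check model.config.id2label
predicted_class = int(outputs.logits.argmax(-1))
print(predicted_class)
```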
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` |
ricardo-filho/bert-base-portuguese-cased-nli-assin-2 | 1946af0f5090676d2aaf4774efb123bdb7735bcd | 2021-08-03T19:29:54.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | ricardo-filho | null | ricardo-filho/bert-base-portuguese-cased-nli-assin-2 | 333 | null | sentence-transformers | 2,795 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ricardo-filho/bert-base-portuguese-cased-nli-assin-2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ricardo-filho/bert-base-portuguese-cased-nli-assin-2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ricardo-filho/bert-base-portuguese-cased-nli-assin-2')
model = AutoModel.from_pretrained('ricardo-filho/bert-base-portuguese-cased-nli-assin-2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ricardo-filho/bert-base-portuguese-cased-nli-assin-2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 407 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 41,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
doc2query/msmarco-chinese-mt5-base-v1 | 50eeb2d317ba2f8c55ed1fb1fac6a9b57d86490c | 2022-04-29T11:47:50.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"zh",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-chinese-mt5-base-v1 | 333 | 1 | transformers | 2,796 | ---
language: zh
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python(英國發音:/ˈpaɪθən/ 美國發音:/ˈpaɪθɑːn/),是一种广泛使用的解释型、高级和通用的编程语言。Python支持多种编程范型,包括函数式、指令式、反射式、结构化和面向对象编程。它拥有动态类型系统和垃圾回收功能,能够自动管理内存使用,并且其本身拥有一个巨大而广泛的标准库。它的语言结构以及面向对象的方法旨在帮助程序员为小型的和大型的项目编写清晰的、合乎逻辑的代码。"
license: apache-2.0
---
# doc2query/msmarco-chinese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-chinese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python(英國發音:/ˈpaɪθən/ 美國發音:/ˈpaɪθɑːn/),是一种广泛使用的解释型、高级和通用的编程语言。Python支持多种编程范型,包括函数式、指令式、反射式、结构化和面向对象编程。它拥有动态类型系统和垃圾回收功能,能够自动管理内存使用,并且其本身拥有一个巨大而广泛的标准库。它的语言结构以及面向对象的方法旨在帮助程序员为小型的和大型的项目编写清晰的、合乎逻辑的代码。"
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a (query, passage) from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
cocoshe/gpt2-chinese-gen-ads-by-keywords | 0f9c3fa0fb70a96a73bae211de3dc88099d65c3a | 2022-05-11T08:08:23.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | cocoshe | null | cocoshe/gpt2-chinese-gen-ads-by-keywords | 333 | 1 | transformers | 2,797 | ---
license: apache-2.0
---
[Qianyan AdvertiseGen ad copy generation dataset](https://www.luge.ai/#/luge/dataDetail?id=9)
> Only the `.bin` (PyTorch) weights are supported.
The model was fine-tuned on this Qianyan AdvertiseGen dataset for 5 epochs.
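The usage snippet below calls a helper `gen_ads` that the original card does not define; one possible minimal implementation is sketched here (the loading classes and sampling settings are assumptions, not taken from the original repository):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextGenerationPipeline

model_name = "cocoshe/gpt2-chinese-gen-ads-by-keywords"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
generator = TextGenerationPipeline(model=model, tokenizer=tokenizer)

def gen_ads(input_text, max_length=256):
    # Generate an ad-copy continuation of the keyword prompt; tune max_length for your use case
    result = generator(input_text, max_length=max_length, do_sample=True, top_p=0.95)
    return result[0]["generated_text"]
```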
```python
input_text = '类型#裙*材质#针织*风格#简约*风格#青春*风格#清新*风格#性感*图案#条纹*图案#撞色*裙下摆#开叉*裙长#连衣裙*裙款式#拼接*裙款式#吊带'
output_text = gen_ads(input_text)
output_text = output_text.replace(' ', '')
output_text = output_text[len(input_text):]
output_text
```
Output (in practice, take care to control `max_length`):
```python
output_text='夏天穿的针织衫,搭配简约上衣+牛仔裙,一下子就活泼起来了好吧,就这么简约的蓝色衬托出女性优雅的气质,搭出一派优雅女人味,让人印象深刻哦~好了,今天是秋天来了,天气凉了,是不是该穿上针织呢,秋天会是一个充满阳光的日子呢?让我们一起去看看今天的穿搭吧!首先是白色风衣,其次是棉质风衣。在秋天我们应该穿丝缎或者花边,这种比较清新的风格一定不会让人觉得很成熟,而且又是简约款式,显得自然、有气质。再就是皮草风衣啦,一件白皮草+一件牛仔+两件棉纱的搭配就很潮'
```
|
chanind/frame-semantic-transformer-base | 617c1d96525d1fa56cc04f30e29cc3883bb99125 | 2022-05-24T20:10:35.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | chanind | null | chanind/frame-semantic-transformer-base | 332 | null | transformers | 2,798 | ---
license: apache-2.0
---
Fine-tuned T5 base model for use as a frame semantic parser in the [Frame Semantic Transformer](https://github.com/chanind/frame-semantic-transformer) project. This model is trained on data from [FrameNet 1.7](https://framenet2.icsi.berkeley.edu/).
### Usage
This is meant to be used as part of [Frame Semantic Transformer](https://github.com/chanind/frame-semantic-transformer). See that project for usage instructions.
### Tasks
This model is trained to perform 3 tasks related to semantic frame parsing:
1. Identify frame trigger locations in the text
2. Classify the frame given a trigger location
3. Extract frame elements in the sentence
### Performance
This model is trained and evaluated using the same train/dev/test splits from FrameNet 1.7 annotated corpora as used by [Open Sesame](https://github.com/swabhs/open-sesame).
| Task | F1 Score (Dev) | F1 Score (Test) |
| ---------------------- | -------------- | --------------- |
| Trigger identification | 0.78 | 0.71 |
| Frame Classification | 0.89 | 0.87 |
| Argument Extraction | 0.74 | 0.72 |
|
clampert/multilingual-sentiment-covid19 | eea3f8e26d2828dbf9f0f1d939dd868396ec863c | 2021-12-14T18:57:07.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"multilingual",
"transformers",
"sentiment-analysis",
"license:apache-2.0"
] | text-classification | false | clampert | null | clampert/multilingual-sentiment-covid19 | 331 | 1 | transformers | 2,799 | ---
pipeline_tag: text-classification
language: multilingual
license: apache-2.0
tags:
- "sentiment-analysis"
- "multilingual"
widget:
- text: "I am very happy."
example_title: "English"
- text: "Heute bin ich schlecht drauf."
example_title: "Deutsch"
- text: "Quel cauchemard!"
example_title: "Francais"
- text: "ฉันรักฤดูใบไม้ผลิ"
example_title: "ภาษาไทย"
---
# Multi-lingual sentiment prediction trained from COVID19-related tweets
Repository: [https://github.com/clampert/multilingual-sentiment-analysis/](https://github.com/clampert/multilingual-sentiment-analysis/)
The model was trained on a large-scale (18,437,530 examples) dataset of
multi-lingual tweets collected between March 2020 and November 2021
using Twitter’s Streaming API with varying COVID19-related keywords.
Labels were auto-generated based on the presence of positive and
negative emoticons. For details on the dataset, see our IEEE BigData
2021 publication.
Base model is [sentence-transformers/stsb-xlm-r-multilingual](https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual).
It was finetuned for sequence classification with `positive`
and `negative` labels for two epochs (48 hours on 8xP100 GPUs).
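Since the checkpoint is a standard sequence-classification model, it can be used directly with the `pipeline` API; the following usage sketch is not part of the original card:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="clampert/multilingual-sentiment-covid19")
print(classifier("Heute bin ich schlecht drauf."))
# e.g. [{'label': 'negative', 'score': ...}] -- the exact label strings come from the model config
```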
## Citation
If you use our model in your work, please cite:
```
@inproceedings{lampert2021overcoming,
title={Overcoming Rare-Language Discrimination in Multi-Lingual Sentiment Analysis},
author={Jasmin Lampert and Christoph H. Lampert},
booktitle={IEEE International Conference on Big Data (BigData)},
year={2021},
note={Special Session: Machine Learning on Big Data},
}
```
Enjoy!
|