modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Kevincp560/distilbart-cnn-6-6-finetuned-pubmed | e0ae1c730da838d3fa0746669a63165714b9ec05 | 2022-03-04T17:56:48.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:pub_med_summarization_dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Kevincp560 | null | Kevincp560/distilbart-cnn-6-6-finetuned-pubmed | 16 | null | transformers | 9,300 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: distilbart-cnn-6-6-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: pub_med_summarization_dataset
type: pub_med_summarization_dataset
args: document
metrics:
- name: Rouge1
type: rouge
value: 39.2769
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-6-6-finetuned-pubmed
This model is a fine-tuned version of [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0648
- Rouge1: 39.2769
- Rouge2: 15.876
- Rougel: 24.2306
- Rougelsum: 35.267
- Gen Len: 141.8565
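The card does not include an inference example; the following is a minimal sketch using the standard `transformers` summarization pipeline (the input text is an illustrative placeholder, not taken from the evaluation data):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline(
    "summarization",
    model="Kevincp560/distilbart-cnn-6-6-finetuned-pubmed",
)

article = (
    "Multiple sclerosis is a chronic autoimmune disease of the central nervous "
    "system in which the immune system attacks the myelin sheath of nerve fibers, "
    "leading to a wide range of neurological symptoms."
)
summary = summarizer(article, max_length=142, min_length=20, truncation=True)
print(summary[0]["summary_text"])
```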
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.2215 | 1.0 | 4000 | 2.0781 | 37.2476 | 14.2852 | 22.6875 | 33.1607 | 141.97 |
| 2.0105 | 2.0 | 8000 | 2.0217 | 37.8038 | 14.7869 | 23.2025 | 33.7069 | 141.918 |
| 1.8331 | 3.0 | 12000 | 2.0243 | 39.0497 | 15.8077 | 24.2237 | 34.9371 | 141.822 |
| 1.6936 | 4.0 | 16000 | 2.0487 | 38.7059 | 15.4364 | 23.8514 | 34.7771 | 141.878 |
| 1.5817 | 5.0 | 20000 | 2.0648 | 39.2769 | 15.876 | 24.2306 | 35.267 | 141.8565 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
sultan/BioM-BERT-PubMed-PMC-Large | fc12fe4acc99d4ee412fcd3fe768b91d851a7ec8 | 2022-03-06T19:39:01.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | sultan | null | sultan/BioM-BERT-PubMed-PMC-Large | 16 | null | transformers | 9,301 | # BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.
# Model Description
This model was pre-trained with the ELECTRA implementation of BERT, which omits Next Sentence Prediction and introduces a dynamic masking loss function in place of the ELECTRA objective. Since the model uses the ELECTRA implementation of BERT, the architecture of the model in the Hugging Face library is ELECTRA. The model was pre-trained on a TPUv3-512 for 690K steps with a batch size of 4,192 on both PubMed Abstracts and PMC full articles, using a general-domain vocabulary (EN Wiki + Books). This design choice helps the model achieve state-of-the-art results on certain biomedical text classification tasks such as ChemProt.
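Because the checkpoint is exposed as an ELECTRA model in the Hugging Face library, it can be loaded with the generic auto classes. The snippet below is a minimal feature-extraction sketch (it assumes the repository ships a compatible tokenizer and is not an official example from the authors):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "sultan/BioM-BERT-PubMed-PMC-Large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # resolves to an ELECTRA encoder

inputs = tokenizer("Aspirin inhibits platelet aggregation.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```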
To help researchers with limited resources fine-tune larger models, we created an example with PyTorch XLA. PyTorch XLA (https://github.com/pytorch/xla) is a library that allows you to use PyTorch on TPUs, which are provided for free by Google Colab and Kaggle. Follow this example to work with PyTorch/XLA [Link](https://github.com/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb). In this example we achieve an 80.74 micro F1 score on the ChemProt task with BioM-ALBERTxxlarge. Fine-tuning takes 43 minutes for 5 epochs.
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints. We also updated the repo with a couple of examples on how to fine-tune LMs on text classification and question answering tasks such as ChemProt, SQuAD, and BioASQ.
# Colab Notebook Examples
BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)
BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)
BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)
Text Classification Task With HuggingFace Transformers and PyTorchXLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# Acknowledgment
We would like to acknowledge the support we received from the TensorFlow Research Cloud (TFRC) team, who granted us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` |
hyechanjun/interview-question-remake | c4bd2f0f8dae1d08a3d4ff5c53ba705a84b575f6 | 2022-03-07T17:57:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:INTERVIEW: NPR Media Dialog Transcripts",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | hyechanjun | null | hyechanjun/interview-question-remake | 16 | null | transformers | 9,302 | ---
datasets:
- "INTERVIEW: NPR Media Dialog Transcripts"
---
# AI Interviewer Question-Asking Model
For a Senior Project at Calvin University
Created by: Hyechan Jun, Ha-Ram Koo, and Advait Scaria
This model is fine-tuned on facebook/bart-base to generate sequences ending in a question mark (?). It is a remake of an earlier model that had errors in its training and validation datasets. |
Chayawat/opus-mt-en-mul-finetuned-en-to-th | b22e798c4b6cb81946d18f49a95c7926b0626979 | 2022-03-11T03:32:13.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Chayawat | null | Chayawat/opus-mt-en-mul-finetuned-en-to-th | 16 | null | transformers | 9,303 | Entry not found |
edubz/anne_bradstreet | 3742af577c35ec39b2c9533b7e08d4690ae42bbc | 2022-03-09T23:44:03.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | edubz | null | edubz/anne_bradstreet | 16 | 1 | transformers | 9,304 | ---
license: mit
---
This model was trained on a new dataset composed of the available poems by Anne Bradstreet hosted by [Public Domain Poetry](https://www.public-domain-poetry.com/anne-bradstreet). Specifically, I downloaded all 40 poems and fine-tuned a bert-base-uncased text classification model on Amazon SageMaker. For the negative class, I generated GPT-2 samples of length 70; that is to say, for each line of Bradstreet I generated a generic GPT-2 response and treated these responses as the negative class.
The classifier saw a total of 6,947 positive lines written by Anne Bradstreet and 5,219 lines generated by GPT-2 in response, for a dataset of 12,166 labeled lines. I used only the GPT-2 responses for the negative class, keeping the actual Bradstreet lines as the positive samples alone.
I split the data into train and test sets 80/20, leaving a total of 9,732 labeled samples for training and 2,435 samples for testing.
These I trained on SageMaker, using the Hugging Face deep learning container. I also used SageMaker Training Compiler, which achieved 64 samples per batch on an ml.p3.2xlarge. After 42 minutes of training, on only 5 epochs, I achieved a train loss of 0.0714. Test loss is forthcoming.
In my own tests, the model seems to always be very confident; it routinely gives a confidence score of at least 99.8%. All predictions should be single lines only, as this is how the model was fine-tuned. Multiple lines in a prediction request will always result in a Label0 response, i.e., not written by Anne Bradstreet, even if pulled directly from her works.
In short, the model seems to know the difference between generic GPT-2 text responding to a Bradstreet prompt, vs the output of a model fine-tuned on Bradstreet text and generating based on Bradstreet responses.
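A minimal inference sketch (assuming the checkpoint loads with the standard text-classification pipeline; the label names in the comment are illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="edubz/anne_bradstreet")

# Predictions should be made on single lines only, as described above.
line = "Thou ill-formed offspring of my feeble brain"
print(classifier(line))  # e.g. [{'label': 'LABEL_1', 'score': 0.998}]
```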
This was developed exclusively for use at an upcoming workshop. |
everdoubling/byt5-Korean-small | 19e0bc2f3ed5b723c2c36903eed6f14beb037d8a | 2022-03-12T15:43:05.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:mc4",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | everdoubling | null | everdoubling/byt5-Korean-small | 16 | 2 | transformers | 9,305 | ---
datasets:
- mc4
license: apache-2.0
---
# ByT5-Korean - small
ByT5-Korean is a Korean specific extension of Google's [ByT5](https://github.com/google-research/byt5).
A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they are like individual characters of an alphabet.
While ByT5's UTF-8 encoding allows generic encoding for multiple languages, it is unnatural for Korean because it splits the bit representation of each Jamo in the middle.
ByT5-Korean extends ByT5's UTF-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token.
ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.
## Encoding Scheme
```text
id: token
0: <pad>
1: <eos>
2: <unk>
3~258: utf-8 encoding
259~277: beginning consonants(초성), 19개(ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ)
278~298: middle vowel(중성), 21개(ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ)
299~326: final consonant(종성), 무종성+27개(ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ)
327~384: from <extra_id_0> to <extra_id_57>
```
## Example Inference
```python
import torch
from tokenizer import ByT5KoreanTokenizer # https://huggingface.co/everdoubling/byt5-Korean-small/blob/main/tokenizer.py
from transformers import T5ForConditionalGeneration
tokenizer_jamo = ByT5KoreanTokenizer()
model = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-small')
input_sentence = '한국어 위키백과(영어: Korean Wikipedia)는 한국어로 운영되는 위키백과의 다언어판 가운데 하나로서, 2002년 10월 11일에 <extra_id_0>. 또한 현재 한국어 위키백과에는 넘겨주기, 토론, 그림 등 페이지로 불리는 모든 문서를 포함하면 총 2,629,860개가 <extra_id_1>되어 있으며, 넘겨주기를 포함한 일반 문서 수는 1,278,560개,[1] 그중 넘겨주기, 막다른 문서를 제외한 일반 문서 수는 573,149개이다.'
input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
outputs_jamo = model.generate(torch.tensor([input_ids_jamo]))
print(tokenizer_jamo.decode(outputs_jamo[0]))
# <pad><extra_id_0>설립되었다<extra_id_1>đě
```
Additional information coming soon...
|
Neulvo/bert-finetuned-ner | 481498073bc49c16700efcaf504d9a7ee46c161d | 2022-03-15T15:50:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Neulvo | null | Neulvo/bert-finetuned-ner | 16 | null | transformers | 9,306 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9357509521443947
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9433269343126617
- name: Accuracy
type: accuracy
value: 0.9861953258374051
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0793
- Precision: 0.9358
- Recall: 0.9510
- F1: 0.9433
- Accuracy: 0.9862
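A minimal usage sketch (not part of the auto-generated card; it assumes the standard token-classification pipeline):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Neulvo/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```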
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0247 | 1.0 | 1756 | 0.0798 | 0.9269 | 0.9435 | 0.9351 | 0.9840 |
| 0.0136 | 2.0 | 3512 | 0.0776 | 0.9309 | 0.9495 | 0.9401 | 0.9857 |
| 0.0097 | 3.0 | 5268 | 0.0793 | 0.9358 | 0.9510 | 0.9433 | 0.9862 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sap-ai-research/BERT-base-uncased-SCD-ACL2022 | b0432a9e3ccaa3de76f98eacbd489017c9ae8d28 | 2022-03-16T00:38:09.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | sap-ai-research | null | sap-ai-research/BERT-base-uncased-SCD-ACL2022 | 16 | null | transformers | 9,307 | ---
license: apache-2.0
---
|
tareknaous/dialogpt-empathetic-dialogues | b5954b503a98a159381515e3ef3b15202f8374b2 | 2022-03-16T18:11:17.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | tareknaous | null | tareknaous/dialogpt-empathetic-dialogues | 16 | null | transformers | 9,308 | Entry not found |
cambridgeltl/simctg_realtoxicityprompts | 3f5dbe468a733df3565dea80816adc5aa3e073d6 | 2022-03-16T21:43:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/simctg_realtoxicityprompts | 16 | null | transformers | 9,309 | Entry not found |
amir36/bert-finetuned-ner | 6b3321308084789b5b4040c913f8578f9df814c5 | 2022-03-17T12:10:24.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | amir36 | null | amir36/bert-finetuned-ner | 16 | null | transformers | 9,310 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9356550580431178
- name: Recall
type: recall
value: 0.9495119488387749
- name: F1
type: f1
value: 0.9425325760106917
- name: Accuracy
type: accuracy
value: 0.9858421145581916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
- Precision: 0.9357
- Recall: 0.9495
- F1: 0.9425
- Accuracy: 0.9858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0872 | 1.0 | 1756 | 0.0692 | 0.9180 | 0.9347 | 0.9263 | 0.9827 |
| 0.0338 | 2.0 | 3512 | 0.0615 | 0.9328 | 0.9467 | 0.9397 | 0.9854 |
| 0.024 | 3.0 | 5268 | 0.0616 | 0.9357 | 0.9495 | 0.9425 | 0.9858 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
iftekher/bangla_voice | d8e8e83c6197bc3c16d5d672539e0bdab243dabb | 2022-05-30T10:03:21.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | iftekher | null | iftekher/bangla_voice | 16 | 1 | transformers | 9,311 | ---
tags:
- generated_from_trainer
model-index:
- name: bangla_voice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla_voice
This model is a fine-tuned version of [iftekher/bangla_voice](https://huggingface.co/iftekher/bangla_voice) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 208.2614
- Wer: 0.3201
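A minimal transcription sketch (not part of the auto-generated card; `bangla_sample.wav` is a placeholder path to a Bangla speech recording, and decoding a local file through the pipeline requires ffmpeg):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="iftekher/bangla_voice")

# Placeholder path to a mono speech recording
result = asr("bangla_sample.wav")
print(result["text"])
```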
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 158.927 | 0.21 | 100 | 81.4025 | 0.3489 |
| 206.3938 | 0.42 | 200 | 117.4497 | 0.3680 |
| 194.8868 | 0.64 | 300 | 473.2094 | 0.3622 |
| 177.3037 | 0.85 | 400 | 81.0834 | 0.3585 |
| 150.9285 | 1.06 | 500 | 397.6080 | 0.3592 |
| 164.899 | 1.27 | 600 | 71.5732 | 0.3476 |
| 157.9872 | 1.48 | 700 | 76.6225 | 0.3560 |
| 139.5956 | 1.69 | 800 | 76.4330 | 0.3512 |
| 132.7378 | 1.91 | 900 | 154.8127 | 0.3378 |
| 137.2875 | 2.12 | 1000 | 275.6554 | 0.3453 |
| 128.1135 | 2.33 | 1100 | 210.1160 | 0.3409 |
| 124.5749 | 2.54 | 1200 | 109.8560 | 0.3400 |
| 115.9728 | 2.75 | 1300 | 165.5507 | 0.3373 |
| 120.9464 | 2.97 | 1400 | 248.8096 | 0.3357 |
| 104.8963 | 3.18 | 1500 | 308.7221 | 0.3361 |
| 115.9144 | 3.39 | 1600 | 214.0615 | 0.3300 |
| 109.0966 | 3.6 | 1700 | 197.1803 | 0.3286 |
| 111.4354 | 3.81 | 1800 | 189.1278 | 0.3245 |
| 111.9318 | 4.03 | 1900 | 191.4921 | 0.3282 |
| 109.2148 | 4.24 | 2000 | 185.1797 | 0.3298 |
| 114.0561 | 4.45 | 2100 | 190.5829 | 0.3229 |
| 105.7045 | 4.66 | 2200 | 209.0799 | 0.3220 |
| 127.4207 | 4.87 | 2300 | 208.2614 | 0.3201 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
celine98/canine-s-finetuned-sst2 | 3e85d1e3ddb84b98b0766fe587763f45dd6fb821 | 2022-03-22T09:47:45.000Z | [
"pytorch",
"tensorboard",
"canine",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | celine98 | null | celine98/canine-s-finetuned-sst2 | 16 | null | transformers | 9,312 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: canine-s-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8577981651376146
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-sst2
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5259
- Accuracy: 0.8578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3524 | 1.0 | 4210 | 0.4762 | 0.8257 |
| 0.2398 | 2.0 | 8420 | 0.4169 | 0.8567 |
| 0.1797 | 3.0 | 12630 | 0.5259 | 0.8578 |
| 0.152 | 4.0 | 16840 | 0.5996 | 0.8532 |
| 0.1026 | 5.0 | 21050 | 0.6676 | 0.8578 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
avishvj/biobert-protein-ner | 877faa1656b73ef75b2807614f45f37316f90d6c | 2022-03-22T09:51:20.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | avishvj | null | avishvj/biobert-protein-ner | 16 | null | transformers | 9,313 | Entry not found |
Wende/bert-finetuned-ner | ddcc13b7b2f4b6b314d132f237005e95c59f1bad | 2022-03-25T16:19:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Wende | null | Wende/bert-finetuned-ner | 16 | null | transformers | 9,314 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9321670242614293
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9412548954253812
- name: Accuracy
type: accuracy
value: 0.9860334373344322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0575
- Precision: 0.9322
- Recall: 0.9505
- F1: 0.9413
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2219 | 1.0 | 878 | 0.0716 | 0.9076 | 0.9288 | 0.9181 | 0.9808 |
| 0.0453 | 2.0 | 1756 | 0.0597 | 0.9297 | 0.9477 | 0.9386 | 0.9852 |
| 0.0239 | 3.0 | 2634 | 0.0575 | 0.9322 | 0.9505 | 0.9413 | 0.9860 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.2+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
AFreud/bert-base-romanian-ner-finetuned-ner | 1320a3cf9902d2b5a19417017b9a05a3cd7e7646 | 2022-03-27T06:43:39.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | AFreud | null | AFreud/bert-base-romanian-ner-finetuned-ner | 16 | null | transformers | 9,315 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-romanian-ner-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-romanian-ner-finetuned-ner
This model is a fine-tuned version of [dumitrescustefan/bert-base-romanian-ner](https://huggingface.co/dumitrescustefan/bert-base-romanian-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0539
- Precision: 0.9662
- Recall: 0.9758
- F1: 0.9710
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0538 | 1.0 | 5500 | 0.0539 | 0.9662 | 0.9758 | 0.9710 | 0.9861 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
princeton-nlp/CoFi-QNLI-s95 | 7d6224418fece2b0e8d484dea11574c4cacd74f2 | 2022-05-01T01:20:12.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"transformers"
] | text-classification | false | princeton-nlp | null | princeton-nlp/CoFi-QNLI-s95 | 16 | null | transformers | 9,316 | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset QNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
yonichi/cbert | f24c441b2d40180c4d7728199221d07e2e6e960a | 2022-03-31T20:40:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | yonichi | null | yonichi/cbert | 16 | null | transformers | 9,317 | |
hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es | 5f440e803f67d5f6ab528ae744aef81dd1dcfeed | 2022-04-03T14:51:24.000Z | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:squad_es",
"dataset:hackathon-pln-es/biomed_squad_es_v2",
"transformers",
"autotrain_compatible"
] | question-answering | false | hackathon-pln-es | null | hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es | 16 | null | transformers | 9,318 | ---
language: es
datasets:
- squad_es
- hackathon-pln-es/biomed_squad_es_v2
metrics:
- "f1"
---
# roberta-base-biomedical-clinical-es for QA
This model was trained as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP.
## Motivation
Recent research has made available Spanish language models trained on biomedical corpora. This project explores the use of these new models to build extractive question answering models for biomedicine, and compares their effectiveness with that of general-domain masked language models.
The models trained during the [Hackathon](https://somosnlp.org/hackathon) were:
[hackathon-pln-es/roberta-base-bne-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-bne-squad2-es)
[hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es)
[hackathon-pln-es/roberta-base-biomedical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-es-squad2-es)
[hackathon-pln-es/biomedtra-small-es-squad2-es](https://huggingface.co/hackathon-pln-es/biomedtra-small-es-squad2-es)
## Description
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) training dataset.
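A minimal extractive-QA sketch (assuming the standard question-answering pipeline; the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es",
)

# Illustrative Spanish biomedical question and context
result = qa(
    question="¿Qué inhibe la aspirina?",
    context=(
        "La aspirina inhibe la agregación plaquetaria y se utiliza para "
        "prevenir eventos cardiovasculares."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```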
## Hyperparameters
The hyperparameters were chosen based on those used in [PlanTL-GOB-ES/roberta-base-bne-sqac](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac), a Spanish QA model trained on a dataset in SQuAD v1 format.
```
--num_train_epochs 2
--learning_rate 3e-5
--weight_decay 0.01
--max_seq_length 386
--doc_stride 128
```
## Performance
Evaluated on the [hackathon-pln-es/biomed_squad_es_v2](https://huggingface.co/datasets/hackathon-pln-es/biomed_squad_es_v2) dev set.
|Model |Base Model Domain|exact |f1 |HasAns_exact|HasAns_f1|NoAns_exact|NoAns_f1|
|--------------------------------------------------------------|-----------------|-------|-------|------------|---------|-----------|--------|
|hackathon-pln-es/roberta-base-bne-squad2-es |General |67.6341|75.6988|53.7367 |70.0526 |81.2174 |81.2174 |
|hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es|Biomedical |66.8426|75.2346|53.0249 |70.0031 |80.3478 |80.3478 |
|hackathon-pln-es/roberta-base-biomedical-es-squad2-es |Biomedical |67.6341|74.5612|47.6868 |61.7012 |87.1304 | 87.1304|
|hackathon-pln-es/biomedtra-small-es-squad2-es |Biomedical |34.4767|44.3294|45.3737 |65.307 |23.8261 |23.8261 |
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo) |
alexjercan/codet5-base-buggy-error-description | 5c6897edc1220c485674673ae2994a2a078d1195 | 2022-04-09T11:26:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | alexjercan | null | alexjercan/codet5-base-buggy-error-description | 16 | 1 | transformers | 9,319 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: codet5-base-buggy-error-description
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-base-buggy-error-description
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Miniproject/BERT | 77adb28c4f34a1efccc0cfb19de58282fe50c17e | 2022-04-07T20:26:36.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"transformers"
] | text-classification | false | Miniproject | null | Miniproject/BERT | 16 | null | transformers | 9,320 | ---
language:
- en
---
# Bert-base-uncased-sentiment
BERT stands for Bidirectional Encoder Representations from Transformers. It was introduced in a paper published by researchers at Google AI Language. BERT makes use of the Transformer, an attention-based architecture that learns contextual relations between words (or sub-words) in a text. In its vanilla form, the Transformer includes two separate mechanisms: an encoder that reads the text input and a decoder that produces a prediction for the task. Since BERT's goal is to generate a language model, only the encoder mechanism is necessary.
Bidirectional - to understand the text you're looking at, you'll have to look back (at the previous words) and forward (at the next words).
Transformers - The "Attention Is All You Need" paper presented the Transformer model. The Transformer reads entire sequences of tokens at once. In a sense, the model is non-directional, while LSTMs read sequentially (left-to-right or right-to-left). The attention mechanism allows for learning contextual relations between words.
(Pre-trained) contextualized word embeddings - The ELMO paper introduced a way to encode words based on their meaning/context. Nails has multiple meanings - fingernails and metal nails. BERT was trained by masking 15% of the tokens with the goal to guess them. An additional objective was to predict the next sentence. Let’s look at examples of these tasks:
Masked Language Modeling (Masked LM)
The objective of this task is to guess the masked tokens.
Before feeding word sequences into BERT, 15% of the words in each sentence are replaced with a special token called the "masked token". The job of BERT is then to predict the original value of each masked word by looking at the context provided by the other, non-masked, words in the sequence.
That’s [mask] she [mask] -> That’s what she said
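This masked-word objective can be tried directly with the base checkpoint; the snippet below is a small illustration using `bert-base-uncased` (the pre-trained model, not this fine-tuned sentiment classifier):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT's mask token is written [MASK]
for prediction in fill_mask("That's [MASK] she said."):
    print(prediction["token_str"], round(prediction["score"], 3))
```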
Next Sentence Prediction (NSP)
In this training process, BERT receives pairs of sentences as input and learns to predict whether the second sentence in the pair follows the first (i.e., whether the second sentence occurs just after the first sentence in the training corpus).
During training, 50% of the inputs are pairs in which the second sentence actually follows the first, while in the other 50% the second sentence is just a random sentence chosen from the corpus and therefore does not form a true pair.
BERT Training Dataset
The training corpus was composed of two sources: the Toronto Book Corpus (800M words) and English Wikipedia (2,500M words). While the original Transformer has an encoder (for reading the input) and a decoder (that makes the prediction), BERT uses only the encoder.
BERT is simply a pre-trained stack of Transformer encoders. How many encoders? There are two versions: one with 12 (BERT base) and one with 24 (BERT Large). The difference between BERT base and BERT Large is the number of encoder layers: BERT base has 12 encoder layers stacked on top of each other, whereas BERT Large has 24. BERT performs better than the other models, and BERT Large improves on BERT base further.
The BERT paper was released along with the source code and pre-trained models.
The best part is that you can do Transfer Learning (thanks to the ideas from OpenAI Transformer) with BERT for many NLP tasks - Classification, Question Answering, Entity Recognition, etc. You can train with small amounts of data and achieve great performance!
This is a bert-base-uncased model fine-tuned for sentiment analysis on product reviews in the English language. It predicts the sentiment of a review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment analysis model for product reviews, or for further fine-tuning on related sentiment analysis tasks.
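A minimal inference sketch for the star-rating use case (it assumes the checkpoint loads with the standard sequence-classification classes and that its five output labels are ordered from 1 to 5 stars):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Miniproject/BERT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

review = "The battery lasts two days and the screen is gorgeous."
inputs = tokenizer(review, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

stars = int(probs.argmax(dim=-1)) + 1  # assumes label order 1-5 stars
print(f"{stars} stars", probs.squeeze().tolist())
```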
## Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of reviews |
| -------- | ----------------- |
| English | 150k |
## Accuracy
The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------------- | ------------------- |
| English | 67% | 95% |
|
Fredvv/bert-finetuned-pos | cd8fe5aa696527dcbb182c4aef1d6103da166ca2 | 2022-04-07T13:49:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Fredvv | null | Fredvv/bert-finetuned-pos | 16 | null | transformers | 9,321 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-pos
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9347682119205298
- name: Recall
type: recall
value: 0.9501851228542578
- name: F1
type: f1
value: 0.9424136204306459
- name: Accuracy
type: accuracy
value: 0.9867840113027609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-pos
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0580
- Precision: 0.9348
- Recall: 0.9502
- F1: 0.9424
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0875 | 1.0 | 1756 | 0.0680 | 0.9158 | 0.9352 | 0.9254 | 0.9826 |
| 0.0321 | 2.0 | 3512 | 0.0611 | 0.9289 | 0.9448 | 0.9368 | 0.9856 |
| 0.0222 | 3.0 | 5268 | 0.0580 | 0.9348 | 0.9502 | 0.9424 | 0.9868 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
course5i/SEAD-L-6_H-384_A-12-sst2 | 1678ebacee0aa256592d4deb70e37d95aa36c93b | 2022-06-12T19:44:40.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"en",
"dataset:glue",
"dataset:sst2",
"arxiv:1910.01108",
"arxiv:1909.10351",
"arxiv:2002.10957",
"arxiv:1810.04805",
"arxiv:1804.07461",
"arxiv:1905.00537",
"transformers",
"SEAD",
"license:apache-2.0"
] | text-classification | false | course5i | null | course5i/SEAD-L-6_H-384_A-12-sst2 | 16 | null | transformers | 9,322 | ---
language:
- en
license: apache-2.0
tags:
- SEAD
datasets:
- glue
- sst2
---
## Paper
## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63)
Aurthors: *Moyan Mei*, *Rohit Sroch*
## Abstract
With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks.
*Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63).
Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).*
## SEAD-L-6_H-384_A-12-sst2
This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as the teacher, using the SEAD framework on the **sst2** task. For weight initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased).
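A minimal inference sketch for the SST-2 sentiment task (assuming the standard text-classification pipeline; the returned label names may be the generic `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="course5i/SEAD-L-6_H-384_A-12-sst2")
print(classifier("A gorgeous, witty, and ultimately moving film."))
```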
## All SEAD Checkpoints
Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD)
## Intended uses & limitations
More information needed
### Training hyperparameters
Please take a look at the `training_args.bin` file
```python
import torch
hyperparameters = torch.load('training_args.bin')
```
### Evaluation results
| eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples |
|:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:|
| 0.9312 | 1.5334 | 568.684 | 18.261 | 0.2929 | 872 |
### Framework versions
- Transformers >=4.8.0
- Pytorch >=1.6.0
- TensorFlow >=2.5.0
- Flax >=0.3.5
- Datasets >=1.10.2
- Tokenizers >=0.11.6
If you use these models, please cite the following paper:
```
@article{article,
author={Mei, Moyan and Sroch, Rohit},
title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding},
volume={3},
number={1},
journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
day={26},
year={2022},
month={Feb},
url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}
}
```
|
V3RX2000/distilbert-base-uncased-finetuned-emotion | dc2c8257ace8b1df1ad8485eb089f7507bca2ebe | 2022-04-10T12:32:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | V3RX2000 | null | V3RX2000/distilbert-base-uncased-finetuned-emotion | 16 | null | transformers | 9,323 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9247142990809298
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.9245
- F1: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8812 | 1.0 | 250 | 0.3301 | 0.906 | 0.9035 |
| 0.2547 | 2.0 | 500 | 0.2285 | 0.9245 | 0.9247 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
issifuamajeed/distilbert-base-uncased-finetuned-ner | d5394ab8ea800da640f0217566423f6dd86ecf22 | 2022-07-13T16:41:05.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | issifuamajeed | null | issifuamajeed/distilbert-base-uncased-finetuned-ner | 16 | null | transformers | 9,324 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9227969559942649
- name: Recall
type: recall
value: 0.9360107394563151
- name: F1
type: f1
value: 0.9293568810396535
- name: Accuracy
type: accuracy
value: 0.9833034139831922
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9228
- Recall: 0.9360
- F1: 0.9294
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2433 | 1.0 | 878 | 0.0732 | 0.9079 | 0.9190 | 0.9134 | 0.9795 |
| 0.0553 | 2.0 | 1756 | 0.0599 | 0.9170 | 0.9333 | 0.9251 | 0.9826 |
| 0.0305 | 3.0 | 2634 | 0.0614 | 0.9228 | 0.9360 | 0.9294 | 0.9833 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
azert99/finetuning-sentiment-model-3000-samples | dbb7b5fb066f6ff06dc8ee8161b14d3748276786 | 2022-04-18T04:48:42.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | azert99 | null | azert99/finetuning-sentiment-model-3000-samples | 16 | null | transformers | 9,325 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8817891373801918
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3223
- Accuracy: 0.8767
- F1: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
IDEA-CCNL/Taiyi-Roberta-124M-D | ede33581f91dce029e0037b31a6371986ae83798 | 2022-06-13T03:26:46.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"transformers",
"mutlimodal",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | IDEA-CCNL | null | IDEA-CCNL/Taiyi-Roberta-124M-D | 16 | null | transformers | 9,326 | ---
language:
- en
license: apache-2.0
tags:
- roberta
- mutlimodal
- exbert
inference: false
---
# Taiyi-Roberta-124M-D model (English)
Based on pre-trained Roberta-base, we introduce multimodal information.
For multimodal pre-training tasks, we design several special training objectives in our paper.
Our code and details of pre-training tasks will be made publicly available upon paper acceptance.
The pre-training datasets are MSCOCO and VG. "D" implies a special training method.
# Taiyi (太乙)
Taiyi models are a branch of the Fengshenbang (封神榜) series of models.
The models in Taiyi are pre-trained with multimodal pre-training strategies.
# Usage
```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained("IDEA-CCNL/Taiyi-Roberta-124M-D")
model = RobertaModel.from_pretrained("IDEA-CCNL/Taiyi-Roberta-124M-D")
```
# GLUE
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | WNLI |
|------------------------|------|------|------|-------|------|-------|------|------|------|
| Roberta-base (official) | 87.6 | 91.9 | 92.8 | 94.8 | 63.6 | 91.2 | 90.2 | 78.7 | - |
| Roberta-base (local) | 87.0 | 91.3 | 92.5 | 94.2 | 62.8 | 90.6 | 92.9 | 78.0 | 56.3 |
| Taiyi-Roberta-124M-D (local) | 87.1 | 91.8 | 92.3 | 94.5 | 62.6 | 90.4 | 92.4 | 78.7 | 56.3 |
The local test settings are:
Sequence length: 128, Batch size: 32, Learning rate: 3e-5
An additional dataset WNLI is tested.
# Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
facebook/wav2vec2-conformer-rel-pos-large-100h-ft | 9c280b44d714e16b3d250a8793379167babd14d7 | 2022-06-15T08:17:00.000Z | [
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"transformers",
"speech",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-conformer-rel-pos-large-100h-ft | 16 | null | transformers | 9,327 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
---
# Wav2Vec2-Conformer-Large-100h with Relative Position Embeddings
[Facebook's Wav2Vec2 Conformer (TODO-add link)]()
Wav2Vec2 Conformer with relative position embeddings, pretrained on 960 hours of Librispeech and fine-tuned on **100 hours of Librispeech**, using 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-100h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-100h-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
``` |
rmihaylov/pegasus-base-cnn-dailymail-bg | e012d00e071a26ea235e091e9dee71471ef7cb2d | 2022-04-19T08:34:13.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"bg",
"dataset:oscar",
"dataset:chitanka",
"dataset:wikipedia",
"arxiv:1912.08777",
"transformers",
"torch",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | rmihaylov | null | rmihaylov/pegasus-base-cnn-dailymail-bg | 16 | null | transformers | 9,328 | ---
inference: false
language:
- bg
license: mit
datasets:
- oscar
- chitanka
- wikipedia
tags:
- torch
---
# PEGASUS BASE
This model was pretrained on the Bulgarian language. It was introduced in [this paper](https://arxiv.org/pdf/1912.08777.pdf).
## Model description
The training data is private Bulgarian text from CNN and DailyMail articles.
## Intended uses & limitations
You can use the raw model for summarization.
### How to use
Here is how to use this model in PyTorch:
```python
>>> from transformers import PegasusForConditionalGeneration, AutoTokenizer
>>>
>>> model_id = "rmihaylov/pegasus-base-cnn-dailymail-bg"
>>> model = PegasusForConditionalGeneration.from_pretrained(model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>>
>>> text = """Лукашенко поиска още полицията "да защити работническите колективи и организации и медии от заплахите на улицата", а който от държавните медии протестира, изобщо да не се връща на работа. На граничните служби бе наредено да засилят охраната на цялата граница, "за да не се допускат в Беларус от други държави бойци, оръжие, боеприпаси, пари за финансиране на безредиците, защото виждаме, че такива пари пристигат". Министерството на отбраната трябва да следи "движението на войски на НАТО на територията на Полша и Литва, тяхното направление и замисли, които в момента виждаме - и някои от тях ни карат да се замислим - и да не се притеснява да изкарва нашите въоръжени сили и техника в направлението на тяхното придвижване". Лукашенко изрично посочи събитията в град Гродно, "защото там има по-голямо желание за дестабилизация на обстановката, отколкото в Минск". Гродно стана вчера първият по-голям град, в който властите се разбраха с протестиращите да протестират на определени места в центъра на града. Той нарече опозицията "черносотници", тласкащи страната към пропаст и унищожение, както и към сблъсък с "исторически братския руски народ". Медиите трябва специално да се активизират срещу това, заръча Лукашенко."""
>>>
>>> batch = tokenizer(
>>> text,
>>> truncation=True,
>>> padding="longest",
>>> return_tensors="pt",
>>> return_token_type_ids=False)
>>>
>>> inputs = {
>>> 'max_length': 150,
>>> 'min_length': 10,
>>> 'do_sample': False,
>>> 'temperature': 1.0,
>>> 'top_k': 50,
>>> 'top_p': 1.0,
>>> 'repetition_penalty': 1.0,
>>> 'no_repeat_ngram_size': 0,
>>> 'use_cache': True,
>>> 'num_beams': 2,
>>> 'length_penalty': 1.0,
>>> 'num_return_sequences': 1,
>>> 'early_stopping': False}
>>>
>>> batch.update(inputs)
>>>
>>> summary = model.generate(**batch)
>>>
>>> tgt_text = tokenizer.batch_decode(summary, skip_special_tokens=True)
>>> print(tgt_text)
['Лукашенко изрично посочи събитията в Гродно, "защото там има по-голямо желание за дестабилизация на обстановката, отколкото в Минск" Той нарече опозицията "черносотници", тласкащи страната към пропаст и унищожение, както и сблъсък с "исторически братския руски народ"']
```
|
GPL/scidocs-tsdae-msmarco-distilbert-margin-mse | 7d01e82612fb5cbd52094177ad4bcb991879873f | 2022-04-19T16:47:04.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | GPL | null | GPL/scidocs-tsdae-msmarco-distilbert-margin-mse | 16 | null | transformers | 9,329 | Entry not found |
liamcripwell/ctrl44-clf | e8c1525c9ca02c30e4562cff4d621f2202d82d98 | 2022-04-21T09:32:40.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"transformers"
] | text-classification | false | liamcripwell | null | liamcripwell/ctrl44-clf | 16 | null | transformers | 9,330 | ---
language: en
---
# CTRL44 Classification model
This is a pretrained version of the 4-class simplification operation classifier presented in the NAACL 2022 paper "Controllable Sentence Simplification via Operation Classification". It was trained on the IRSD classification dataset.
Predictions from this model can be used as input to the [simplification model](https://huggingface.co/liamcripwell/ctrl44-simp) to reproduce the pipeline results reported in the paper.
## How to use
Here is how to use this model in PyTorch:
```python
import torch
from transformers import RobertaForSequenceClassification, AutoTokenizer
model = RobertaForSequenceClassification.from_pretrained("liamcripwell/ctrl44-clf")
tokenizer = AutoTokenizer.from_pretrained("liamcripwell/ctrl44-clf")
text = "Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
predicted_class_name = model.config.id2label[predicted_class_id]
``` |
Intel/xlnet-base-cased-mrpc-int8-static | 930f30d3010954dc933050555478366176bfeb83 | 2022-06-10T02:42:26.000Z | [
"pytorch",
"xlnet",
"text-classification",
"en",
"dataset:glue",
"transformers",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"license:mit",
"model-index"
] | text-classification | false | Intel | null | Intel/xlnet-base-cased-mrpc-int8-static | 16 | null | transformers | 9,331 | ---
language:
- en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
datasets:
- glue
metrics:
- f1
model-index:
- name: xlnet-base-cased-mrpc-int8-static
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: F1
type: f1
value: 0.8892794376098417
---
# INT8 xlnet-base-cased-mrpc
### Post-training static quantization
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [xlnet-base-cased-mrpc](https://huggingface.co/Intel/xlnet-base-cased-mrpc).
The calibration dataloader is the train dataloader. The default calibration sampling size 300 isn't divisible exactly by batch size 8, so the real sampling size is 304.
### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8893|0.8897|
| **Model size (MB)** |215|448|
### Load with Intel® Neural Compressor:
```python
from neural_compressor.utils.load_huggingface import OptimizedModel
int8_model = OptimizedModel.from_pretrained(
'Intel/xlnet-base-cased-mrpc-int8-static',
)
```
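Once loaded, `int8_model` behaves like any sequence-classification model. A minimal inference sketch continuing from the snippet above (the sentence pair is illustrative, and the tokenizer is assumed to ship with this repository — otherwise load it from the fp32 model linked above):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Intel/xlnet-base-cased-mrpc-int8-static')

# MRPC is a sentence-pair paraphrase task: label 1 = equivalent, 0 = not equivalent
inputs = tokenizer(
    "The company reported record profits this quarter.",
    "Quarterly profits hit an all-time high for the company.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = int8_model(**inputs).logits
print(logits.argmax(-1).item())
```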
|
ysharma/convnext-tiny-eurosat2700-finetuned | 22ad9befc9bb7daf1c21058f49c20afccc634d42 | 2022-04-23T22:54:43.000Z | [
"pytorch",
"convnext",
"image-classification",
"transformers"
] | image-classification | false | ysharma | null | ysharma/convnext-tiny-eurosat2700-finetuned | 16 | null | transformers | 9,332 | Entry not found |
lightonai/RITA_xl | 6866305411c6ab97b5ba7f1fd8049b9059999962 | 2022-05-19T08:23:02.000Z | [
"pytorch",
"rita",
"text-generation",
"protein",
"dataset:uniref-100",
"arxiv:2205.05789",
"transformers"
] | text-generation | false | lightonai | null | lightonai/RITA_xl | 16 | 2 | transformers | 9,333 | ---
language: protein
tags:
- protein
datasets:
- uniref-100
---
# RITA-XL
RITA is a family of autoregressive protein models, developed by a collaboration of [Lighton](https://lighton.ai/), the [OATML group](https://oatml.cs.ox.ac.uk/) at Oxford, and the [Debbie Marks Lab](https://www.deboramarkslab.com/) at Harvard.
Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | ---
[Small](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[Medium](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[Large](https://huggingface.co/lightonai/RITA_l)| 680M | 1536 | 24 | 1.82
[**XLarge**](https://huggingface.co/lightonai/RITA_xl)| 1.2B | 2048 | 24 | 1.70
For full results see our preprint: https://arxiv.org/abs/2205.05789
## Usage
Instantiate a model like so:
``` python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_xl", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_xl")
```
For generation, we support pipelines:
``` python
from transformers import pipeline
rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2,
num_return_sequences=2, eos_token_id=2)
for seq in sequences:
print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
## How to cite
```bibtex
@article{hesslow2022rita,
  title={RITA: a Study on Scaling Up Generative Protein Sequence Models},
  author={Hesslow, Daniel and Zanichelli, Niccol{\'o} and Notin, Pascal and Poli, Iacopo and Marks, Debora},
  journal={arXiv preprint arXiv:2205.05789},
  year={2022}
}
```
|
hustvl/yolos-small-dwr | 4a603978475efb3929cdcc076c4ef73f38c020c0 | 2022-06-27T08:38:00.000Z | [
"pytorch",
"yolos",
"object-detection",
"dataset:coco",
"arxiv:2106.00666",
"transformers",
"vision",
"license:apache-2.0"
] | object-detection | false | hustvl | null | hustvl/yolos-small-dwr | 16 | 1 | transformers | 9,334 | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# YOLOS (small-sized, fast model scaling) model
YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.
### How to use
Here is how to use this model:
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-small-dwr')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small-dwr')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
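To turn the raw outputs into readable detections, a minimal post-processing sketch is shown below. It reuses `outputs` and `model` from the snippet above; the 0.9 confidence threshold is an illustrative choice:
```python
# keep the per-query class probabilities, dropping the trailing "no object" class
probs = outputs.logits.softmax(-1)[0, :, :-1]
keep = probs.max(-1).values > 0.9  # illustrative confidence threshold

for p, box in zip(probs[keep], outputs.pred_boxes[0, keep]):
    label = model.config.id2label[p.argmax().item()]
    cx, cy, w, h = box.tolist()  # boxes are normalized (center_x, center_y, width, height)
    print(f"{label}: score {p.max().item():.2f}, box ({cx:.2f}, {cy:.2f}, {w:.2f}, {h:.2f})")
```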
## Training data
The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training
The model was pre-trained for 300 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO.
## Evaluation results
This model achieves an AP (average precision) of **37.6** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Alassea/glue_sst_classifier | 72560eb034a46daa0a0c14afb66a742da92de336 | 2022-04-26T12:20:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Alassea | null | Alassea/glue_sst_classifier | 16 | null | transformers | 9,335 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: glue_sst_classifier
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: F1
type: f1
value: 0.9033707865168539
- name: Accuracy
type: accuracy
value: 0.9013761467889908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2359
- F1: 0.9034
- Accuracy: 0.9014
## Model description
More information needed
## Intended uses & limitations
More information needed
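That said, the checkpoint can be loaded for binary sentiment classification of English sentences. A minimal sketch (the review text is illustrative; because no id2label mapping is documented, labels may surface as `LABEL_0`/`LABEL_1`, where `1` is the positive class under the SST-2 convention):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Alassea/glue_sst_classifier")
print(classifier("A gripping, beautifully shot film."))
```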
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.3653 | 0.19 | 100 | 0.3213 | 0.8717 | 0.8727 |
| 0.291 | 0.38 | 200 | 0.2662 | 0.8936 | 0.8911 |
| 0.2239 | 0.57 | 300 | 0.2417 | 0.9081 | 0.9060 |
| 0.2306 | 0.76 | 400 | 0.2359 | 0.9105 | 0.9094 |
| 0.2185 | 0.95 | 500 | 0.2371 | 0.9011 | 0.8991 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
manueltonneau/bert-twitter-en-job-search | 73c2428b3433fb69a89587baca524aff78f4157e | 2022-04-26T15:59:06.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:2203.09178",
"transformers"
] | text-classification | false | manueltonneau | null | manueltonneau/bert-twitter-en-job-search | 16 | null | transformers | 9,336 | ---
language: en
widget:
- text: "Job hunting!"
---
# Detection of employment status disclosures on Twitter
## Model main characteristics:
- class: Job Search (1), else (0)
- country: US
- language: English
- architecture: BERT base
## Model description
This model is a version of `DeepPavlov/bert-base-cased-conversational` finetuned to recognize English tweets where a user mentions that she is currently looking for a job. It was trained on English tweets from US-based users. The task is framed as a binary classification problem with:
- the positive class referring to tweets mentioning that a user is currently looking for a job (label=1)
- the negative class referring to all other tweets (label=0)
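A minimal usage sketch (the example tweet is illustrative; depending on the stored config, labels may surface as `LABEL_0`/`LABEL_1`, with `1` meaning a job-search disclosure):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="manueltonneau/bert-twitter-en-job-search")
print(classifier("Just polished my resume, the job hunt starts today!"))
```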
## Resources
The dataset of English tweets on which this classifier was trained is open-sourced [here](https://github.com/manueltonneau/twitter-unemployment).
Details on the performance can be found in our [ACL 2022 paper](https://arxiv.org/abs/2203.09178).
## Citation
If you find this model useful, please cite our paper (citation to come soon). |
nbroad/longformer-base-health-fact | a296005ed3d0c28917c4a316bad87cec38ad1cca | 2022-06-29T18:29:46.000Z | [
"pytorch",
"longformer",
"text-classification",
"en",
"dataset:health_fact",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | nbroad | null | nbroad/longformer-base-health-fact | 16 | null | transformers | 9,337 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- health_fact
model-index:
- name: longformer-base-health-fact2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: health_fact
type: health_fact
split: test
metrics:
- name: F1
type: f1
value: 0.6732897445517078
- name: Accuracy
type: accuracy
value: 0.797242497972425
- name: False Accuracy
type: accuracy
value: 0.8092783505154639
- name: Mixture Accuracy
type: accuracy
value: 0.5323383084577115
- name: True Accuracy
type: accuracy
value: 0.9081803005008348
- name: Unproven Accuracy
type: accuracy
value: 0.4
---
# longformer-base-health-fact2
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the health_fact dataset.
It achieves the following results on the VALIDATION set:
- Loss: 0.5858
- Micro F1: 0.8122
- Macro F1: 0.6830
- False F1: 0.7941
- Mixture F1: 0.5015
- True F1: 0.9234
- Unproven F1: 0.5128
The following are the results on the TEST set:
- Macro F1: 0.6732897445517078
- Accuracy: 0.797242497972425
- False Accuracy: 0.8092783505154639
- Mixture Accuracy: 0.5323383084577115
- True Accuracy: 0.9081803005008348
- Unproven Accuracy: 0.4
## Model description
The health fact dataset is for building fact-checking models related to health. Here is how you can use this model:
```python
import torch
from transformers import pipeline
claim = "A mother revealed to her child in a letter after her death that she had just one eye because she had donated the other to him."
text = "In April 2005, we spotted a tearjerker on the Internet about a mother who gave up one of her eyes to a son who had lost one of his at an early age. By February 2007 the item was circulating in e-mail in the following shortened version: My mom only had one eye. I hated her… She was such an embarrassment. She cooked for students and teachers to support the family. There was this one day during elementary school where my mom came to say hello to me. I was so embarrassed. How could she do this to me? I ignored her, threw her a hateful look and ran out. The next day at school one of my classmates said, “EEEE, your mom only has one eye!” I wanted to bury myself. I also wanted my mom to just disappear. I confronted her that day and said, “If you’re only gonna make me a laughing stock, why don’t you just die?” My mom did not respond… I didn’t even stop to think for a second about what I had said, because I was full of anger. I was oblivious to her feelings. I wanted out of that house, and have nothing to do with her. So I studied real hard, got a chance to go abroad to study. Then, I got married. I bought a house of my own. I had kids of my own. I was happy with my life, my kids and the comforts. Then one day, my Mother came to visit me. She hadn’t seen me in years and she didn’t even meet her grandchildren. When she stood by the door, my children laughed at her, and I yelled at her for coming over uninvited. I screamed at her, “How dare you come to my house and scare my children! GET OUT OF HERE! NOW!! !” And to this, my mother quietly answered, “Oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. One day, a letter regarding a school reunion came to my house. So I lied to my wife that I was going on a business trip. After the reunion, I went to the old shack just out of curiosity. My neighbors said that she died. I did not shed a single tear. They handed me a letter that she had wanted me to have. My dearest son, I think of you all the time. I’m sorry that I came to your house and scared your children. I was so glad when I heard you were coming for the reunion. But I may not be able to even get out of bed to see you. I’m sorry that I was a constant embarrassment to you when you were growing up. You see……..when you were very little, you got into an accident, and lost your eye. As a mother, I couldn’t stand watching you having to grow up with one eye. So I gave you mine. I was so proud of my son who was seeing a whole new world for me, in my place, with that eye. With all my love to you, Your mother. In its earlier incarnation, the story identified by implication its location as Korea through statements made by both the mother and the son (the son’s “I left my mother and came to Seoul” and the mother’s “I won’t visit Seoul anymore”). It also supplied a reason for the son’s behavior when his mother arrived unexpectedly to visit him (“My little girl ran away, scared of my mom’s eye” and “I screamed at her, ‘How dare you come to my house and scare my daughter!'”). A further twist was provided in the original: rather than gaining the news of his mother’s death from neighbors (who hand him her letter), the son instead discovered the woman who bore him lying dead on the floor of what used to be his childhood home, her missive to him clutched in her lifeless hand: Give your parents roses while they are alive, not deadMY mom only had one eye. I hated her … she was such an embarrassment. My mom ran a small shop at a flea market. 
She collected little weeds and such to sell … anything for the money we needed she was such an embarrassment. There was this one day during elementary school … It was field day, and my mom came. I was so embarrassed. How could she do this to me? I threw her a hateful look and ran out. The next day at school … “your mom only has one eye?!? !” … And they taunted me. I wished that my mom would just disappear from this world so I said to my mom, “mom … Why don’t you have the other eye?! If you’re only going to make me a laughingstock, why don’t you just die?!! !” my mom did not respond … I guess I felt a little bad, but at the same time, it felt good to think that I had said what I’d wanted to say all this time… maybe it was because my mom hadn’t punished me, but I didn’t think that I had hurt her feelings very badly. That night… I woke up, and went to the kitchen to get a glass of water. My mom was crying there, so quietly, as if she was afraid that she might wake me. I took a look at her, and then turned away. Because of the thing I had said to her earlier, there was something pinching at me in the corner of my heart. Even so, I hated my mother who was crying out of her one eye. So I told myself that I would grow up and become successful. Because I hated my one-eyed mom and our desperate poverty… then I studied real hard. I left my mother and came to Seoul and studied, and got accepted in the Seoul University with all the confidence I had. Then, I got married. I bought a house of my own. Then I had kids, too… now I’m living happily as a successful man. I like it here because it’s a place that doesn’t remind me of my mom. This happiness was getting bigger and bigger, when… what?! Who’s this…it was my mother… still with her one eye. It felt as if the whole sky was falling apart on me. My little girl ran away, scared of my mom’s eye. And I asked her, “who are you? !” “I don’t know you!! !” as if trying to make that real. I screamed at her, “How dare you come to my house and scare my daughter!” “GET OUT OF HERE! NOW!! !” and to this, my mother quietly answered, “oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. Thank goodness… she doesn’t recognize me… I was quite relieved. I told myself that I wasn’t going to care, or think about this for the rest of my life. Then a wave of relief came upon me… One day, a letter regarding a school reunion came to my house. So, lying to my wife that I was going on a business trip, I went. After the reunion, I went down to the old shack, that I used to call a house… just out of curiosity there, I found my mother fallen on the cold ground. But I did not shed a single tear. She had a piece of paper in her hand…. it was a letter to me. My son… I think my life has been long enough now… And… I won’t visit Seoul anymore… but would it be too much to ask if I wanted you to come visit me once in a while? I miss you so much… and I was so glad when I heard you were coming for the reunion. But I decided not to go to the school. …for you… and I’m sorry that I only have one eye, and I was an embarrassment for you. You see, when you were very little, you got into an accident, and lost your eye. as a mom, I couldn’t stand watching you having to grow up with only one eye… so I gave you mine… I was so proud of my son that was seeing a whole new world for me, in my place, with that eye. I was never upset at you for anything you did… the couple times that you were angry with me, I thought to myself, ‘it’s because he loves me…’ my son. 
Oh, my son… I don’t want you to cry for me, because of my death. My son, I love you my son, I love you so much. With all modern medical technology, transplantation of the eyeball is still impossible. The optic nerve isn’t an ordinary nerve, but instead an inset running from the brain. Modern medicine isn’t able to “connect” an eyeball back to brain after an optic nerve has been severed, let alone transplant the eye from a different person. (The only exception is the cornea, the transparent part in front of the eye: corneas are transplanted to replace injured and opaque ones.) We won’t try to comment on whether any surgeon would accept an eye from a living donor for transplant into another — we’ll leave that to others who are far more knowledgeable about medical ethics and transplant procedures. But we will note that the plot device of a mother’s dramatic sacrifice for the sake of her child’s being revealed in a written communication delivered after her demise appears in another legend about maternal love: the 2008 tale about a woman who left a touching message on her cell phone even as life ebbed from her as she used her body to shield the tot during an earthquake. Giving up one’s own life for a loved one is central to a 2005 urban legend about a boy on a motorcycle who has his girlfriend hug him one last time and put on his helmet just before the crash that kills him and spares her. Returning to the “notes from the dead” theme is the 1995 story about a son who discovers only through a posthumous letter from his mother what their occasional dinner “dates” had meant to her. Another legend we’re familiar with features a meme used in the one-eyed mother story (the coming to light of the enduring love of the person who died for the completely unworthy person she’d lavished it on), but that one involves a terminally ill woman and her cheating husband. In it, an about-to-be-spurned wife begs the adulterous hoon she’d married to stick around for another 30 days and to carry her over the threshold of their home once every day of that month as her way of keeping him around long enough for her to kick the bucket and thus spare their son the knowledge that his parents were on the verge of divorce."
label = "false"
device = 0 if torch.cuda.is_available() else -1
pl = pipeline("text-classification", model="nbroad/longformer-base-health-fact", device=device)
input_text = claim+pl.tokenizer.sep_token+text
print(len(pl.tokenizer(input_text).input_ids))
# 2361 (which is why longformer is useful)
pl(input_text)
# [{'label': 'false', 'score': 0.8015491962432861}]
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | False F1 | Mixture F1 | True F1 | Unproven F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:----------:|:-------:|:-----------:|
| 0.555 | 1.0 | 613 | 0.5243 | 0.7842 | 0.5535 | 0.7698 | 0.4170 | 0.8938 | 0.1333 |
| 0.4282 | 2.0 | 1226 | 0.5008 | 0.8031 | 0.6393 | 0.7829 | 0.4605 | 0.9199 | 0.3939 |
| 0.2897 | 3.0 | 1839 | 0.5858 | 0.8122 | 0.6830 | 0.7941 | 0.5015 | 0.9234 | 0.5128 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
Calin/convnext-tiny-finteuned-eurosat | 6a193545136d094aa788f6960c4396bf77630a45 | 2022-04-27T15:28:02.000Z | [
"pytorch",
"convnext",
"image-classification",
"transformers"
] | image-classification | false | Calin | null | Calin/convnext-tiny-finteuned-eurosat | 16 | null | transformers | 9,338 | Entry not found |
Sathira/autotrain-mbtiNlp-798824628 | 5cec12d4fa5398b82a4d3aedae2942e2573171c9 | 2022-04-28T22:09:14.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Sathira/autotrain-data-mbtiNlp",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Sathira | null | Sathira/autotrain-mbtiNlp-798824628 | 16 | null | transformers | 9,339 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Sathira/autotrain-data-mbtiNlp
co2_eq_emissions: 121.67185089502216
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 798824628
- CO2 Emissions (in grams): 121.67185089502216
## Validation Metrics
- Loss: 0.5046824812889099
- Accuracy: 0.8472124039775673
- Macro F1: 0.7812978033330673
- Micro F1: 0.8472124039775673
- Weighted F1: 0.8464983956259307
- Macro Precision: 0.812208631055716
- Micro Precision: 0.8472124039775673
- Weighted Precision: 0.8478968364150775
- Macro Recall: 0.7593223884993787
- Micro Recall: 0.8472124039775673
- Weighted Recall: 0.8472124039775673
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Sathira/autotrain-mbtiNlp-798824628
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Sathira/autotrain-mbtiNlp-798824628", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Sathira/autotrain-mbtiNlp-798824628", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
gui-marra/finetuning-sentiment-model-25000-samples | 3aee7bf286b3fe36f82382d70d92dff2dd06c427 | 2022-05-03T22:48:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | gui-marra | null | gui-marra/finetuning-sentiment-model-25000-samples | 16 | null | transformers | 9,340 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-25000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9314
- name: F1
type: f1
value: 0.932017283069727
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-25000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3711
- Accuracy: 0.9314
- F1: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_42 | ba9048bf6984f647a72238b1b18208a36cd2e077 | 2022-05-10T23:43:37.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_42 | 16 | null | transformers | 9,341 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_42 | 23519cb00256882e80221b938432deafefa74c5b | 2022-05-11T00:01:13.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_42 | 16 | null | transformers | 9,342 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_66 | 6e40419b44bb2c29c7ec49f8b497d3b461545e76 | 2022-05-11T00:18:42.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_66 | 16 | null | transformers | 9,343 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_66 | 6dea5a622872ebc4e549fe509f2ac8d791f38af8 | 2022-05-11T00:35:32.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_66 | 16 | null | transformers | 9,344 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_66 | add9ad7c1e8c40ee62d2db0557f60d014ab02996 | 2022-05-11T00:53:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_66 | 16 | null | transformers | 9,345 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_77 | 91d1f5942a547cea1ebf2811b3e1427fa43fa4fc | 2022-05-11T01:10:45.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_77 | 16 | null | transformers | 9,346 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_77 | 66aeeb083408eb95f57fd5c6e69512267bb53d08 | 2022-05-11T01:27:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_77 | 16 | null | transformers | 9,347 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_77 | dd6ea871ff2a00ffa815256ea71e6440bdc206a8 | 2022-05-11T01:45:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_77 | 16 | null | transformers | 9,348 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_88 | 34d5aa6fe1d64ad1a3999b81ce816a99c6cfe3b0 | 2022-05-11T02:03:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_88 | 16 | null | transformers | 9,349 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_88 | e148de5fa68d54c927dffa0cd4d60654e75b2f34 | 2022-05-11T02:20:13.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_88 | 16 | null | transformers | 9,350 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_88 | af3ae3cd4757468414d50285354356b6a5f6a40d | 2022-05-11T02:37:13.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_88 | 16 | null | transformers | 9,351 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_99 | edc64611fd9b2d6f6ae9d7457e6b9aaa556c103d | 2022-05-11T02:54:20.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.2-class.exclusive.seed_99 | 16 | null | transformers | 9,352 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_99 | fe3fa9b6ed2d7f0c7298fccbef0149c94bc2168e | 2022-05-11T03:11:48.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.3-class.exclusive.seed_99 | 16 | null | transformers | 9,353 | Entry not found |
CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_99 | 94181cc3296436d5fc4033f5ebac1febfb3fbc93 | 2022-05-11T03:28:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/roberta-base.CEBaB.sa.5-class.exclusive.seed_99 | 16 | null | transformers | 9,354 | Entry not found |
nikitast/lang-segmentation-roberta | 2e44dd4b93237dfea1787ff7c369a850f15c09cb | 2022-07-18T11:41:03.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"ru",
"uk",
"be",
"kk",
"az",
"hy",
"ka",
"he",
"en",
"de",
"dataset:open_subtitles",
"dataset:tatoeba",
"dataset:oscar",
"transformers",
"language classification",
"text segmentation",
"autotrain_compatible"
] | token-classification | false | nikitast | null | nikitast/lang-segmentation-roberta | 16 | null | transformers | 9,355 | ---
language:
- ru
- uk
- be
- kk
- az
- hy
- ka
- he
- en
- de
tags:
- language classification
- text segmentation
datasets:
- open_subtitles
- tatoeba
- oscar
---
# RoBERTa for Multilabel Language Segmentation
## Training
RoBERTa fine-tuned on small parts of Open Subtitles, Oscar and Tatoeba datasets (~9k samples per language).
A heuristic algorithm for multilingual training data creation (with generation of target masks) is implemented at https://github.com/n1kstep/lang-classifier.
| data source | language |
|-----------------|----------------|
| open_subtitles | ka, he, en, de |
| oscar | be, kk, az, hy |
| tatoeba | ru, uk |
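A minimal sketch of running the segmenter through the token-classification pipeline (the aggregation strategy and the mixed-language example are illustrative; the returned group names depend on the label set stored in the model config):
```python
from transformers import pipeline

segmenter = pipeline(
    "token-classification",
    model="nikitast/lang-segmentation-roberta",
    aggregation_strategy="simple",
)
print(segmenter("Привет, как дела? I hope everything is fine."))
```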
## Validation
The metrics were obtained from validation on another part of the dataset (~1k samples per language).
| Validation Loss | Precision | Recall | F1-Score | Accuracy |
|-----------------|-----------|----------|----------|----------|
| 0.029172 | 0.919623 | 0.933586 | 0.926552 | 0.991883 | |
Vikings03/wikineural-multilingual-ner | 1413bc2c7b83194fb0c2b7d9b5f3bfadc0eca47a | 2022-05-13T13:51:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Vikings03 | null | Vikings03/wikineural-multilingual-ner | 16 | null | transformers | 9,356 | Entry not found |
Dizex/bert-finetuned-ner | 8c5b4fceb75a056e5c9ada4bc20de23177d97b32 | 2022-05-15T13:11:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Dizex | null | Dizex/bert-finetuned-ner | 16 | null | transformers | 9,357 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9360609574291867
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9434844310877368
- name: Accuracy
type: accuracy
value: 0.9865338199799847
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0618
- Precision: 0.9361
- Recall: 0.9510
- F1: 0.9435
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
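That said, the checkpoint can be used for standard CoNLL-2003 style NER (PER/ORG/LOC/MISC). A minimal sketch with an illustrative input sentence:
```python
from transformers import pipeline

ner = pipeline("ner", model="Dizex/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```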
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0824 | 1.0 | 1756 | 0.0656 | 0.9133 | 0.9330 | 0.9231 | 0.9825 |
| 0.0405 | 2.0 | 3512 | 0.0586 | 0.9291 | 0.9480 | 0.9384 | 0.9856 |
| 0.0193 | 3.0 | 5268 | 0.0618 | 0.9361 | 0.9510 | 0.9435 | 0.9865 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
alwaysgetbetter/bert-finetuned-ner | 80887a4b501be552657bb51d151af9797819ef2e | 2022-05-17T10:21:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | alwaysgetbetter | null | alwaysgetbetter/bert-finetuned-ner | 16 | null | transformers | 9,358 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9331679073614557
- name: Recall
type: recall
value: 0.9493436553349041
- name: F1
type: f1
value: 0.9411862851422373
- name: Accuracy
type: accuracy
value: 0.9861217401542356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Precision: 0.9332
- Recall: 0.9493
- F1: 0.9412
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0866 | 1.0 | 1756 | 0.0708 | 0.9142 | 0.9347 | 0.9244 | 0.9823 |
| 0.0405 | 2.0 | 3512 | 0.0574 | 0.9231 | 0.9480 | 0.9354 | 0.9853 |
| 0.0191 | 3.0 | 5268 | 0.0608 | 0.9332 | 0.9493 | 0.9412 | 0.9861 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
awilli/bert-finetuned-ner | 6f51f7552860716b6d8ac7caf47288ce8f28be7b | 2022-05-19T08:14:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | awilli | null | awilli/bert-finetuned-ner | 16 | null | transformers | 9,359 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9295401918623883
- name: Recall
type: recall
value: 0.9458094917536183
- name: F1
type: f1
value: 0.9376042709376042
- name: Accuracy
type: accuracy
value: 0.9848413492670866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0673
- Precision: 0.9295
- Recall: 0.9458
- F1: 0.9376
- Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0846 | 1.0 | 1756 | 0.0660 | 0.9073 | 0.9344 | 0.9207 | 0.9820 |
| 0.0409 | 2.0 | 3512 | 0.0622 | 0.9230 | 0.9456 | 0.9342 | 0.9851 |
| 0.0202 | 3.0 | 5268 | 0.0673 | 0.9295 | 0.9458 | 0.9376 | 0.9848 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Akshat/distilbert-base-uncased-finetuned-emotion | a3b2d0b5b844f3752d42d0ed17856ae32c1e50c2 | 2022-05-21T13:37:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Akshat | null | Akshat/distilbert-base-uncased-finetuned-emotion | 16 | null | transformers | 9,360 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9216312760504648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2246
- Accuracy: 0.922
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
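That said, the checkpoint can be used to score English text against the six emotion classes. A minimal sketch (the input sentence is illustrative; `return_all_scores=True` matches the Transformers 4.18 API used for this card, and label names may surface as `LABEL_0`–`LABEL_5` unless an id2label mapping is stored):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Akshat/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how wonderful today has been!", return_all_scores=True))
```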
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8424 | 1.0 | 250 | 0.3246 | 0.9025 | 0.8989 |
| 0.2533 | 2.0 | 500 | 0.2246 | 0.922 | 0.9216 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Rewire/XTC | 0d24774ad82ac586f3b7c3e76ce56e2663f710f3 | 2022-05-24T11:20:44.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Rewire | null | Rewire/XTC | 16 | null | transformers | 9,361 | (COMING SOON!)
MULTILINGUAL HATECHECK: Functional Tests for Multilingual Hate Speech Detection Models |
DaveMSE/bert-finetuned-ner | bd0a8070cba94555a6b9ebf49a402820fd209b27 | 2022-05-24T20:10:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | DaveMSE | null | DaveMSE/bert-finetuned-ner | 16 | null | transformers | 9,362 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9333333333333333
- name: Recall
type: recall
value: 0.9495119488387749
- name: F1
type: f1
value: 0.9413531325602736
- name: Accuracy
type: accuracy
value: 0.9857243774651204
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0669
- Precision: 0.9333
- Recall: 0.9495
- F1: 0.9414
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0867 | 1.0 | 1756 | 0.0647 | 0.9227 | 0.9377 | 0.9301 | 0.9838 |
| 0.0383 | 2.0 | 3512 | 0.0603 | 0.9308 | 0.9500 | 0.9403 | 0.9854 |
| 0.0184 | 3.0 | 5268 | 0.0669 | 0.9333 | 0.9495 | 0.9414 | 0.9857 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa | 4b340a0dc969e464c75ddac4820d105b51f7c843 | 2022-06-08T15:51:15.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:pn_summary",
"transformers",
"summarization",
"fa",
"Abstractive Summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | ahmeddbahaa | null | ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa | 16 | null | transformers | 9,363 | ---
tags:
- summarization
- fa
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- pn_summary
model-index:
- name: mT5_multilingual_XLSum-finetuned-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-fa
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the pn_summary dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5703
- Rouge-1: 45.12
- Rouge-2: 26.25
- Rouge-l: 39.96
- Gen Len: 48.72
- Bertscore: 79.54
## Model description
More information needed
## Intended uses & limitations
More information needed
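That said, the checkpoint can be used for abstractive summarization of Persian news text. A minimal sketch (generation settings are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa")

article = "..."  # a Persian news article goes here
print(summarizer(article, max_length=64, min_length=16, no_repeat_ngram_size=2))
```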
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tzq0301/mT5-news-title-generation | 4583b28b34567f6ce0da946a5fc72b60d96b0daf | 2022-06-01T06:00:12.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | tzq0301 | null | tzq0301/mT5-news-title-generation | 16 | null | transformers | 9,364 | ---
license: mit
---
|
HIT-TMG/Dialogue-BART-large | aa76e28a856b228af02a178f28d107cf169f7ca1 | 2022-06-02T08:48:55.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | HIT-TMG | null | HIT-TMG/Dialogue-BART-large | 16 | null | transformers | 9,365 | Entry not found |
huggingtweets/aksumfootball-geirjordet-slawekmorawski | 80b06866e574f00961d36147151e7dcabdcd5c00 | 2022-06-06T15:21:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/aksumfootball-geirjordet-slawekmorawski | 16 | null | transformers | 9,366 | ---
language: en
thumbnail: http://www.huggingtweets.com/aksumfootball-geirjordet-slawekmorawski/1654528907750/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1318130998757019649/R8dWYi_b_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1255843414135975937/9e-_Lg2V_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1060604477466652675/syszhdwg_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Geir Jordet & Karl Marius Aksum & Sławek Morawski</div>
<div style="text-align: center; font-size: 14px;">@aksumfootball-geirjordet-slawekmorawski</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Geir Jordet & Karl Marius Aksum & Sławek Morawski.
| Data | Geir Jordet | Karl Marius Aksum | Sławek Morawski |
| --- | --- | --- | --- |
| Tweets downloaded | 507 | 2778 | 468 |
| Retweets | 47 | 855 | 122 |
| Short tweets | 22 | 137 | 10 |
| Tweets kept | 438 | 1786 | 336 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3s7mtfgq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aksumfootball-geirjordet-slawekmorawski's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5jtmflz8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5jtmflz8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aksumfootball-geirjordet-slawekmorawski')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Gaborandi/distilbert-pubmed-MLM | db839f81944230c84e5baa2b6d3d375699c853b3 | 2022-06-08T02:55:14.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Gaborandi | null | Gaborandi/distilbert-pubmed-MLM | 16 | null | transformers | 9,367 | Entry not found |
ghadeermobasher/WLT-BioBERT-NCBI | 448fd98b4b4c1cda3787151da8be35fec1d06c45 | 2022-06-09T08:46:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-BioBERT-NCBI | 16 | null | transformers | 9,368 | Entry not found |
Skil-Internal/bart-paraphrase-finetuned-xsum-v5 | d1ed69a21fc47d997d24c4fde71c9ab94e081bfc | 2022-06-09T09:42:05.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Skil-Internal | null | Skil-Internal/bart-paraphrase-finetuned-xsum-v5 | 16 | null | transformers | 9,369 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-finetuned-xsum-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-finetuned-xsum-v5
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 263 | 0.4728 | 38.7072 | 38.5333 | 38.6391 | 38.6212 | 7.0513 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ghadeermobasher/WLT-BlueBERT-NCBI | 8a274f1ef1d29bb10ddb4c3021877d99321c08c1 | 2022-06-09T15:09:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/WLT-BlueBERT-NCBI | 16 | null | transformers | 9,370 | Entry not found |
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned | ed75ffcb9f8d8d6ec9dcf181996bdcd231e185cc | 2022-06-09T23:31:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ajtamayoh | null | ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned | 16 | null | transformers | 9,371 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0537
- Precision: 0.8585
- Recall: 0.7101
- F1: 0.7773
- Accuracy: 0.9893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0693 | 1.0 | 514 | 0.0416 | 0.9485 | 0.6492 | 0.7708 | 0.9884 |
| 0.0367 | 2.0 | 1028 | 0.0396 | 0.9391 | 0.6710 | 0.7827 | 0.9892 |
| 0.0283 | 3.0 | 1542 | 0.0385 | 0.9388 | 0.6889 | 0.7947 | 0.9899 |
| 0.0222 | 4.0 | 2056 | 0.0422 | 0.9456 | 0.6790 | 0.7904 | 0.9898 |
| 0.0182 | 5.0 | 2570 | 0.0457 | 0.9349 | 0.6925 | 0.7956 | 0.9901 |
| 0.013 | 6.0 | 3084 | 0.0484 | 0.8947 | 0.7062 | 0.7894 | 0.9899 |
| 0.0084 | 7.0 | 3598 | 0.0537 | 0.8585 | 0.7101 | 0.7773 | 0.9893 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
kjunelee/distilbert-base-uncased-finetuned-emotion | 7395d53b0501cd63739fa0a8383df383e02abbf6 | 2022-06-10T00:24:32.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | kjunelee | null | kjunelee/distilbert-base-uncased-finetuned-emotion | 16 | null | transformers | 9,372 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.931
- name: F1
type: f1
value: 0.9313235272564213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1595
- Accuracy: 0.931
- F1: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.1873 | 0.924 | 0.9234 |
| 0.1992 | 2.0 | 250 | 0.1649 | 0.929 | 0.9293 |
| 0.1992 | 3.0 | 375 | 0.1595 | 0.931 | 0.9313 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
Yehor/wav2vec2-xls-r-300m-uk-with-wiki-lm | 7f84bfa006ec847124b98e9186cb3cdc42e2b6e2 | 2022-07-30T07:00:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_10_0",
"transformers",
"license:cc-by-sa-3.0"
] | automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-300m-uk-with-wiki-lm | 16 | null | transformers | 9,373 | ---
language:
- uk
license: "cc-by-sa-3.0"
datasets:
- mozilla-foundation/common_voice_10_0
---
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model's vocabulary includes apostrophes and hyphens, so transcriptions preserve them.
Metrics:
| Dataset | CER | WER |
|-|-|-|
| CV7 (no LM) | 0.0432 | 0.2288 |
| CV7 (with LM) | 0.0267 | 0.1283 |
| CV10 (no LM) | 0.0412 | 0.2206 |
| CV10 (with LM) | 0.025 | 0.1203 |
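A minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder; LM-boosted decoding additionally requires the `pyctcdecode` and `kenlm` packages):
```python
from transformers import pipeline

# Load the acoustic model; the processor bundled with the checkpoint handles decoding.
asr = pipeline("automatic-speech-recognition", model="Yehor/wav2vec2-xls-r-300m-uk-with-wiki-lm")

# "audio.wav" is a placeholder for a 16 kHz mono Ukrainian recording.
print(asr("audio.wav")["text"])
```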
|
ilhami/Tr_En-MbartFinetune | 0202aa49b954d0782556e8e130d7a6f968934ec8 | 2022-06-12T12:01:16.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"tr",
"en",
"dataset:Parallel Corpora for Turkish-English Academic Translations",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | ilhami | null | ilhami/Tr_En-MbartFinetune | 16 | null | transformers | 9,374 | ---
language:
- tr
- en
tags:
- translation
license: apache-2.0
datasets:
- Parallel Corpora for Turkish-English Academic Translations
metrics:
- bleu
- sacrebleu
---
## Model Details
- **Developed by:** İlhami SEL
- **Model type:** Mbart Finetune Machine Translation
- **Language:** Turkish - English
- **Resources for more information:** Sel, İ. , Üzen, H. & Hanbay, D. (2021). Creating a Parallel Corpora for Turkish-English Academic Translations . Computer Science , 5th International Artificial Intelligence and Data Processing symposium , 335-340 . DOI: 10.53070/bbd.990959
```python
checkpoint = "ilhami/Tr_En-MbartFinetune"
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to("cuda")
tokenizer.src_lang = "tr_TR"
tr= ["Sohbet robotları son yıllarda yaygın bir şekilde kullanılmaya başlanmıştır. ",
"İnsanları taklit eden ve daha iyi müşteri memnuniyeti sağlayan sohbet robotları en gelişkin doğal dil işleme tekniklerine ihtiyaç duymaktadır. ",
"Bu çalışma sohbet robotu konuşmalarının niyet tahminini geliştirmeye odaklanmıştır." ,
"Kelime gösterimi için TF-IDF, Doc2vec ve BERT gibi geleneksel ve gelişmiş doğal dil işleme yöntemleri, çoklu sınıf ve çoklu etiket tahmini için ise lojistik regresyon, rastgele orman ve yapay sinir ağları kullanılmıştır." ,
"Sohbet robotu konuşma veri kümeleri, sinema bileti rezervasyonu, restoran rezervasyonu ve taksi çağırma olmak üzere üç farklı alandan alınmıştır. ",
"Bu çalışmanın sonunda, BERT ve BERT ile TF-IDF birleşimi modellerin diğer kombinasyonlardan daha iyi sonuç verdiği görülmüştür. ",
"BERT gibi ön eğitimli modellerden faydalanmanın daha iyi bağlamsal anlama sağladığı ortaya çıkmıştır. ",
"TF-IDF yerleştirmeleri, BERT gösterimi ile birleştirilerek niyet kategorisi tahmininin iyileştirilmesi amaçlanmıştır."]
encoded_tr = tokenizer(tr, return_tensors="pt", padding=True, truncation=True).to("cuda")
generated_tokens = model.generate(**encoded_tr, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
en = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
|
YuryK/distilbert-base-uncased-finetuned-emotion | 344dc2820fac936ddc3f669366ca4dc1b460d5b5 | 2022-07-15T06:51:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | YuryK | null | YuryK/distilbert-base-uncased-finetuned-emotion | 16 | null | transformers | 9,375 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.933
- name: F1
type: f1
value: 0.9332773351360893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1669
- Accuracy: 0.933
- F1: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8058 | 1.0 | 250 | 0.2778 | 0.917 | 0.9158 |
| 0.2124 | 2.0 | 500 | 0.1907 | 0.926 | 0.9262 |
| 0.1473 | 3.0 | 750 | 0.1669 | 0.933 | 0.9333 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Adapting/comfort_congratulations_neutral-classifier | 59ed3dacf314425d944e4ab3dc0ff71a9c70546e | 2022-06-27T14:24:27.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Adapting | null | Adapting/comfort_congratulations_neutral-classifier | 16 | null | transformers | 9,376 |
# Adapting/comfort_congratulations_neutral-classifier
Code used to train this model: https://colab.research.google.com/drive/1BHc8UMuT0sRyA_M24Acits5oHwUmjsFm?usp=sharing
Dataset: https://huggingface.co/datasets/Adapting/empathetic_dialogues_v2
Label mapping:
- LABEL_0: neutral
- LABEL_1: congratulating
- LABEL_2: comforting
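A minimal usage sketch with the `transformers` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# The returned label ids follow the mapping listed above (LABEL_0/1/2).
classifier = pipeline("text-classification", model="Adapting/comfort_congratulations_neutral-classifier")
print(classifier("Congratulations on your new job, that is wonderful news!"))
# e.g. [{'label': 'LABEL_1', 'score': ...}]
```
|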
ghadeermobasher/BC5CDR-Chem-Modified-BlueBERT-512 | 9e6f3c55bec81471a331f4f53e4b9eb9514e23d7 | 2022-06-13T23:10:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified-BlueBERT-512 | 16 | null | transformers | 9,377 | Entry not found |
ghadeermobasher/BC4CHEMD-Chem-Original-PubMedBERT-512 | 7290f7a345b448bf9279a9a41f86ed4335c81713 | 2022-06-15T21:58:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Chem-Original-PubMedBERT-512 | 16 | null | transformers | 9,378 | Entry not found |
ghadeermobasher/BC4CHEMD-Original-BioBERT-384 | 7339c56ee7688e9f168e2e1cdbf4367507d31085 | 2022-06-15T19:17:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BC4CHEMD-Original-BioBERT-384 | 16 | null | transformers | 9,379 | Entry not found |
Salvatore/bert-finetuned-ner | e5deabd6d3a71b816d8b188900f19abca8343ae8 | 2022-06-28T15:24:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Salvatore | null | Salvatore/bert-finetuned-ner | 16 | null | transformers | 9,380 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0997
- Proteinmutation F1: 0.1309
- Snp F1: 0.1953
- Dnamutation F1: 0.3778
- Precision: 0.2380
- Recall: 0.2416
- F1: 0.2398
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Proteinmutation F1 | Snp F1 | Dnamutation F1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:------:|:--------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 324 | 0.0533 | 0.0396 | 0.2830 | 0.4667 | 0.2334 | 0.3221 | 0.2707 | 0.9788 |
| 0.1072 | 2.0 | 648 | 0.0437 | 0.6065 | 0.4906 | 0.5009 | 0.4802 | 0.6348 | 0.5468 | 0.9868 |
| 0.1072 | 3.0 | 972 | 0.0592 | 0.1379 | 0.2485 | 0.2005 | 0.1639 | 0.2228 | 0.1889 | 0.9731 |
| 0.0573 | 4.0 | 1296 | 0.0722 | 0.0749 | 0.2530 | 0.4692 | 0.2705 | 0.2959 | 0.2826 | 0.9749 |
| 0.0431 | 5.0 | 1620 | 0.0766 | 0.1574 | 0.1847 | 0.2540 | 0.1766 | 0.2285 | 0.1992 | 0.9723 |
| 0.0431 | 6.0 | 1944 | 0.0805 | 0.1099 | 0.2202 | 0.2383 | 0.1657 | 0.2097 | 0.1851 | 0.9715 |
| 0.0396 | 7.0 | 2268 | 0.0886 | 0.1337 | 0.2138 | 0.4318 | 0.2683 | 0.2678 | 0.2680 | 0.9724 |
| 0.0354 | 8.0 | 2592 | 0.0927 | 0.1535 | 0.2113 | 0.3769 | 0.2505 | 0.2528 | 0.2516 | 0.9714 |
| 0.0354 | 9.0 | 2916 | 0.0978 | 0.1011 | 0.2540 | 0.3812 | 0.2495 | 0.2528 | 0.2512 | 0.9705 |
| 0.0312 | 10.0 | 3240 | 0.0997 | 0.1309 | 0.1953 | 0.3778 | 0.2380 | 0.2416 | 0.2398 | 0.9703 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.12.1
|
loubnabnl/codeparrot-small-megatron | 709781d6de024fdee09d70071b4776e9f4b7902f | 2022-06-21T09:48:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"code",
"dataset:lvwerra/codeparrot-clean-train",
"transformers",
"generation",
"model-index"
] | text-generation | false | loubnabnl | null | loubnabnl/codeparrot-small-megatron | 16 | 0 | transformers | 9,381 | ---
language: code
tags:
- code
- gpt2
- generation
datasets:
- lvwerra/codeparrot-clean-train
widget:
- text: "from transformers import"
example_title: "Transformers"
- text: "def print_hello_world():\n\t"
example_title: "Hello World!"
- text: "def get_file_size(filepath):"
example_title: "File size"
- text: "import numpy as"
example_title: "Numpy"
model-index:
- name: codeparrot
results:
- task:
name: Code Generation
type: code-generation
dataset:
name: "HumanEval"
type: openai_humaneval
metrics:
- name: pass@1
type: code_eval
value: 5.58
- name: pass@10
type: code_eval
value: 8.37
- name: pass@100
type: code_eval
value: 12.6
---
# CodeParrot 🦜
CodeParrot 🦜 is a GPT-2 model (100M parameters) trained to generate Python code. A larger model (1.5B) is also available [here](https://huggingface.co/lvwerra/codeparrot).
## Usage
You can load the CodeParrot model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("CodeParrot/codeparrot-small")
model = AutoModelWithLMHead.from_pretrained("CodeParrot/codeparrot-small")
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
or with a `pipeline`:
```Python
from transformers import pipeline
pipe = pipeline("text-generation", model="CodeParrot/codeparrot-small")
outputs = pipe("def hello_world():")
```
## Training
The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/lvwerra/codeparrot-clean) for 150k steps; the training settings are listed in the following table:
| Parameter| value|
|----|----|
|Batch size| 192 |
|Context size| 1024 |
|Training steps| 150'000|
|Learning rate| 5e-4 |
|Weight decay | 0.1 |
|Warmup steps| 2000 |
|Schedule| Cosine |
The training was executed on 8 x A100 (40GB) GPUs.
## Performance
We evaluated the model on OpenAI's [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark which consists of programming challenges:
| Metric | score|
|--------|-----|
|pass@1 | 5.58% |
|pass@10 | 8.38% |
|pass@100 | 12.6% |
The [pass@k metric](https://huggingface.co/metrics/code_eval) tells the probability that at least one out of k generations passes the tests.
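For illustration, the unbiased pass@k estimator introduced with HumanEval (Chen et al., 2021) can be computed as follows (a sketch: `n` sampled generations per problem, `c` of them passing the tests; the counts below are assumed values):
```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k for a single problem."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example with assumed counts: 200 samples, 12 of them correct.
print(pass_at_k(n=200, c=12, k=10))
```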
## Resources
- Dataset: [full](https://huggingface.co/datasets/lvwerra/codeparrot-clean), [train](https://huggingface.co/datasets/lvwerra/codeparrot-clean-train), [valid](https://huggingface.co/datasets/lvwerra/codeparrot-clean-valid)
- Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
- Spaces: [generation](https://huggingface.co/spaces/lvwerra/codeparrot-generation), [highlighting](https://huggingface.co/spaces/lvwerra/codeparrot-highlighting), [Comparison to other code models](https://huggingface.co/spaces/loubnabnl/code-generation-models) |
mindwrapped/gpt2-lotr-fellowship | d04e442e12e7da431a7c4bc78343acc451b964eb | 2022-06-17T02:14:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mindwrapped | null | mindwrapped/gpt2-lotr-fellowship | 16 | null | transformers | 9,382 | Entry not found |
chandrasutrisnotjhong/bert-finetuned-ner | 4513622ae90e678e415d59e33989c41a2dd92afe | 2022-07-04T03:53:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | chandrasutrisnotjhong | null | chandrasutrisnotjhong/bert-finetuned-ner | 16 | null | transformers | 9,383 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9337299619897538
- name: Recall
type: recall
value: 0.9508582968697409
- name: F1
type: f1
value: 0.9422162928374885
- name: Accuracy
type: accuracy
value: 0.9861217401542356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0637
- Precision: 0.9337
- Recall: 0.9509
- F1: 0.9422
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0867 | 1.0 | 1756 | 0.0633 | 0.9132 | 0.9369 | 0.9249 | 0.9831 |
| 0.039 | 2.0 | 3512 | 0.0599 | 0.9333 | 0.9495 | 0.9414 | 0.9862 |
| 0.0202 | 3.0 | 5268 | 0.0637 | 0.9337 | 0.9509 | 0.9422 | 0.9861 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zdreiosis/ff_analysis_5 | 81b30320303e2b42ddaf608071670bc8363d4327 | 2022-06-18T14:54:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"gen_ffa",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | zdreiosis | null | zdreiosis/ff_analysis_5 | 16 | null | transformers | 9,384 | ---
license: apache-2.0
tags:
- gen_ffa
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ff_analysis_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ff_analysis_5
This model is a fine-tuned version of [zdreiosis/ff_analysis_5](https://huggingface.co/zdreiosis/ff_analysis_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- F1: 0.9306
- Roc Auc: 0.9483
- Accuracy: 0.8137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 0.27 | 50 | 0.0846 | 0.9305 | 0.9476 | 0.8075 |
| No log | 0.55 | 100 | 0.1000 | 0.9070 | 0.9320 | 0.7484 |
| No log | 0.82 | 150 | 0.0945 | 0.9126 | 0.9349 | 0.7640 |
| No log | 1.1 | 200 | 0.0973 | 0.9119 | 0.9353 | 0.7764 |
| No log | 1.37 | 250 | 0.0880 | 0.9336 | 0.9504 | 0.8261 |
| No log | 1.65 | 300 | 0.0857 | 0.9246 | 0.9434 | 0.8043 |
| No log | 1.92 | 350 | 0.0844 | 0.9324 | 0.9488 | 0.8199 |
| No log | 2.2 | 400 | 0.0881 | 0.9232 | 0.9450 | 0.7888 |
| No log | 2.47 | 450 | 0.0875 | 0.9277 | 0.9462 | 0.8012 |
| 0.1226 | 2.75 | 500 | 0.0824 | 0.9306 | 0.9483 | 0.8137 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
NouRed/segformer-b0-finetuned-segments-water-2 | 43085b11babf0d38cc12bdf28939264d15ce408c | 2022-06-29T22:43:41.000Z | [
"pytorch",
"tensorboard",
"segformer",
"transformers",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-segmentation | false | NouRed | null | NouRed/segformer-b0-finetuned-segments-water-2 | 16 | null | transformers | 9,385 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-water-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-water-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the NouRed/water_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5551
- Mean Iou: nan
- Mean Accuracy: nan
- Overall Accuracy: nan
- Per Category Iou: [nan, nan]
- Per Category Accuracy: [nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
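A minimal inference sketch (the image path is a placeholder; if the checkpoint does not include a preprocessor config, the base `nvidia/mit-b0` feature extractor can be substituted):
```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

checkpoint = "NouRed/segformer-b0-finetuned-segments-water-2"
extractor = SegformerFeatureExtractor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("scene.jpg")  # placeholder input image
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, height/4, width/4)

# Per-pixel class ids at reduced resolution (upsample to the original size if needed).
pred = logits.argmax(dim=1)[0]
print(pred.shape)
```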
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 0.5065 | 6.67 | 20 | 0.5551 | nan | nan | nan | [nan, nan] | [nan, nan] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
TariqYousef/german-intensifiers-tagging | 0f8a6ff8162256a9992f86316710d0e1695786dd | 2022-06-22T23:36:03.000Z | [
"pytorch",
"bert",
"token-classification",
"de",
"transformers",
"token classificaition",
"license:cc-by-4.0",
"autotrain_compatible"
] | token-classification | false | TariqYousef | null | TariqYousef/german-intensifiers-tagging | 16 | null | transformers | 9,386 | ---
language:
- de
tags:
- token classificaition
license: cc-by-4.0
---
### German Intensifiers Tagging
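A minimal usage sketch (the German example sentence is illustrative; label names depend on the model's config):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="TariqYousef/german-intensifiers-tagging",
    aggregation_strategy="simple",
)
# "sehr" is the intensifier in this example sentence.
print(tagger("Der Film war sehr spannend."))
```
|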
sudo-s/exper_batch_32_e8 | 99091befda7e8eb00eeb8621248f729c8b1d706b | 2022-06-26T23:45:06.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | sudo-s | null | sudo-s/exper_batch_32_e8 | 16 | null | transformers | 9,387 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper_batch_32_e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper_batch_32_e8
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3520
- Accuracy: 0.9113
## Model description
More information needed
## Intended uses & limitations
More information needed
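A minimal inference sketch (the image path is a placeholder for a specimen photo from the dataset):
```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="sudo-s/exper_batch_32_e8")
print(classifier(Image.open("specimen.jpg")))  # top predicted classes with scores
```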
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Apex, opt level O1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.3787 | 0.31 | 100 | 3.3100 | 0.3566 |
| 2.3975 | 0.62 | 200 | 2.3196 | 0.5717 |
| 1.5578 | 0.94 | 300 | 1.6764 | 0.6461 |
| 1.0291 | 1.25 | 400 | 1.1713 | 0.7463 |
| 0.8185 | 1.56 | 500 | 0.9292 | 0.7953 |
| 0.6181 | 1.88 | 600 | 0.7732 | 0.8169 |
| 0.3873 | 2.19 | 700 | 0.6877 | 0.8277 |
| 0.2979 | 2.5 | 800 | 0.6250 | 0.8404 |
| 0.2967 | 2.81 | 900 | 0.6151 | 0.8365 |
| 0.1874 | 3.12 | 1000 | 0.5401 | 0.8608 |
| 0.2232 | 3.44 | 1100 | 0.5032 | 0.8712 |
| 0.1109 | 3.75 | 1200 | 0.4635 | 0.8774 |
| 0.0539 | 4.06 | 1300 | 0.4495 | 0.8843 |
| 0.0668 | 4.38 | 1400 | 0.4273 | 0.8951 |
| 0.0567 | 4.69 | 1500 | 0.4427 | 0.8867 |
| 0.0285 | 5.0 | 1600 | 0.4092 | 0.8955 |
| 0.0473 | 5.31 | 1700 | 0.3720 | 0.9071 |
| 0.0225 | 5.62 | 1800 | 0.3691 | 0.9063 |
| 0.0196 | 5.94 | 1900 | 0.3775 | 0.9048 |
| 0.0173 | 6.25 | 2000 | 0.3641 | 0.9040 |
| 0.0092 | 6.56 | 2100 | 0.3551 | 0.9090 |
| 0.008 | 6.88 | 2200 | 0.3591 | 0.9125 |
| 0.0072 | 7.19 | 2300 | 0.3542 | 0.9121 |
| 0.007 | 7.5 | 2400 | 0.3532 | 0.9106 |
| 0.007 | 7.81 | 2500 | 0.3520 | 0.9113 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.5.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RuiqianLi/Malaya-speech_fine-tune_realcase_27_Jun | a8e98583be38db4448e91f5165f808346698427c | 2022-06-30T02:09:05.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:uob_singlish",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | RuiqianLi | null | RuiqianLi/Malaya-speech_fine-tune_realcase_27_Jun | 16 | null | transformers | 9,388 | ---
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: Malaya-speech_fine-tune_realcase_27_Jun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Malaya-speech_fine-tune_realcase_27_Jun
This model is a fine-tuned version of [malay-huggingface/wav2vec2-xls-r-300m-mixed](https://huggingface.co/malay-huggingface/wav2vec2-xls-r-300m-mixed) on the uob_singlish dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9159
- Wer: 0.3819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.3176 | 1.82 | 20 | 0.8928 | 0.3542 |
| 0.6716 | 3.64 | 40 | 0.9123 | 0.3681 |
| 0.3484 | 5.45 | 60 | 0.9509 | 0.3681 |
| 0.3064 | 7.27 | 80 | 0.9227 | 0.3958 |
| 0.3017 | 9.09 | 100 | 0.9159 | 0.3819 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Eleven/distilbert-base-uncased-finetuned-emotion | f8c004744cc8f77ba103f6d775df8012b343562f | 2022-07-22T15:05:00.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Eleven | null | Eleven/distilbert-base-uncased-finetuned-emotion | 16 | null | transformers | 9,389 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2263
- Accuracy: 0.9225
- F1: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8571 | 1.0 | 250 | 0.3333 | 0.902 | 0.8982 |
| 0.2507 | 2.0 | 500 | 0.2263 | 0.9225 | 0.9221 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
ccdv/lsg-bart-base-16384 | fc051b4ccce9caae550cc5e2d2fb8134453ab95d | 2022-07-25T05:35:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:1910.13461",
"transformers",
"summarization",
"long context",
"fill-mask",
"autotrain_compatible"
] | fill-mask | false | ccdv | null | ccdv/lsg-bart-base-16384 | 16 | null | transformers | 9,390 | ---
tags:
- summarization
- bart
- long context
language:
- en
pipeline_tag: fill-mask
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
This model is adapted from [BART-base](https://huggingface.co/facebook/bart-base) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub, and relies on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...). \
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, you need to add trust_remote_code=True to use it.
```python:
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-bart-base-16384", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384")
```
## Parameters
You can change various parameters like :
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python:
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-bart-base-16384",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task dependent. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mecanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* sparsity_type="block_stride", use a striding mecanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
## Tasks
Seq2Seq example for summarization:
```python:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-16384",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
padding="max_length", # Optional but recommended
truncation=True # Optional but recommended
)
output = model(**token_ids)
```
Classification example:
```python:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-bart-base-16384",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**BART**
```
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Salvatore/bert-finetuned-mutation-recognition-0 | 336556be7865ad600dbfeb68eb00264bc214d8ef | 2022-06-29T13:41:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Salvatore | null | Salvatore/bert-finetuned-mutation-recognition-0 | 16 | null | transformers | 9,391 | Entry not found |
Salvatore/bert-finetuned-mutation-recognition-1 | dc9611f72e763247c259f3e374b135af4115f8c4 | 2022-06-29T13:59:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Salvatore | null | Salvatore/bert-finetuned-mutation-recognition-1 | 16 | null | transformers | 9,392 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-mutation-recognition-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mutation-recognition-1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0380
- Proteinmutation F1: 0.8631
- Dnamutation F1: 0.7522
- Snp F1: 1.0
- Precision: 0.8061
- Recall: 0.8386
- F1: 0.8221
- Accuracy: 0.9942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Proteinmutation F1 | Dnamutation F1 | Snp F1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:--------------:|:------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 259 | 0.0273 | 0.8072 | 0.5762 | 0.975 | 0.6685 | 0.7580 | 0.7104 | 0.9924 |
| 0.0597 | 2.0 | 518 | 0.0260 | 0.8148 | 0.6864 | 0.9873 | 0.7363 | 0.8004 | 0.7670 | 0.9936 |
| 0.0597 | 3.0 | 777 | 0.0338 | 0.8252 | 0.7221 | 1.0 | 0.7857 | 0.7941 | 0.7899 | 0.9935 |
| 0.0046 | 4.0 | 1036 | 0.0299 | 0.8707 | 0.7214 | 0.9873 | 0.7773 | 0.8450 | 0.8098 | 0.9941 |
| 0.0046 | 5.0 | 1295 | 0.0353 | 0.9035 | 0.7364 | 0.9873 | 0.8130 | 0.8493 | 0.8307 | 0.9941 |
| 0.0014 | 6.0 | 1554 | 0.0361 | 0.8941 | 0.7391 | 0.9873 | 0.8093 | 0.8471 | 0.8278 | 0.9941 |
| 0.0014 | 7.0 | 1813 | 0.0367 | 0.8957 | 0.7249 | 1.0 | 0.8090 | 0.8365 | 0.8225 | 0.9940 |
| 0.0004 | 8.0 | 2072 | 0.0381 | 0.8714 | 0.7578 | 1.0 | 0.8266 | 0.8301 | 0.8284 | 0.9940 |
| 0.0004 | 9.0 | 2331 | 0.0380 | 0.8732 | 0.7550 | 1.0 | 0.8148 | 0.8408 | 0.8276 | 0.9942 |
| 0.0002 | 10.0 | 2590 | 0.0380 | 0.8631 | 0.7522 | 1.0 | 0.8061 | 0.8386 | 0.8221 | 0.9942 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.12.1
|
gaunernst/bert-tiny-uncased | 0408b8940342cd18ff1d59ed698a17597aac2319 | 2022-07-02T03:02:15.000Z | [
"pytorch",
"bert",
"transformers",
"license:apache-2.0"
] | null | false | gaunernst | null | gaunernst/bert-tiny-uncased | 16 | null | transformers | 9,393 | ---
license: apache-2.0
---
|
infinix/Sheldon-bot | f069ccf3c5bb41672973d37473faf001f16f66f0 | 2022-07-02T11:06:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | infinix | null | infinix/Sheldon-bot | 16 | null | transformers | 9,394 | ---
tags:
- conversational
---
# Sheldon Model |
Aktsvigun/bart-base_xsum_23419 | fe2b96ed381defc1de1191dc68f7a2b31cb7526d | 2022-07-07T14:37:15.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_xsum_23419 | 16 | null | transformers | 9,395 | Entry not found |
tau/spider-trivia-ctx-encoder | 7fba5bf0ebcf9e978b7b1b38119a3643447dc135 | 2022-07-04T07:03:47.000Z | [
"pytorch",
"dpr",
"transformers"
] | null | false | tau | null | tau/spider-trivia-ctx-encoder | 16 | null | transformers | 9,396 | Entry not found |
Sedigh/RoBERTa-large-PM-M3-Voc | 97d264a00cbdca14ab0b247a11e6be069cf308e2 | 2022-07-06T09:22:41.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:cc"
] | text-classification | false | Sedigh | null | Sedigh/RoBERTa-large-PM-M3-Voc | 16 | null | transformers | 9,397 | |
naver/efficient-splade-VI-BT-large-doc | 86552fafb2aa3380e335b8fd63c4a5afafc0639e | 2022-07-08T13:12:18.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:ms_marco",
"transformers",
"splade",
"query-expansion",
"document-expansion",
"bag-of-words",
"passage-retrieval",
"knowledge-distillation",
"document encoder",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | naver | null | naver/efficient-splade-VI-BT-large-doc | 16 | null | transformers | 9,398 | ---
license: cc-by-nc-sa-4.0
language: "en"
tags:
- splade
- query-expansion
- document-expansion
- bag-of-words
- passage-retrieval
- knowledge-distillation
- document encoder
datasets:
- ms_marco
---
## Efficient SPLADE
Efficient SPLADE model for passage retrieval. This architecture uses two distinct models for query and document inference. This is the **doc** encoder; please also download the **query** one (https://huggingface.co/naver/efficient-splade-VI-BT-large-query). For additional details, please visit:
* paper: https://dl.acm.org/doi/10.1145/3477495.3531833
* code: https://github.com/naver/splade
| | MRR@10 (MS MARCO dev) | R@1000 (MS MARCO dev) | Latency (PISA) ms | Latency (Inference) ms
| --- | --- | --- | --- | --- |
| `naver/efficient-splade-V-large` | 38.8 | 98.0 | 29.0 | 45.3
| `naver/efficient-splade-VI-BT-large` | 38.0 | 97.8 | 31.1 | 0.7
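For reference, a minimal sketch of building a sparse document vector with this encoder, assuming the usual SPLADE pooling (log(1 + ReLU(logits)) max-pooled over tokens); see the code repository above for the exact inference routines:
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoint = "naver/efficient-splade-VI-BT-large-doc"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

doc = "Sparse retrieval expands documents into weighted vocabulary terms."  # illustrative text
inputs = tokenizer(doc, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

mask = inputs["attention_mask"].unsqueeze(-1)  # ignore padding positions
doc_rep = torch.max(torch.log1p(torch.relu(logits)) * mask, dim=1).values.squeeze(0)
print(doc_rep.nonzero().squeeze(-1).shape)  # number of active vocabulary dimensions
```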
## Citation
If you use our checkpoint, please cite our work:
```
@inproceedings{10.1145/3477495.3531833,
author = {Lassance, Carlos and Clinchant, St\'{e}phane},
title = {An Efficiency Study for SPLADE Models},
year = {2022},
isbn = {9781450387323},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3477495.3531833},
doi = {10.1145/3477495.3531833},
abstract = {Latency and efficiency issues are often overlooked when evaluating IR models based on Pretrained Language Models (PLMs) in reason of multiple hardware and software testing scenarios. Nevertheless, efficiency is an important part of such systems and should not be overlooked. In this paper, we focus on improving the efficiency of the SPLADE model since it has achieved state-of-the-art zero-shot performance and competitive results on TREC collections. SPLADE efficiency can be controlled via a regularization factor, but solely controlling this regularization has been shown to not be efficient enough. In order to reduce the latency gap between SPLADE and traditional retrieval systems, we propose several techniques including L1 regularization for queries, a separation of document/query encoders, a FLOPS-regularized middle-training, and the use of faster query encoders. Our benchmark demonstrates that we can drastically improve the efficiency of these models while increasing the performance metrics on in-domain data. To our knowledge, we propose the first neural models that, under the same computing constraints, achieve similar latency (less than 4ms difference) as traditional BM25, while having similar performance (less than 10% MRR@10 reduction) as the state-of-the-art single-stage neural rankers on in-domain data.},
booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {2220–2226},
numpages = {7},
keywords = {splade, latency, information retrieval, sparse representations},
location = {Madrid, Spain},
series = {SIGIR '22}
}
```
|
emilys/twitter-roberta-base-dec2021-WNUT | 130ab6a1404e58f517b6a76beaa309d2a8b771c4 | 2022-07-05T22:26:37.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:wnut_17",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | emilys | null | emilys/twitter-roberta-base-dec2021-WNUT | 16 | null | transformers | 9,399 | ---
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter-roberta-base-dec2021-WNUT
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.7111716621253406
- name: Recall
type: recall
value: 0.6244019138755981
- name: F1
type: f1
value: 0.664968152866242
- name: Accuracy
type: accuracy
value: 0.9642789042140724
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-dec2021-WNUT
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2152
- Precision: 0.7112
- Recall: 0.6244
- F1: 0.6650
- Accuracy: 0.9643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.46 | 25 | 0.2818 | 0.0982 | 0.0383 | 0.0551 | 0.9241 |
| No log | 0.93 | 50 | 0.2158 | 0.6181 | 0.4569 | 0.5254 | 0.9480 |
| No log | 1.39 | 75 | 0.1930 | 0.6682 | 0.5347 | 0.5940 | 0.9555 |
| No log | 1.85 | 100 | 0.1728 | 0.6583 | 0.5646 | 0.6079 | 0.9594 |
| No log | 2.31 | 125 | 0.1787 | 0.7050 | 0.5718 | 0.6314 | 0.9619 |
| No log | 2.78 | 150 | 0.2051 | 0.6979 | 0.5251 | 0.5993 | 0.9587 |
| No log | 3.24 | 175 | 0.1755 | 0.7172 | 0.5945 | 0.6501 | 0.9621 |
| No log | 3.7 | 200 | 0.1720 | 0.6943 | 0.6304 | 0.6608 | 0.9645 |
| No log | 4.17 | 225 | 0.1873 | 0.7203 | 0.6316 | 0.6730 | 0.9646 |
| No log | 4.63 | 250 | 0.1781 | 0.6934 | 0.6196 | 0.6545 | 0.9638 |
| No log | 5.09 | 275 | 0.1953 | 0.7040 | 0.6172 | 0.6577 | 0.9631 |
| No log | 5.56 | 300 | 0.1953 | 0.7223 | 0.6316 | 0.6739 | 0.9642 |
| No log | 6.02 | 325 | 0.1839 | 0.7008 | 0.6471 | 0.6729 | 0.9648 |
| No log | 6.48 | 350 | 0.1995 | 0.716 | 0.6423 | 0.6772 | 0.9650 |
| No log | 6.94 | 375 | 0.2056 | 0.7251 | 0.6184 | 0.6675 | 0.9640 |
| No log | 7.41 | 400 | 0.2044 | 0.7065 | 0.6220 | 0.6616 | 0.9640 |
| No log | 7.87 | 425 | 0.2042 | 0.7201 | 0.6400 | 0.6776 | 0.9650 |
| No log | 8.33 | 450 | 0.2247 | 0.7280 | 0.6244 | 0.6722 | 0.9638 |
| No log | 8.8 | 475 | 0.2060 | 0.7064 | 0.6447 | 0.6742 | 0.9649 |
| 0.0675 | 9.26 | 500 | 0.2152 | 0.7112 | 0.6244 | 0.6650 | 0.9643 |
| 0.0675 | 9.72 | 525 | 0.2086 | 0.7070 | 0.6495 | 0.6771 | 0.9650 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|