modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
reichenbach/wav2vec2-large-xls-r-300m-hi | a09b78a09f2201598d2611a4ac7f9e8aae39e432 | 2022-03-23T18:27:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | reichenbach | null | reichenbach/wav2vec2-large-xls-r-300m-hi | 8 | 1 | transformers | 13,200 | ---
license: apache-2.0
language:
- hi
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4749
- Wer: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
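For reference, these settings map roughly onto Hugging Face `TrainingArguments` as in the sketch below (not the author's actual training script; the output directory is a placeholder, and the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# Rough translation of the hyperparameters above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hi",
    learning_rate=7.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # gives a total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,                      # "Native AMP" mixed-precision training
)
```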
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.8626 | 4.76 | 400 | 3.6151 | 1.0 |
| 3.5463 | 9.52 | 800 | 3.5778 | 1.0 |
| 3.4415 | 14.28 | 1200 | 3.4525 | 1.0 |
| 3.0927 | 19.05 | 1600 | 2.6220 | 0.9860 |
| 2.0573 | 23.8 | 2000 | 2.3974 | 0.9610 |
| 1.5905 | 28.57 | 2400 | 2.4427 | 0.9558 |
| 1.426 | 33.33 | 2800 | 2.4736 | 0.9475 |
| 1.3147 | 38.09 | 3200 | 2.4494 | 0.9417 |
| 1.2642 | 42.85 | 3600 | 2.4665 | 0.9450 |
| 1.2289 | 47.62 | 4000 | 2.4749 | 0.9420 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
rg089/t5-headline-generation | 3066826aa28087b9c9b6b40a69b13fdbeaa15615 | 2021-11-27T19:22:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | rg089 | null | rg089/t5-headline-generation | 8 | null | transformers | 13,201 | Entry not found |
sammy786/wav2vec2-xlsr-lithuanian | 184be031ec84ea1c93ef0d2394f879c22533f411 | 2022-03-24T11:49:34.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | sammy786 | null | sammy786/wav2vec2-xlsr-lithuanian | 8 | null | transformers | 13,202 | ---
language:
- lt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- lt
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-lithuanian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: lt
metrics:
- name: Test WER
type: wer
value: 14.67
- name: Test CER
type: cer
value: 2.77
---
# sammy786/wav2vec2-xlsr-lithuanian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - lt dataset.
It achieves the following results on the evaluation set (10 percent of the train data merged with the other and dev sets):
- Loss: 13.1811
- Wer: 24.2570
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Lithuanian train.tsv, dev.tsv and other.tsv
## Training procedure
To create the training set, all available splits were concatenated and a 90-10 train-validation split was applied, as sketched below.
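A minimal sketch of this preprocessing with the `datasets` library (the split names, the seed, and the use of `concatenate_datasets` are assumptions, not the author's actual script):
```python
from datasets import load_dataset, concatenate_datasets

# Load the Common Voice 8.0 Lithuanian splits (split names assumed from the card).
cv = load_dataset("mozilla-foundation/common_voice_8_0", "lt", use_auth_token=True)
combined = concatenate_datasets([cv["train"], cv["validation"], cv["other"]])

# 90-10 train/validation split; the seed value (13) mirrors the training seed below.
splits = combined.train_test_split(test_size=0.1, seed=13)
train_ds, eval_ds = splits["train"], splits["test"]
```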
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:-----:|:-------------:|:---------------:|:--------:|
| 200 | 5.718700 | 2.897032 | 1.000000 |
| 400 | 1.340000 | 0.309548 | 0.507284 |
| 600 | 0.799100 | 0.220205 | 0.402098 |
| 800 | 0.494400 | 0.185093 | 0.352855 |
| 1000 | 0.370800 | 0.165869 | 0.334207 |
| 1200 | 0.312500 | 0.159801 | 0.324009 |
| 1400 | 0.276100 | 0.148066 | 0.321678 |
| 1600 | 0.250100 | 0.153748 | 0.311626 |
| 1800 | 0.226400 | 0.147437 | 0.302885 |
| 2000 | 0.206900 | 0.141176 | 0.296037 |
| 2200 | 0.189900 | 0.142161 | 0.288170 |
| 2400 | 0.192100 | 0.138029 | 0.286568 |
| 2600 | 0.175600 | 0.139496 | 0.283654 |
| 2800 | 0.156900 | 0.138609 | 0.283217 |
| 3000 | 0.149400 | 0.140468 | 0.281906 |
| 3200 | 0.144600 | 0.132472 | 0.278263 |
| 3400 | 0.144100 | 0.141028 | 0.277535 |
| 3600 | 0.133000 | 0.134287 | 0.275495 |
| 3800 | 0.126600 | 0.149136 | 0.277681 |
| 4000 | 0.123500 | 0.132180 | 0.266463 |
| 4200 | 0.113000 | 0.137942 | 0.268211 |
| 4400 | 0.111700 | 0.140038 | 0.272873 |
| 4600 | 0.108600 | 0.136756 | 0.264132 |
| 4800 | 0.103600 | 0.137541 | 0.263403 |
| 5000 | 0.098000 | 0.140435 | 0.264860 |
| 5200 | 0.095800 | 0.136950 | 0.262383 |
| 5400 | 0.094000 | 0.128214 | 0.263986 |
| 5600 | 0.085300 | 0.125024 | 0.259761 |
| 5800 | 0.078900 | 0.128575 | 0.260198 |
| 6000 | 0.083300 | 0.135496 | 0.258887 |
| 6200 | 0.078800 | 0.131706 | 0.259178 |
| 6400 | 0.073800 | 0.128451 | 0.255390 |
| 6600 | 0.072600 | 0.131245 | 0.252768 |
| 6800 | 0.073300 | 0.131525 | 0.249417 |
| 7000 | 0.069000 | 0.128627 | 0.255536 |
| 7200 | 0.064400 | 0.127767 | 0.250583 |
| 7400 | 0.065400 | 0.129557 | 0.247815 |
| 7600 | 0.061200 | 0.129734 | 0.250146 |
| 7800 | 0.059100 | 0.135124 | 0.249709 |
| 8000 | 0.057000 | 0.132850 | 0.249126 |
| 8200 | 0.056100 | 0.128827 | 0.248252 |
| 8400 | 0.056400 | 0.130229 | 0.246795 |
| 8600 | 0.052800 | 0.128939 | 0.245775 |
| 8800 | 0.051100 | 0.131892 | 0.248543 |
| 9000 | 0.052900 | 0.132062 | 0.244464 |
| 9200 | 0.048200 | 0.130988 | 0.244172 |
| 9400 | 0.047700 | 0.131811 | 0.242570 |
| 9600 | 0.050000 | 0.133832 | 0.245484 |
| 9800 | 0.047500 | 0.134340 | 0.243881 |
| 10000 | 0.048400 | 0.133388 | 0.243590 |
| 10200 | 0.047800 | 0.132729 | 0.244464 |
| 10400 | 0.049000 | 0.131695 | 0.245047 |
| 10600 | 0.044400 | 0.132154 | 0.245484 |
| 10800 | 0.050100 | 0.131575 | 0.245192 |
| 11000 | 0.047700 | 0.131211 | 0.245192 |
| 11200 | 0.046000 | 0.131293 | 0.245047 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-lithuanian --dataset mozilla-foundation/common_voice_8_0 --config lt --split test
``` |
sana-ngu/HaT5_augmentation | ccee857037f29e5163785a587bef64ba7f917e4e | 2022-05-20T06:18:54.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2202.05690",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sana-ngu | null | sana-ngu/HaT5_augmentation | 8 | null | transformers | 13,203 | ### HaT5(T5-base)
This is a fine-tuned model of T5 (base) on a hate speech detection dataset. It is intended to be used as a classification model for identifying tweets (0 - HOF (hate/offensive); 1 - NOT). The task prefix we used for the T5 model is 'classification: '.
More information about the original pre-trained model can be found [here](https://huggingface.co/t5-base).
Classification examples:
|Prediction|Tweet|
|-----|--------|
|0 |Why the fuck I got over 1000 views on my story 😂😂 nothing new over here |
|1 |first of all there is no vaccine to cure , whthr it is capsules, tablets, or injections, they just support to fight with d virus. I do not support people taking any kind of home remedies n making fun of an ayurvedic medicine..😐 |
# More Details
for more details about the datasets and eval results, see [our paper here](https://arxiv.org/abs/2202.05690)
# How to use
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
model = T5ForConditionalGeneration.from_pretrained("sana-ngu/HaT5_augmentation")
tokenizer = T5Tokenizer.from_pretrained("t5-base")
tokenizer.pad_token = tokenizer.eos_token
input_ids = tokenizer("Old lions in the wild lay down and die with dignity when they can't hunt anymore. If a government is having 'teething problems' handling aid supplies one full year into a pandemic, maybe it should take a cue and get the fuck out of the way? ", padding=True, truncation=True, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
pred = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(pred)
``` |
sarasarasara/sara-model | 58c5701270db618cf65e23dd75b189627605c0dc | 2021-08-06T10:57:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | sarasarasara | null | sarasarasara/sara-model | 8 | null | transformers | 13,204 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.984018301110458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9288
- Recall: 0.9374
- F1: 0.9331
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2399 | 1.0 | 878 | 0.0694 | 0.9126 | 0.9179 | 0.9152 | 0.9807 |
| 0.0522 | 2.0 | 1756 | 0.0604 | 0.9207 | 0.9342 | 0.9274 | 0.9833 |
| 0.0308 | 3.0 | 2634 | 0.0614 | 0.9288 | 0.9374 | 0.9331 | 0.9840 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
seduerr/splitter | 4eeeab3a2ee04b44efb195823c86d9b9a242c923 | 2021-06-02T14:56:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | seduerr | null | seduerr/splitter | 8 | null | transformers | 13,205 | Entry not found |
sivasankalpp/dpr-multidoc2dial-token-ctx-encoder | 666732a40649376732162401dbb944cbe818813e | 2021-11-10T20:14:46.000Z | [
"pytorch",
"dpr",
"transformers"
]
| null | false | sivasankalpp | null | sivasankalpp/dpr-multidoc2dial-token-ctx-encoder | 8 | null | transformers | 13,206 | Entry not found |
skplanet/dialog-koelectra-small-generator | 5145d382be0f91dc9858a35b57bdbd0c040adfca | 2021-04-13T01:15:45.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | skplanet | null | skplanet/dialog-koelectra-small-generator | 8 | null | transformers | 13,207 | # Dialog-KoELECTRA
Github : [https://github.com/skplanet/Dialog-KoELECTRA](https://github.com/skplanet/Dialog-KoELECTRA)
## Introduction
**Dialog-KoELECTRA** is a language model specialized for dialogue. It was trained on 22GB of colloquial and written-style Korean text. Dialog-KoELECTRA is based on the [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) model. ELECTRA is a method for self-supervised language representation learning that can pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU.
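As a quick sanity check, this generator checkpoint can be loaded as a fill-mask pipeline along the following lines (a minimal sketch, assuming the tokenizer's default mask token; the example sentence is made up):
```python
from transformers import pipeline

# Load the small generator checkpoint for masked-token prediction.
fill_mask = pipeline("fill-mask", model="skplanet/dialog-koelectra-small-generator")

# Predict the masked token in a short colloquial Korean sentence.
print(fill_mask(f"오늘 날씨가 정말 {fill_mask.tokenizer.mask_token}네요."))
```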
<br>
## Released Models
We are initially releasing small version pre-trained model.
The model was trained on Korean text. We hope to release other models, such as base/large models, in the future.
| Model | Layers | Hidden Size | Params | Max<br/>Seq Len | Learning<br/>Rate | Batch Size | Train Steps |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Dialog-KoELECTRA-Small | 12 | 256 | 14M | 128 | 1e-4 | 512 | 700K |
<br>
## Model Performance
Dialog-KoELECTRA shows strong performance in conversational downstream tasks.
| | **NSMC**<br/>(acc) | **Question Pair**<br/>(acc) | **Korean-Hate-Speech**<br/>(F1) | **Naver NER**<br/>(F1) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) |
| :--------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: |
| DistilKoBERT | 88.60 | 92.48 | 60.72 | 84.65 | 72.00 | 72.59 |
| **Dialog-KoELECTRA-Small** | **90.01** | **94.99** | **68.26** | **85.51** | **78.54** | **78.96** |
<br>
## Train Data
<table class="tg">
<thead>
<tr>
<th class="tg-c3ow"></th>
<th class="tg-c3ow">corpus name</th>
<th class="tg-c3ow">size</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-c3ow" rowspan="4">dialog</td>
<td class="tg-0pky"><a href="https://aihub.or.kr/aidata/85" target="_blank" rel="noopener noreferrer">Aihub Korean dialog corpus</a></td>
<td class="tg-c3ow" rowspan="4">7GB</td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Spoken corpus</a></td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/songys/Chatbot_data" target="_blank" rel="noopener noreferrer">Korean chatbot data</a></td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/Beomi/KcBERT" target="_blank" rel="noopener noreferrer">KcBERT</a></td>
</tr>
<tr>
<td class="tg-c3ow" rowspan="2">written</td>
<td class="tg-0pky"><a href="https://corpus.korean.go.kr/" target="_blank" rel="noopener noreferrer">NIKL Newspaper corpus</a></td>
<td class="tg-c3ow" rowspan="2">15GB</td>
</tr>
<tr>
<td class="tg-0pky"><a href="https://github.com/lovit/namuwikitext" target="_blank" rel="noopener noreferrer">namuwikitext</a></td>
</tr>
</tbody>
</table>
<br>
## Vocabulary
We applied morpheme analysis using [huggingface_konlpy](https://github.com/lovit/huggingface_konlpy) when creating the vocabulary.
In our experiments, this vocabulary performed better than one created without morpheme analysis.
<table>
<thead>
<tr>
<th>vocabulary size</th>
<th>unused token size</th>
<th>limit alphabet</th>
<th>min frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td>40,000</td>
<td>500</td>
<td>6,000</td>
<td>3</td>
</tr>
</tbody>
</table>
<br>
|
smangrul/xls-r-mr-model | 86f58486bd43e5b50c96662d0ccb43945f4bd64a | 2022-03-24T11:54:20.000Z | [
"pytorch",
"mr",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:openslr",
"dataset:shivam/marathi_samanantar_processed",
"dataset:shivam/marathi_pib_processed",
"dataset:opus100",
"dataset:tatoeba",
"dataset:tapaco",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"openslr",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | smangrul | null | smangrul/xls-r-mr-model | 8 | 1 | null | 13,208 | ---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- openslr
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
- openslr
- shivam/marathi_samanantar_processed
- shivam/marathi_pib_processed
- opus100
- tatoeba
- tapaco
model-index:
- name: wav2vec2-large-xls-r-300m-mr
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: mr
metrics:
- type: wer
value: 31.05
name: Test WER
- name: Test CER
type: cer
value: 6.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR and OPENSLR - SLR64 - MR datasets.
It achieves the following results on the evaluation set:
- Loss: 0.494580
- Wer: 0.401524
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM |
|---|---|
| 40.513437625350984 | 31.04693140794224 |
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|---|---|---|---|
| 400 | 3.794000 | 3.532227 | 1.000000 |
| 800 | 3.362400 | 3.359044 | 1.000000 |
| 1200 | 2.293900 | 1.011279 | 0.829924 |
| 1600 | 1.233000 | 0.502743 | 0.593662 |
| 2000 | 0.962600 | 0.412519 | 0.496992 |
| 2400 | 0.831800 | 0.402903 | 0.493783 |
| 2800 | 0.737000 | 0.389773 | 0.469314 |
| 3200 | 0.677100 | 0.373987 | 0.436021 |
| 3600 | 0.634400 | 0.383823 | 0.432010 |
| 4000 | 0.586000 | 0.375610 | 0.419575 |
| 4400 | 0.561000 | 0.387891 | 0.418371 |
| 4800 | 0.518500 | 0.386357 | 0.417569 |
| 5200 | 0.515300 | 0.415069 | 0.430004 |
| 5600 | 0.478100 | 0.399211 | 0.408744 |
| 6000 | 0.468100 | 0.424542 | 0.402327 |
| 6400 | 0.439400 | 0.430979 | 0.410750 |
| 6800 | 0.429600 | 0.427700 | 0.409146 |
| 7200 | 0.400300 | 0.451111 | 0.419976 |
| 7600 | 0.395100 | 0.463446 | 0.405134 |
| 8000 | 0.381800 | 0.454752 | 0.407942 |
| 8400 | 0.371500 | 0.461547 | 0.404733 |
| 8800 | 0.362500 | 0.461543 | 0.411151 |
| 9200 | 0.338200 | 0.468299 | 0.417168 |
| 9600 | 0.338800 | 0.480989 | 0.412355 |
| 10000 | 0.317600 | 0.475700 | 0.410750 |
| 10400 | 0.315100 | 0.478920 | 0.403530 |
| 10800 | 0.296200 | 0.480600 | 0.398315 |
| 11200 | 0.299000 | 0.477083 | 0.393502 |
| 11600 | 0.290000 | 0.465646 | 0.393903 |
| 12000 | 0.290900 | 0.490041 | 0.405937 |
| 12400 | 0.275600 | 0.489354 | 0.399519 |
| 12800 | 0.272600 | 0.494580 | 0.395909 |
| 13200 | 0.265900 | 0.497918 | 0.397112 |
| 13600 | 0.266300 | 0.498627 | 0.397513 |
| 14000 | 0.259600 | 0.504610 | 0.401524 |
#### Evaluation Commands
To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id smangrul/xls-r-mr-model --dataset mozilla-foundation/common_voice_8_0 --config mr --split test
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
soheeyang/dpr-question_encoder-single-trivia-base | 986b64d504b3389a30f876d0e248c35a50675939 | 2021-04-15T14:48:08.000Z | [
"pytorch",
"tf",
"dpr",
"feature-extraction",
"arxiv:2004.04906",
"transformers"
]
| feature-extraction | false | soheeyang | null | soheeyang/dpr-question_encoder-single-trivia-base | 8 | null | transformers | 13,209 | # DPRQuestionEncoder for TriviaQA
## dpr-question_encoder-single-trivia-base
Dense Passage Retrieval (`DPR`)
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih, [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906), EMNLP 2020.
This model is the question encoder of DPR trained solely on TriviaQA (single-trivia) using the [official implementation of DPR](https://github.com/facebookresearch/DPR).
Disclaimer: This model is not from the authors of DPR; it is my reproduction. The authors did not release DPR weights trained solely on TriviaQA. I hope this checkpoint is helpful for those who want to use DPR trained only on TriviaQA.
## Performance
The following is the answer recall rate measured using PyTorch 1.4.0 and transformers 4.5.0.
The values in parentheses are those reported in the paper.
| Top-K Passages | TriviaQA Dev | TriviaQA Test |
|----------------|--------------|---------------|
| 1 | 54.27 | 54.41 |
| 5 | 71.11 | 70.99 |
| 20 | 79.53 | 79.31 (79.4) |
| 50 | 82.72 | 82.99 |
| 100 | 85.07 | 84.99 (85.0) |
## How to Use
`AutoModel` does not properly detect whether a checkpoint is a `DPRContextEncoder` or a `DPRQuestionEncoder`.
Therefore, please specify the exact class when loading the model.
```python
from transformers import DPRQuestionEncoder, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("soheeyang/dpr-question_encoder-single-trivia-base")
question_encoder = DPRQuestionEncoder.from_pretrained("soheeyang/dpr-question_encoder-single-trivia-base")
data = tokenizer("question comes here", return_tensors="pt")
question_embedding = question_encoder(**data).pooler_output # embedding vector for question
```
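For retrieval, the question embedding is typically scored against passage embeddings produced by the matching context encoder via dot-product similarity. A minimal sketch is shown below; the context-encoder checkpoint name is an assumption (verify it exists), and `question_embedding` comes from the snippet above:
```python
import torch
from transformers import DPRContextEncoder, AutoTokenizer

# Hypothetical matching context-encoder checkpoint -- verify before use.
ctx_name = "soheeyang/dpr-ctx_encoder-single-trivia-base"
ctx_tokenizer = AutoTokenizer.from_pretrained(ctx_name)
ctx_encoder = DPRContextEncoder.from_pretrained(ctx_name)

passages = ["First candidate passage comes here.", "Second candidate passage comes here."]
ctx_inputs = ctx_tokenizer(passages, padding=True, truncation=True, return_tensors="pt")
passage_embeddings = ctx_encoder(**ctx_inputs).pooler_output  # (num_passages, hidden_size)

# Rank passages by dot-product similarity with the question embedding.
scores = torch.matmul(question_embedding, passage_embeddings.T)  # (1, num_passages)
print(scores.argmax(dim=-1))  # index of the best-scoring passage
```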
|
speech-seq2seq/wav2vec2-2-gpt2-medium | 1bab072c90b992c640bb6bb111eb0330f70a8f8e | 2022-02-11T22:26:54.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | speech-seq2seq | null | speech-seq2seq/wav2vec2-2-gpt2-medium | 8 | null | transformers | 13,210 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5264
- Wer: 1.7073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4032 | 0.28 | 500 | 4.6724 | 1.9406 |
| 4.6417 | 0.56 | 1000 | 4.7143 | 1.8874 |
| 4.5725 | 0.84 | 1500 | 4.6413 | 1.9451 |
| 4.0178 | 1.12 | 2000 | 4.5470 | 1.8861 |
| 3.9084 | 1.4 | 2500 | 4.4360 | 1.8881 |
| 3.9297 | 1.68 | 3000 | 4.2814 | 1.8652 |
| 3.707 | 1.96 | 3500 | 4.1035 | 1.8320 |
| 3.1373 | 2.24 | 4000 | 3.9557 | 1.7762 |
| 3.3152 | 2.52 | 4500 | 3.7737 | 1.7454 |
| 2.9501 | 2.8 | 5000 | 3.5264 | 1.7073 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
srosy/distilbert-base-uncased-finetuned-emotion | fc2b2284930d08064e36b20c27a9789076c2f37a | 2022-02-13T09:39:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | srosy | null | srosy/distilbert-base-uncased-finetuned-emotion | 8 | 1 | transformers | 13,211 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.939
- name: F1
type: f1
value: 0.9391566069722169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.939
- F1: 0.9392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4977 | 1.0 | 1000 | 0.1919 | 0.9255 | 0.9253 |
| 0.1545 | 2.0 | 2000 | 0.1582 | 0.939 | 0.9392 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
staceythompson/autonlp-myclassification-fortext-16332728 | d7791adf82caf89c77576ee0a3f0c058ccec302f | 2021-10-10T00:24:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:staceythompson/autonlp-data-myclassification-fortext",
"transformers",
"autonlp"
]
| text-classification | false | staceythompson | null | staceythompson/autonlp-myclassification-fortext-16332728 | 8 | null | transformers | 13,212 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- staceythompson/autonlp-data-myclassification-fortext
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 16332728
## Validation Metrics
- Loss: 0.08077391237020493
- Accuracy: 0.9846153846153847
- Macro F1: 0.9900793650793651
- Micro F1: 0.9846153846153847
- Weighted F1: 0.9846153846153847
- Macro Precision: 0.9900793650793651
- Micro Precision: 0.9846153846153847
- Weighted Precision: 0.9846153846153847
- Macro Recall: 0.9900793650793651
- Micro Recall: 0.9846153846153847
- Weighted Recall: 0.9846153846153847
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/staceythompson/autonlp-myclassification-fortext-16332728
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("staceythompson/autonlp-myclassification-fortext-16332728", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("staceythompson/autonlp-myclassification-fortext-16332728", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
stefan-jo/bert-finetuned-ner | 1a8ba7cff2c8c957fb18775b138480f4bbb61af9 | 2022-01-02T13:21:28.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | stefan-jo | null | stefan-jo/bert-finetuned-ner | 8 | null | transformers | 13,213 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9378727634194831
- name: Recall
type: recall
value: 0.9527095254123191
- name: F1
type: f1
value: 0.9452329270328937
- name: Accuracy
type: accuracy
value: 0.9866515570730559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.9379
- Recall: 0.9527
- F1: 0.9452
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.088 | 1.0 | 1756 | 0.0625 | 0.9203 | 0.9399 | 0.9300 | 0.9835 |
| 0.0383 | 2.0 | 3512 | 0.0614 | 0.9348 | 0.9460 | 0.9404 | 0.9858 |
| 0.0209 | 3.0 | 5268 | 0.0619 | 0.9379 | 0.9527 | 0.9452 | 0.9867 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
superspray/electra_large_discriminator_squad2_custom_dataset | aa57c6264af27486fbf1462c3895f716a25daf1f | 2021-02-20T07:00:12.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | superspray | null | superspray/electra_large_discriminator_squad2_custom_dataset | 8 | null | transformers | 13,214 | # Question & Answering Model for 'Save Your Minutes' from Dobby-AI
Electra_Large Discriminator fine-tuned on SQuAD2.0 and a custom QA dataset
This model is [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512/blob/main/README.md)
further trained on an additional custom dataset as follows:
```
!python3 run_squad.py --model_type electra \
--model_name_or_path /content/electra_large_512 \
--do_lower_case \
--output_dir /content/model/\
--do_train \
--train_file $data_dir/additional_qa.json\
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--max_grad_norm 0.5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--threads 12 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--per_gpu_train_batch_size 4
```
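For inference, the fine-tuned checkpoint can be used through a standard question-answering pipeline (a minimal sketch; the example question and context below are made up):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="superspray/electra_large_discriminator_squad2_custom_dataset",
)

# Hypothetical meeting-minutes snippet; replace with your own context.
context = "The budget review meeting is scheduled for Friday at 10 am with the finance team."
print(qa(question="When is the budget review meeting?", context=context))
```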
We used Google Colab to train the model. |
tals/albert-xlarge-vitaminc | e5b932a3960d8c5a0fda21c80ac7c4fc5dbd4553 | 2022-06-22T23:55:28.000Z | [
"pytorch",
"albert",
"text-classification",
"python",
"dataset:fever",
"dataset:glue",
"dataset:tals/vitaminc",
"transformers"
]
| text-classification | false | tals | null | tals/albert-xlarge-vitaminc | 8 | null | transformers | 13,215 | ---
language: python
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 21`).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
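A minimal usage sketch for scoring a claim against a piece of evidence is shown below; the pairing order of claim and evidence and the label ordering are assumptions, so check the VitaminC repository for the exact convention:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "tals/albert-xlarge-vitaminc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

claim = "The festival was attended by more than 10,000 people."
evidence = "Organizers reported an attendance of roughly 12,500 at this year's festival."

# Pairing order (claim, evidence) is an assumption; verify against the VitaminC repo.
inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the SUPPORTS / REFUTES / NEI classes (label order assumed)
```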
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
textattack/xlnet-base-cased-CoLA | 00b24d3c31db3a9bae2a03372b73ae5aa4bd7f70 | 2020-07-06T16:29:34.000Z | [
"pytorch",
"xlnet",
"text-generation",
"transformers"
]
| text-generation | false | textattack | null | textattack/xlnet-base-cased-CoLA | 8 | null | transformers | 13,216 | ## TextAttack Model Cardfor 5 epochs with a batch size of 32, a learning
rate of 3e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.7976989453499521, as measured by the
eval set accuracy, found after 2 epochs.
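A minimal usage sketch, assuming the checkpoint carries a sequence-classification head for CoLA (grammatical acceptability); the example sentence and label interpretation are not from the original card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "textattack/xlnet-base-cased-CoLA"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("The book was read by the student.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # CoLA is binary: unacceptable vs. acceptable (label order assumed)
```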
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
tiennvcs/layoutlmv2-base-uncased-finetuned-vi-infovqa | 28b049c3ec157b0890bafb74f332690553358531 | 2021-12-27T14:23:33.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | tiennvcs | null | tiennvcs/layoutlmv2-base-uncased-finetuned-vi-infovqa | 8 | null | transformers | 13,217 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.33 | 100 | 5.3461 |
| No log | 0.66 | 200 | 4.9734 |
| No log | 0.99 | 300 | 4.6074 |
| No log | 1.32 | 400 | 4.4548 |
| 4.6355 | 1.65 | 500 | 4.3831 |
| 4.6355 | 1.98 | 600 | 4.3332 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.0+cu101
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small | e86214cd250a151647e533c25ce3d8c2e79e1471 | 2022-01-30T17:23:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | tomascufaro | null | tomascufaro/wav2vec2-large-xls-r-300m-spanish-small | 8 | null | transformers | 13,218 | ---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-small
This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3763
- Wer: 0.1791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2277 | 0.26 | 400 | 0.2601 | 0.2291 |
| 0.2932 | 0.53 | 800 | 0.2950 | 0.2670 |
| 0.3019 | 0.79 | 1200 | 0.3247 | 0.2766 |
| 0.2987 | 1.05 | 1600 | 0.3031 | 0.2606 |
| 0.261 | 1.32 | 2000 | 0.2994 | 0.2620 |
| 0.2651 | 1.58 | 2400 | 0.3134 | 0.2700 |
| 0.264 | 1.85 | 2800 | 0.3016 | 0.2641 |
| 0.2475 | 2.11 | 3200 | 0.3135 | 0.2661 |
| 0.2269 | 2.37 | 3600 | 0.3029 | 0.2562 |
| 0.2389 | 2.64 | 4000 | 0.3035 | 0.2549 |
| 0.2319 | 2.9 | 4400 | 0.3022 | 0.2551 |
| 0.2123 | 3.16 | 4800 | 0.3256 | 0.2638 |
| 0.2094 | 3.43 | 5200 | 0.3227 | 0.2712 |
| 0.2121 | 3.69 | 5600 | 0.3085 | 0.2596 |
| 0.207 | 3.96 | 6000 | 0.3041 | 0.2597 |
| 0.1809 | 4.22 | 6400 | 0.3122 | 0.2524 |
| 0.1846 | 4.48 | 6800 | 0.3254 | 0.2579 |
| 0.1885 | 4.75 | 7200 | 0.2958 | 0.2437 |
| 0.1923 | 5.01 | 7600 | 0.3136 | 0.2502 |
| 0.1626 | 5.27 | 8000 | 0.3059 | 0.2488 |
| 0.1704 | 5.54 | 8400 | 0.3082 | 0.2515 |
| 0.1674 | 5.8 | 8800 | 0.3196 | 0.2509 |
| 0.1691 | 6.06 | 9200 | 0.3193 | 0.25 |
| 0.1499 | 6.33 | 9600 | 0.3529 | 0.2635 |
| 0.1568 | 6.59 | 10000 | 0.3241 | 0.2481 |
| 0.1538 | 6.86 | 10400 | 0.3354 | 0.2476 |
| 0.1503 | 7.12 | 10800 | 0.3180 | 0.2402 |
| 0.136 | 7.38 | 11200 | 0.3230 | 0.2397 |
| 0.1413 | 7.65 | 11600 | 0.3178 | 0.2451 |
| 0.147 | 7.91 | 12000 | 0.3170 | 0.2389 |
| 0.1341 | 8.17 | 12400 | 0.3380 | 0.2501 |
| 0.1329 | 8.44 | 12800 | 0.3265 | 0.2414 |
| 0.1314 | 8.7 | 13200 | 0.3281 | 0.2482 |
| 0.1312 | 8.97 | 13600 | 0.3259 | 0.2539 |
| 0.12 | 9.23 | 14000 | 0.3291 | 0.2424 |
| 0.1193 | 9.49 | 14400 | 0.3302 | 0.2412 |
| 0.1189 | 9.76 | 14800 | 0.3376 | 0.2407 |
| 0.1217 | 10.02 | 15200 | 0.3334 | 0.2400 |
| 0.1118 | 10.28 | 15600 | 0.3359 | 0.2368 |
| 0.1139 | 10.55 | 16000 | 0.3239 | 0.2335 |
| 0.1106 | 10.81 | 16400 | 0.3374 | 0.2352 |
| 0.1081 | 11.07 | 16800 | 0.3585 | 0.2434 |
| 0.1063 | 11.34 | 17200 | 0.3639 | 0.2472 |
| 0.1041 | 11.6 | 17600 | 0.3399 | 0.2423 |
| 0.1062 | 11.87 | 18000 | 0.3410 | 0.2388 |
| 0.1012 | 12.13 | 18400 | 0.3597 | 0.2413 |
| 0.0953 | 12.39 | 18800 | 0.3440 | 0.2296 |
| 0.097 | 12.66 | 19200 | 0.3440 | 0.2269 |
| 0.0968 | 12.92 | 19600 | 0.3498 | 0.2333 |
| 0.0902 | 13.18 | 20000 | 0.3471 | 0.2290 |
| 0.0868 | 13.45 | 20400 | 0.3462 | 0.2266 |
| 0.0892 | 13.71 | 20800 | 0.3373 | 0.2227 |
| 0.0902 | 13.97 | 21200 | 0.3377 | 0.2240 |
| 0.0846 | 14.24 | 21600 | 0.3484 | 0.2237 |
| 0.0839 | 14.5 | 22000 | 0.3706 | 0.2260 |
| 0.0834 | 14.77 | 22400 | 0.3430 | 0.2268 |
| 0.0841 | 15.03 | 22800 | 0.3489 | 0.2259 |
| 0.076 | 15.29 | 23200 | 0.3626 | 0.2281 |
| 0.0771 | 15.56 | 23600 | 0.3624 | 0.2268 |
| 0.0773 | 15.82 | 24000 | 0.3440 | 0.2252 |
| 0.0759 | 16.08 | 24400 | 0.3532 | 0.2170 |
| 0.0745 | 16.35 | 24800 | 0.3686 | 0.2188 |
| 0.0713 | 16.61 | 25200 | 0.3691 | 0.2195 |
| 0.0718 | 16.88 | 25600 | 0.3470 | 0.2108 |
| 0.0685 | 17.14 | 26000 | 0.3756 | 0.2179 |
| 0.0689 | 17.4 | 26400 | 0.3542 | 0.2149 |
| 0.0671 | 17.67 | 26800 | 0.3461 | 0.2165 |
| 0.0737 | 17.93 | 27200 | 0.3473 | 0.2238 |
| 0.0669 | 18.19 | 27600 | 0.3441 | 0.2138 |
| 0.0629 | 18.46 | 28000 | 0.3721 | 0.2155 |
| 0.0632 | 18.72 | 28400 | 0.3667 | 0.2126 |
| 0.0647 | 18.98 | 28800 | 0.3579 | 0.2097 |
| 0.0603 | 19.25 | 29200 | 0.3670 | 0.2130 |
| 0.0604 | 19.51 | 29600 | 0.3750 | 0.2142 |
| 0.0619 | 19.78 | 30000 | 0.3804 | 0.2160 |
| 0.0603 | 20.04 | 30400 | 0.3764 | 0.2124 |
| 0.0577 | 20.3 | 30800 | 0.3858 | 0.2097 |
| 0.0583 | 20.57 | 31200 | 0.3520 | 0.2089 |
| 0.0561 | 20.83 | 31600 | 0.3615 | 0.2079 |
| 0.0545 | 21.09 | 32000 | 0.3824 | 0.2032 |
| 0.0525 | 21.36 | 32400 | 0.3858 | 0.2091 |
| 0.0524 | 21.62 | 32800 | 0.3956 | 0.2099 |
| 0.0527 | 21.89 | 33200 | 0.3667 | 0.2025 |
| 0.0514 | 22.15 | 33600 | 0.3708 | 0.2032 |
| 0.0506 | 22.41 | 34000 | 0.3815 | 0.2053 |
| 0.0478 | 22.68 | 34400 | 0.3671 | 0.2007 |
| 0.049 | 22.94 | 34800 | 0.3758 | 0.2003 |
| 0.0477 | 23.2 | 35200 | 0.3786 | 0.2014 |
| 0.045 | 23.47 | 35600 | 0.3732 | 0.1998 |
| 0.0426 | 23.73 | 36000 | 0.3737 | 0.2010 |
| 0.0444 | 23.99 | 36400 | 0.3600 | 0.1990 |
| 0.0433 | 24.26 | 36800 | 0.3689 | 0.1976 |
| 0.0442 | 24.52 | 37200 | 0.3787 | 0.1968 |
| 0.0419 | 24.79 | 37600 | 0.3652 | 0.1961 |
| 0.042 | 25.05 | 38000 | 0.3820 | 0.1964 |
| 0.0419 | 25.31 | 38400 | 0.3786 | 0.1919 |
| 0.0376 | 25.58 | 38800 | 0.3842 | 0.1934 |
| 0.0385 | 25.84 | 39200 | 0.3767 | 0.1900 |
| 0.0396 | 26.1 | 39600 | 0.3688 | 0.1888 |
| 0.0371 | 26.37 | 40000 | 0.3815 | 0.1894 |
| 0.0363 | 26.63 | 40400 | 0.3748 | 0.1878 |
| 0.0377 | 26.9 | 40800 | 0.3713 | 0.1852 |
| 0.0352 | 27.16 | 41200 | 0.3734 | 0.1851 |
| 0.0355 | 27.42 | 41600 | 0.3776 | 0.1874 |
| 0.0333 | 27.69 | 42000 | 0.3867 | 0.1841 |
| 0.0348 | 27.95 | 42400 | 0.3823 | 0.1839 |
| 0.0329 | 28.21 | 42800 | 0.3795 | 0.1822 |
| 0.0325 | 28.48 | 43200 | 0.3711 | 0.1813 |
| 0.0328 | 28.74 | 43600 | 0.3721 | 0.1781 |
| 0.0312 | 29.0 | 44000 | 0.3803 | 0.1816 |
| 0.0318 | 29.27 | 44400 | 0.3758 | 0.1794 |
| 0.0302 | 29.53 | 44800 | 0.3792 | 0.1784 |
| 0.0339 | 29.8 | 45200 | 0.3763 | 0.1791 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
ts1829/obama_gpt2 | 0bf57333d76537a8d26298616a41d282a036a52a | 2021-05-23T13:13:35.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ts1829 | null | ts1829/obama_gpt2 | 8 | null | transformers | 13,219 | Entry not found |
ts1829/trump_gpt2 | 1d8c4e6bfa6485acf7ba13f4fb52cf24228e72fd | 2021-05-23T13:14:40.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | ts1829 | null | ts1829/trump_gpt2 | 8 | null | transformers | 13,220 | Entry not found |
turtlesoupy/forward-dictionary-model-v1 | c89dedbfec2416219b03a30620e0121574a3ff90 | 2021-05-23T13:15:50.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | turtlesoupy | null | turtlesoupy/forward-dictionary-model-v1 | 8 | null | transformers | 13,221 | Entry not found |
uclanlp/plbart-single_task-python_en | 5d744cd2681dd955218f2e54ff807e927276445d | 2022-03-02T06:58:51.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | uclanlp | null | uclanlp/plbart-single_task-python_en | 8 | null | transformers | 13,222 | Entry not found |
victen/distilbert-base-uncased-finetuned-emotion | 4b2b1152bb4a28976c7f97ba5751492e00f502cb | 2022-02-07T10:42:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | victen | null | victen/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,223 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236951195245434
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2265
- Accuracy: 0.9235
- F1: 0.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8243 | 1.0 | 250 | 0.3199 | 0.906 | 0.9025 |
| 0.2484 | 2.0 | 500 | 0.2265 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
victor/autonlp-imdb-reviews-sentiment-329982 | 3cb89ea1c0fbe65d722e8cd912f6c79212d3178e | 2021-07-06T19:26:32.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:victor/autonlp-data-imdb-reviews-sentiment",
"transformers",
"autonlp"
]
| text-classification | false | victor | null | victor/autonlp-imdb-reviews-sentiment-329982 | 8 | null | transformers | 13,224 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- victor/autonlp-data-imdb-reviews-sentiment
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 329982
## Validation Metrics
- Loss: 0.24620144069194794
- Accuracy: 0.9300053431035799
- Precision: 0.9299029425358188
- Recall: 0.9289012003693444
- AUC: 0.9795001637755057
- F1: 0.9294018015243667
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/victor/autonlp-imdb-reviews-sentiment-329982
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("victor/autonlp-imdb-reviews-sentiment-329982", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("victor/autonlp-imdb-reviews-sentiment-329982", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
victorswedspot/DialoGPT-small-gandalf | 05c70b93ffe7fb5e26084f0c1dc9a4f66537dc8c | 2021-08-30T12:11:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | victorswedspot | null | victorswedspot/DialoGPT-small-gandalf | 8 | null | transformers | 13,225 | ---
tags:
- conversational
---
# Gandalf DialoGPT model |
vidhur2k/mBERT-Hindi-Mono | 5230fc32f14bb9522c23d68c5e845da67f755d0a | 2021-12-04T03:59:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | vidhur2k | null | vidhur2k/mBERT-Hindi-Mono | 8 | null | transformers | 13,226 | Entry not found |
vkhangpham/shopee-ner | 18627bfcf4243e69c4efda61fc9f16f798df1956 | 2022-01-27T19:15:22.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | vkhangpham | null | vkhangpham/shopee-ner | 8 | null | transformers | 13,227 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: shopee-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shopee-ner
This model is a fine-tuned version of [cahya/xlm-roberta-base-indonesian-NER](https://huggingface.co/cahya/xlm-roberta-base-indonesian-NER) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2046
- Precision: 0.7666
- Recall: 0.8666
- F1: 0.8135
- Accuracy: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2282 | 1.0 | 33750 | 0.2174 | 0.7443 | 0.8506 | 0.7939 | 0.9253 |
| 0.1983 | 2.0 | 67500 | 0.2046 | 0.7666 | 0.8666 | 0.8135 | 0.9320 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
vladenisov/sports-antihate | 1a41542ead1523d5bcde2e80a50cc9f73d24ebf2 | 2022-02-15T20:49:13.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | vladenisov | null | vladenisov/sports-antihate | 8 | null | transformers | 13,228 | Entry not found |
w11wo/indonesian-roberta-base-indonli | a84d2153e606ea6038dbce9f4402c222e40fa5c6 | 2021-11-11T09:00:12.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"id",
"dataset:indonli",
"arxiv:1907.11692",
"arxiv:2110.14566",
"transformers",
"indonesian-roberta-base-indonli",
"license:mit"
]
| text-classification | false | w11wo | null | w11wo/indonesian-roberta-base-indonli | 8 | null | transformers | 13,229 | ---
language: id
tags:
- indonesian-roberta-base-indonli
license: mit
datasets:
- indonli
widget:
- text: "Andi tersenyum karena mendapat hasil baik. </s></s> Andi sedih."
---
## Indonesian RoBERTa Base IndoNLI
Indonesian RoBERTa Base IndoNLI is a natural language inference (NLI) model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which was then fine-tuned on [`IndoNLI`](https://github.com/ir-nlp-csui/indonli)'s dataset consisting of Indonesian Wikipedia, news, and Web articles [1].
After training, the model achieved an evaluation/dev accuracy of 77.06%. On the benchmark `test_lay` subset, the model achieved an accuracy of 74.24% and on the benchmark `test_expert` subset, the model achieved an accuracy of 61.66%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| --------------------------------- | ------- | ------------ | ------------------------------- |
| `indonesian-roberta-base-indonli` | 124M | RoBERTa Base | `IndoNLI` |
## Evaluation Results
The model was trained for 5 epochs, with a batch size of 16, a learning rate of 2e-5, a weight decay of 0.1, and a warmup ratio of 0.2, with linear annealing to 0. The best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy |
| ----- | ------------- | --------------- | -------- |
| 1 | 0.989200 | 0.691663 | 0.731452 |
| 2 | 0.673000 | 0.621913 | 0.766045 |
| 3 | 0.449900 | 0.662543 | 0.770596 |
| 4 | 0.293600 | 0.777059 | 0.768320 |
| 5 | 0.194200 | 0.948068 | 0.764224 |
## How to Use
### As NLI Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/indonesian-roberta-base-indonli"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Andi tersenyum karena mendapat hasil baik. </s></s> Andi sedih.")
```
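Equivalently, the premise and hypothesis can be passed to the tokenizer as a sentence pair, which inserts the separator tokens automatically instead of hard-coding `</s></s>` (a minimal sketch; the label names come from whatever the checkpoint's config defines):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "w11wo/indonesian-roberta-base-indonli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "Andi tersenyum karena mendapat hasil baik."
hypothesis = "Andi sedih."

# The tokenizer adds the RoBERTa separator tokens between the pair itself.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label mapping defined by the checkpoint's config
```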
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `IndoNLI` dataset that may be carried over into the results of this model.
## References
[1] Mahendra, R., Aji, A. F., Louvan, S., Rahman, F., & Vania, C. (2021, November). [IndoNLI: A Natural Language Inference Dataset for Indonesian](https://arxiv.org/abs/2110.14566). _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics.
## Author
Indonesian RoBERTa Base IndoNLI was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
|
w11wo/javanese-gpt2-small-imdb | 05d4ddb29aaa40ddd955c4803cc9587603689c22 | 2022-02-14T16:19:56.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"jv",
"dataset:w11wo/imdb-javanese",
"transformers",
"javanese-gpt2-small-imdb",
"license:mit"
]
| text-generation | false | w11wo | null | w11wo/javanese-gpt2-small-imdb | 8 | null | transformers | 13,230 | ---
language: jv
tags:
- javanese-gpt2-small-imdb
license: mit
datasets:
- w11wo/imdb-javanese
widget:
- text: "Train to Busan yaiku film sing digawe ing Korea Selatan"
---
## Javanese GPT-2 Small IMDB
Javanese GPT-2 Small IMDB is a causal language model based on the [GPT-2 model](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). It was trained on Javanese IMDB movie reviews.
The model was originally the pretrained [Javanese GPT-2 Small model](https://huggingface.co/w11wo/javanese-gpt2-small) and was later fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 60.54 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|----------------------------|----------|-----------------|---------------------------------|
| `javanese-gpt2-small-imdb` | 124M | GPT-2 Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|------------|
| 4.135 | 4.103 | 60.54 | 6:22:40 |
## How to Use (PyTorch)
### As Causal Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-gpt2-small-imdb"
nlp = pipeline(
"text-generation",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Jenengku Budi, saka Indonesia")
```
### Feature Extraction in PyTorch
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
pretrained_name = "w11wo/javanese-gpt2-small-imdb"
model = GPT2LMHeadModel.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases which come from the IMDB reviews, as they may be carried over into the results of this model.
## Author
Javanese GPT-2 Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
yseop/FNP_T5_D2T_complete | 4ff17fd64712408c87dd98c820285c0af23b15dd | 2021-09-06T20:54:21.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | yseop | null | yseop/FNP_T5_D2T_complete | 8 | null | transformers | 13,231 | # T5-base data to text model specialized for Finance NLG
__complete version__
----
## Usage (HuggingFace Transformers)
#### Call the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("yseop/FNP_T5_D2T_complete")
model = AutoModelForSeq2SeqLM.from_pretrained("yseop/FNP_T5_D2T_complete")
text = ["Group profit | valIs | € 115.7 million && € 115.7 million | dTime | in 2019"]
```
#### Choose a generation method
```python
input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt")
p = 0.82
k = 90
outputs = model.generate(input_ids,
do_sample=True,
top_p=p,
top_k=k,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
```python
input_ids = tokenizer.encode(": {}".format(text), return_tensors="pt")
outputs = model.generate(input_ids,
max_length=200,
num_beams=2, repetition_penalty=2.5,
top_k=50, top_p=0.98,
length_penalty=1.0,
early_stopping=True)
print(tokenizer.decode(outputs[0]))
```
**Created by:** [Yseop](https://www.yseop.com/) | Pioneer in Natural Language Generation (NLG) technology. Scaling human expertise through Natural Language Generation. |
zhuqing/roberta-base-uncased-AutoModelWithLMHeadnetmums-classification | d2036711c45ca1d9134ab3bd27abb7df6c721563 | 2021-08-21T07:06:09.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | zhuqing | null | zhuqing/roberta-base-uncased-AutoModelWithLMHeadnetmums-classification | 8 | null | transformers | 13,232 | Entry not found |
wietsedv/xlm-roberta-base-ft-udpos28-af | 88e4cbf6a20fa1f1742346005adab2a67294a2ed | 2022-02-25T09:58:01.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"af",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-af | 8 | null | transformers | 13,233 |
---
language:
- af
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-af
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 85.8
- type: accuracy
name: Dutch Test accuracy
value: 83.7
- type: accuracy
name: German Test accuracy
value: 83.6
- type: accuracy
name: Italian Test accuracy
value: 84.4
- type: accuracy
name: French Test accuracy
value: 83.1
- type: accuracy
name: Spanish Test accuracy
value: 86.7
- type: accuracy
name: Russian Test accuracy
value: 86.4
- type: accuracy
name: Swedish Test accuracy
value: 87.7
- type: accuracy
name: Norwegian Test accuracy
value: 81.3
- type: accuracy
name: Danish Test accuracy
value: 86.8
- type: accuracy
name: Low Saxon Test accuracy
value: 62.5
- type: accuracy
name: Akkadian Test accuracy
value: 28.6
- type: accuracy
name: Armenian Test accuracy
value: 82.7
- type: accuracy
name: Welsh Test accuracy
value: 70.3
- type: accuracy
name: Old East Slavic Test accuracy
value: 72.5
- type: accuracy
name: Albanian Test accuracy
value: 79.4
- type: accuracy
name: Slovenian Test accuracy
value: 76.6
- type: accuracy
name: Guajajara Test accuracy
value: 23.2
- type: accuracy
name: Kurmanji Test accuracy
value: 74.7
- type: accuracy
name: Turkish Test accuracy
value: 72.8
- type: accuracy
name: Finnish Test accuracy
value: 83.9
- type: accuracy
name: Indonesian Test accuracy
value: 79.5
- type: accuracy
name: Ukrainian Test accuracy
value: 84.0
- type: accuracy
name: Polish Test accuracy
value: 85.6
- type: accuracy
name: Portuguese Test accuracy
value: 85.5
- type: accuracy
name: Kazakh Test accuracy
value: 77.5
- type: accuracy
name: Latin Test accuracy
value: 76.2
- type: accuracy
name: Old French Test accuracy
value: 58.4
- type: accuracy
name: Buryat Test accuracy
value: 59.7
- type: accuracy
name: Kaapor Test accuracy
value: 23.8
- type: accuracy
name: Korean Test accuracy
value: 59.4
- type: accuracy
name: Estonian Test accuracy
value: 86.7
- type: accuracy
name: Croatian Test accuracy
value: 86.4
- type: accuracy
name: Gothic Test accuracy
value: 20.7
- type: accuracy
name: Swiss German Test accuracy
value: 55.5
- type: accuracy
name: Assyrian Test accuracy
value: 17.2
- type: accuracy
name: North Sami Test accuracy
value: 38.8
- type: accuracy
name: Naija Test accuracy
value: 39.3
- type: accuracy
name: Latvian Test accuracy
value: 83.0
- type: accuracy
name: Chinese Test accuracy
value: 49.8
- type: accuracy
name: Tagalog Test accuracy
value: 71.7
- type: accuracy
name: Bambara Test accuracy
value: 29.9
- type: accuracy
name: Lithuanian Test accuracy
value: 82.8
- type: accuracy
name: Galician Test accuracy
value: 83.6
- type: accuracy
name: Vietnamese Test accuracy
value: 60.3
- type: accuracy
name: Greek Test accuracy
value: 83.3
- type: accuracy
name: Catalan Test accuracy
value: 86.1
- type: accuracy
name: Czech Test accuracy
value: 85.1
- type: accuracy
name: Erzya Test accuracy
value: 43.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 50.1
- type: accuracy
name: Thai Test accuracy
value: 62.5
- type: accuracy
name: Marathi Test accuracy
value: 87.1
- type: accuracy
name: Basque Test accuracy
value: 76.2
- type: accuracy
name: Slovak Test accuracy
value: 84.8
- type: accuracy
name: Kiche Test accuracy
value: 34.1
- type: accuracy
name: Yoruba Test accuracy
value: 26.4
- type: accuracy
name: Warlpiri Test accuracy
value: 39.7
- type: accuracy
name: Tamil Test accuracy
value: 81.0
- type: accuracy
name: Maltese Test accuracy
value: 24.2
- type: accuracy
name: Ancient Greek Test accuracy
value: 59.3
- type: accuracy
name: Icelandic Test accuracy
value: 82.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 31.3
- type: accuracy
name: Urdu Test accuracy
value: 63.2
- type: accuracy
name: Romanian Test accuracy
value: 81.4
- type: accuracy
name: Persian Test accuracy
value: 75.4
- type: accuracy
name: Apurina Test accuracy
value: 32.2
- type: accuracy
name: Japanese Test accuracy
value: 35.9
- type: accuracy
name: Hungarian Test accuracy
value: 84.9
- type: accuracy
name: Hindi Test accuracy
value: 70.2
- type: accuracy
name: Classical Chinese Test accuracy
value: 30.5
- type: accuracy
name: Komi Permyak Test accuracy
value: 46.0
- type: accuracy
name: Faroese Test accuracy
value: 76.5
- type: accuracy
name: Sanskrit Test accuracy
value: 32.4
- type: accuracy
name: Livvi Test accuracy
value: 66.5
- type: accuracy
name: Arabic Test accuracy
value: 79.7
- type: accuracy
name: Wolof Test accuracy
value: 31.8
- type: accuracy
name: Bulgarian Test accuracy
value: 87.0
- type: accuracy
name: Akuntsu Test accuracy
value: 24.4
- type: accuracy
name: Makurap Test accuracy
value: 15.1
- type: accuracy
name: Kangri Test accuracy
value: 49.6
- type: accuracy
name: Breton Test accuracy
value: 62.0
- type: accuracy
name: Telugu Test accuracy
value: 82.2
- type: accuracy
name: Cantonese Test accuracy
value: 52.4
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 51.0
- type: accuracy
name: Karelian Test accuracy
value: 73.1
- type: accuracy
name: Upper Sorbian Test accuracy
value: 74.2
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 69.3
- type: accuracy
name: Komi Zyrian Test accuracy
value: 37.3
- type: accuracy
name: Irish Test accuracy
value: 66.3
- type: accuracy
name: Nayini Test accuracy
value: 47.4
- type: accuracy
name: Munduruku Test accuracy
value: 19.0
- type: accuracy
name: Manx Test accuracy
value: 39.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 33.0
- type: accuracy
name: Afrikaans Test accuracy
value: 98.9
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 25.9
- type: accuracy
name: Belarusian Test accuracy
value: 86.4
- type: accuracy
name: Serbian Test accuracy
value: 87.0
- type: accuracy
name: Moksha Test accuracy
value: 42.9
- type: accuracy
name: Western Armenian Test accuracy
value: 80.0
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 59.4
- type: accuracy
name: Khunsari Test accuracy
value: 37.8
- type: accuracy
name: Hebrew Test accuracy
value: 84.4
- type: accuracy
name: Uyghur Test accuracy
value: 73.3
- type: accuracy
name: Chukchi Test accuracy
value: 33.3
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Afrikaans
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-af")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-af")
```
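For end-to-end tagging, the same checkpoint can be wrapped in a `token-classification` pipeline. This is a minimal sketch; the Afrikaans example sentence is an arbitrary illustration, not taken from the evaluation data.

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-af",
)
print(tagger("Hierdie is 'n voorbeeldsin in Afrikaans."))
```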
|
inovex/multi2convai-quality-de-mbert | bb6b367be447e2ee63b1251737ac0a8a09e8786f | 2022-03-01T09:00:39.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"transformers",
"license:mit"
]
| text-classification | false | inovex | null | inovex/multi2convai-quality-de-mbert | 8 | null | transformers | 13,234 | ---
tags:
- text-classification
widget:
- text: "Starte das Programm"
license: mit
language: de
---
# Multi2ConvAI-Quality: finetuned MBert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Quality (more details about our use cases: [en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))
- language: German (de)
- model type: finetuned MBert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-mbert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-mbert")
````
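A minimal inference sketch on top of the loaded checkpoint is shown below; the example utterance comes from the widget above, and the intent labels are read from the model config rather than assumed.

````python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "inovex/multi2convai-quality-de-mbert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Starte das Programm", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
````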
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
chitanda/merit-deberta-v2-xlarge-v1 | 1e60251d4308022827659b11a5dfa8a8b82f84c9 | 2022-02-26T01:09:07.000Z | [
"pytorch",
"deberta-v2",
"transformers",
"license:mit"
]
| null | false | chitanda | null | chitanda/merit-deberta-v2-xlarge-v1 | 8 | null | transformers | 13,235 | ---
license: mit
---
|
MhF/distilbert-base-uncased-finetuned-clinc | fbde9f295b55ed8380a186340e1992d64a666eab | 2022-02-25T08:55:45.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | MhF | null | MhF/distilbert-base-uncased-finetuned-clinc | 8 | null | transformers | 13,236 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9187096774193548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7703
- Accuracy: 0.9187
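The fine-tuned checkpoint can be used directly for intent classification with a `text-classification` pipeline; the sketch below is illustrative and the utterance is an arbitrary example.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MhF/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Transfer 100 dollars from my checking to my savings account."))
```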
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 |
| 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 |
| 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 |
| 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 |
| 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k | 121eb3a1687bed01b795e85b88056a20ee393efd | 2022-02-25T13:04:28.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | vocab-transformers | null | vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k | 8 | null | transformers | 13,237 | # cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k
This CrossEncoder was trained with MarginMSE loss from the [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k](https://hf.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k) checkpoint. **Word embedding matrix has been frozen during training**.
You can load the model with [sentence-transformers](https://sbert.net):
```python
from sentence_transformers import CrossEncoder
from torch import nn
model_name = "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k"
model = CrossEncoder(model_name, default_activation_function=nn.Identity())
```
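Once loaded, query/passage pairs can be scored with `predict`. The pairs below are made-up illustrations; higher scores indicate higher predicted relevance.

```python
from sentence_transformers import CrossEncoder
from torch import nn

model_name = "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k"
model = CrossEncoder(model_name, default_activation_function=nn.Identity())

# Hypothetical query/passage pairs for illustration only.
pairs = [
    ("what is the capital of france", "Paris is the capital and largest city of France."),
    ("what is the capital of france", "The Rhine is one of the major rivers of Europe."),
]
print(model.predict(pairs))  # one relevance score per pair
```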
Performance on TREC Deep Learning (nDCG@10):
- TREC-DL 19: 72.62
- TREC-DL 20: 73.22
|
ghadeermobasher/BC5CDR-Disease-imbalanced-bluebert_pubmed_uncased_L-12_H-768_A-12_latest | 4c5eef35682d95acb5c106d26046c659f8bef515 | 2022-02-25T18:31:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Disease-imbalanced-bluebert_pubmed_uncased_L-12_H-768_A-12_latest | 8 | null | transformers | 13,238 | Entry not found |
smoeller/student-subject-questions | d7a8a47f01914e23abb0f700938d33f90e152316 | 2022-02-27T17:28:40.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | smoeller | null | smoeller/student-subject-questions | 8 | null | transformers | 13,239 | Entry not found |
ali2066/twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11 | 8e9584973d3b2dd3a45ba0ec5e91d75b9a5e2389 | 2022-03-01T13:03:25.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11 | 8 | null | transformers | 13,240 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4118
- Accuracy: 0.8446
- F1: 0.8968
- Precision: 0.8740
- Recall: 0.9207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3532 | 0.8451 | 0.8990 | 0.8997 | 0.8983 |
| 0.4111 | 2.0 | 780 | 0.3381 | 0.8561 | 0.9080 | 0.8913 | 0.9253 |
| 0.3031 | 3.0 | 1170 | 0.3490 | 0.8537 | 0.9034 | 0.9152 | 0.8919 |
| 0.2408 | 4.0 | 1560 | 0.3562 | 0.8671 | 0.9148 | 0.9 | 0.9300 |
| 0.2408 | 5.0 | 1950 | 0.3725 | 0.8659 | 0.9131 | 0.9074 | 0.9189 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
batterydata/batterybert-uncased | 639cb9e0a427d6cbfbc51c8d9f8248e2a5541012 | 2022-03-05T16:18:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"en",
"dataset:batterypapers",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | false | batterydata | null | batterydata/batterybert-uncased | 8 | null | transformers | 13,241 | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- batterypapers
---
# BatteryBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is uncased: it does not make a difference
between english and English.
## Model description
BatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryBERT model was pretrained on the full text of battery papers only, after initialized from the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following (a rough sketch of this rule is shown after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
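The snippet below is a rough, illustrative sketch of that rule over a list of tokens; it is not the actual pretraining code, and `vocab` stands in for the real WordPiece vocabulary.

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    """Illustrative 80/10/10 masking; `labels` marks which tokens enter the MLM loss."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                       # model must predict this token
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)            # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                masked.append(tok)                   # 10%: keep the original token
        else:
            labels.append(None)                      # ignored by the loss
            masked.append(tok)
    return masked, labels
```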
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batterybert-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-uncased')
model = BertModel.from_pretrained('batterydata/batterybert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-uncased')
model = TFBertModel.from_pretrained('batterydata/batterybert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.0317.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
batterydata/batterybert-uncased-abstract | 2c5826062f110b7a6256a5e8d290f127b109bb68 | 2022-03-05T14:52:59.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:batterydata/paper-abstracts",
"transformers",
"Text Classification",
"license:apache-2.0"
]
| text-classification | false | batterydata | null | batterydata/batterybert-uncased-abstract | 8 | null | transformers | 13,242 | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BatteryBERT-uncased for Battery Abstract Classification
**Language model:** batterybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.10,
"Test accuracy": 96.94,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
batterydata/bert-base-cased-abstract | de0bac0fbc92bbadaf87dbd2782a2e22d2eb8ff6 | 2022-03-05T14:42:16.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:batterydata/paper-abstracts",
"transformers",
"Text Classification",
"license:apache-2.0"
]
| text-classification | false | batterydata | null | batterydata/bert-base-cased-abstract | 8 | null | transformers | 13,243 | ---
language: en
tags: Text Classification
license: apache-2.0
datasets:
- batterydata/paper-abstracts
metrics: glue
---
# BERT-base-cased for Battery Abstract Classification
**Language model:** bert-base-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 15
base_LM_model = "bert-base-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.84,
"Test accuracy": 96.83,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
huggingartists/pink-floyd | 74ab4197b67d0068c3686a4c957dbeae2695c2d8 | 2022-03-02T09:18:41.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"dataset:huggingartists/pink-floyd",
"transformers",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm"
]
| text-generation | false | huggingartists | null | huggingartists/pink-floyd | 8 | null | transformers | 13,244 | ---
language: en
datasets:
- huggingartists/pink-floyd
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/6b5c50912d99c3cf0eabfec5f427c452.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pink Floyd</div>
<a href="https://genius.com/artists/pink-floyd">
<div style="text-align: center; font-size: 14px;">@pink-floyd</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Pink Floyd.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/pink-floyd).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/pink-floyd")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3j9osgks/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Pink Floyd's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1wlqpngf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1wlqpngf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/pink-floyd')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/pink-floyd")
model = AutoModelWithLMHead.from_pretrained("huggingartists/pink-floyd")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
sanchit-gandhi/wav2vec2-2-rnd-grid-search | 547a55be44be090cafca4b0e14803ca310c5713f | 2022-03-03T14:51:05.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-rnd-grid-search | 8 | null | transformers | 13,245 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9475
- Wer: 2.0097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.9006 | 1.68 | 1500 | 6.9507 | 2.0097 |
| 6.9503 | 3.36 | 3000 | 6.9475 | 2.0097 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
EvilMasterPlan/NER | be65d815905ec9ac62fe528e7a0c239fd2d2972f | 2022-03-03T16:13:19.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
]
| text-classification | false | EvilMasterPlan | null | EvilMasterPlan/NER | 8 | null | transformers | 13,246 | Entry not found |
xinzhel/gpt2-ag-news | 36fc81e7aad820ebc8921c74c1d2e04af24aafa0 | 2022-03-06T00:08:03.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers",
"license:apache-2.0"
]
| text-classification | false | xinzhel | null | xinzhel/gpt2-ag-news | 8 | 1 | transformers | 13,247 | ---
license: apache-2.0
---
|
Kuray107/librispeech-5h-supervised | 7744945765e168ffea72fd42d8ad37c35df19d4b | 2022-03-06T06:43:53.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | Kuray107 | null | Kuray107/librispeech-5h-supervised | 8 | null | transformers | 13,248 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: librispeech-5h-supervised
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# librispeech-5h-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2041
- Wer: 0.0624
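A minimal transcription sketch for this checkpoint is shown below. It assumes the repository ships the matching processor/tokenizer files, and the audio path is a placeholder for a 16 kHz mono recording.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Kuray107/librispeech-5h-supervised",
)
print(asr("path/to/16khz_mono_recording.wav"))  # placeholder path
```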
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7758 | 11.11 | 1000 | 0.3120 | 0.2337 |
| 0.1238 | 22.22 | 2000 | 0.1651 | 0.0826 |
| 0.0383 | 33.33 | 3000 | 0.1667 | 0.0712 |
| 0.023 | 44.44 | 4000 | 0.1893 | 0.0685 |
| 0.0166 | 55.56 | 5000 | 0.2008 | 0.0666 |
| 0.0131 | 66.67 | 6000 | 0.1942 | 0.0639 |
| 0.0106 | 77.78 | 7000 | 0.1979 | 0.0628 |
| 0.0091 | 88.89 | 8000 | 0.2027 | 0.0628 |
| 0.008 | 100.0 | 9000 | 0.2041 | 0.0624 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
billfrench/autonlp-cyberlandr-ai-4-614417500 | 402c21dd4c2b29da23309639e2440c36314a4673 | 2022-03-07T00:56:09.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:billfrench/autonlp-data-cyberlandr-ai-4",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | billfrench | null | billfrench/autonlp-cyberlandr-ai-4-614417500 | 8 | null | transformers | 13,249 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- billfrench/autonlp-data-cyberlandr-ai-4
co2_eq_emissions: 1.131603488976132
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 614417500
- CO2 Emissions (in grams): 1.131603488976132
## Validation Metrics
- Loss: 1.4588216543197632
- Accuracy: 0.3333333333333333
- Macro F1: 0.225
- Micro F1: 0.3333333333333333
- Weighted F1: 0.2333333333333333
- Macro Precision: 0.1875
- Micro Precision: 0.3333333333333333
- Weighted Precision: 0.20833333333333334
- Macro Recall: 0.375
- Micro Recall: 0.3333333333333333
- Weighted Recall: 0.3333333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/billfrench/autonlp-cyberlandr-ai-4-614417500
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("billfrench/autonlp-cyberlandr-ai-4-614417500", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("billfrench/autonlp-cyberlandr-ai-4-614417500", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
jkhan447/sentiment-model-sample-go-emotion | 592edba02a16ec512631e3b17c394d5bafc4a9bc | 2022-03-10T06:25:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:go_emotions",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jkhan447 | null | jkhan447/sentiment-model-sample-go-emotion | 8 | null | transformers | 13,250 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-go-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.5827886710239651
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-go-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2674
- Accuracy: 0.5828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
spy24/autonlp-parrot_paraphrasing-615317556 | f2694b40f6644421fbb8d34f0639760e2cbf861c | 2022-03-07T09:36:20.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:spy24/autonlp-data-parrot_paraphrasing",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | spy24 | null | spy24/autonlp-parrot_paraphrasing-615317556 | 8 | null | transformers | 13,251 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- spy24/autonlp-data-parrot_paraphrasing
co2_eq_emissions: 0.8335491678002559
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 615317556
- CO2 Emissions (in grams): 0.8335491678002559
## Validation Metrics
- Loss: 0.0001514342293376103
- Rouge1: 100.0
- Rouge2: 51.4451
- RougeL: 100.0
- RougeLsum: 100.0
- Gen Len: 4.104
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-parrot_paraphrasing-615317556
``` |
gayanin/t5-small-paraphrasing-mlm | a9b9e2ac2687a39adfe70dd4e2dd2c7785e5e147 | 2022-03-08T01:54:54.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | gayanin | null | gayanin/t5-small-paraphrasing-mlm | 8 | null | transformers | 13,252 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-paraphrasing-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-paraphrasing-mlm
This model is a fine-tuned version of [gayanin/t5-small-paraphrase-pubmed](https://huggingface.co/gayanin/t5-small-paraphrase-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7030
- Rouge2 Precision: 0.6576
- Rouge2 Recall: 0.4712
- Rouge2 Fmeasure: 0.532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.9215 | 1.0 | 13833 | 0.8050 | 0.6352 | 0.454 | 0.5131 |
| 0.855 | 2.0 | 27666 | 0.7679 | 0.6411 | 0.4589 | 0.5184 |
| 0.8387 | 3.0 | 41499 | 0.7464 | 0.6464 | 0.4626 | 0.5226 |
| 0.8267 | 4.0 | 55332 | 0.7315 | 0.6513 | 0.4671 | 0.5273 |
| 0.7879 | 5.0 | 69165 | 0.7217 | 0.6534 | 0.4687 | 0.529 |
| 0.7738 | 6.0 | 82998 | 0.7142 | 0.6548 | 0.4688 | 0.5295 |
| 0.7793 | 7.0 | 96831 | 0.7094 | 0.6553 | 0.4694 | 0.53 |
| 0.7654 | 8.0 | 110664 | 0.7056 | 0.6573 | 0.4704 | 0.5313 |
| 0.7675 | 9.0 | 124497 | 0.7036 | 0.6577 | 0.4712 | 0.532 |
| 0.7662 | 10.0 | 138330 | 0.7030 | 0.6576 | 0.4712 | 0.532 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
huggingtweets/desertblooom-littlehorney-plusbibi1 | 0900e4b71b39cb06c5d740074adca8f95dfaca1c | 2022-03-08T08:02:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/desertblooom-littlehorney-plusbibi1 | 8 | null | transformers | 13,253 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1386970823681052680/oA_4HBKl_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500170501603446792/xUkC2cSe_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500892464772751365/6uhqt-Jx_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bibi und Anna & Wüstenblume & Vanny_Bunny™</div>
<div style="text-align: center; font-size: 14px;">@desertblooom-littlehorney-plusbibi1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bibi und Anna & Wüstenblume & Vanny_Bunny™.
| Data | Bibi und Anna | Wüstenblume | Vanny_Bunny™ |
| --- | --- | --- | --- |
| Tweets downloaded | 1818 | 3250 | 3185 |
| Retweets | 9 | 59 | 494 |
| Short tweets | 341 | 810 | 339 |
| Tweets kept | 1468 | 2381 | 2352 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/15il6uja/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @desertblooom-littlehorney-plusbibi1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lqcyodlp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lqcyodlp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/desertblooom-littlehorney-plusbibi1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aymanm419/araElectra-SQUAD-ARCD-768 | a823aab1a462542e93ad104633e4aebb43b97832 | 2022-03-08T22:18:43.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | aymanm419 | null | aymanm419/araElectra-SQUAD-ARCD-768 | 8 | null | transformers | 13,254 | Entry not found |
jkhan447/sentiment-model-sample-ekman-emotion | b5a6fddb5fa025214bbe5ce30e51bcf5c54f966b | 2022-03-11T08:07:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jkhan447 | null | jkhan447/sentiment-model-sample-ekman-emotion | 8 | null | transformers | 13,255 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-ekman-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-ekman-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4963
- Accuracy: 0.6713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
richielo/small-e-czech-finetuned-ner-wikiann | d20223af629a24f25585b6f5b80d83322aea28f9 | 2022-03-12T20:18:42.000Z | [
"pytorch",
"tensorboard",
"electra",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | richielo | null | richielo/small-e-czech-finetuned-ner-wikiann | 8 | null | transformers | 13,256 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: small-e-czech-finetuned-ner-wikiann
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: cs
metrics:
- name: Precision
type: precision
value: 0.8713322894683097
- name: Recall
type: recall
value: 0.8970423324922905
- name: F1
type: f1
value: 0.8840004144075699
- name: Accuracy
type: accuracy
value: 0.9557089381093997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-e-czech-finetuned-ner-wikiann
This model is a fine-tuned version of [Seznam/small-e-czech](https://huggingface.co/Seznam/small-e-czech) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2547
- Precision: 0.8713
- Recall: 0.8970
- F1: 0.8840
- Accuracy: 0.9557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2924 | 1.0 | 2500 | 0.2449 | 0.7686 | 0.8088 | 0.7882 | 0.9320 |
| 0.2042 | 2.0 | 5000 | 0.2137 | 0.8050 | 0.8398 | 0.8220 | 0.9400 |
| 0.1699 | 3.0 | 7500 | 0.1912 | 0.8236 | 0.8593 | 0.8411 | 0.9466 |
| 0.1419 | 4.0 | 10000 | 0.1931 | 0.8349 | 0.8671 | 0.8507 | 0.9488 |
| 0.1316 | 5.0 | 12500 | 0.1892 | 0.8470 | 0.8776 | 0.8620 | 0.9519 |
| 0.1042 | 6.0 | 15000 | 0.2058 | 0.8433 | 0.8811 | 0.8618 | 0.9508 |
| 0.0884 | 7.0 | 17500 | 0.2020 | 0.8602 | 0.8849 | 0.8724 | 0.9531 |
| 0.0902 | 8.0 | 20000 | 0.2118 | 0.8551 | 0.8837 | 0.8692 | 0.9528 |
| 0.0669 | 9.0 | 22500 | 0.2171 | 0.8634 | 0.8906 | 0.8768 | 0.9550 |
| 0.0529 | 10.0 | 25000 | 0.2228 | 0.8638 | 0.8912 | 0.8773 | 0.9545 |
| 0.0613 | 11.0 | 27500 | 0.2293 | 0.8626 | 0.8898 | 0.8760 | 0.9544 |
| 0.0549 | 12.0 | 30000 | 0.2276 | 0.8694 | 0.8958 | 0.8824 | 0.9554 |
| 0.0516 | 13.0 | 32500 | 0.2384 | 0.8717 | 0.8940 | 0.8827 | 0.9552 |
| 0.0412 | 14.0 | 35000 | 0.2443 | 0.8701 | 0.8931 | 0.8815 | 0.9554 |
| 0.0345 | 15.0 | 37500 | 0.2464 | 0.8723 | 0.8958 | 0.8839 | 0.9557 |
| 0.0412 | 16.0 | 40000 | 0.2477 | 0.8705 | 0.8948 | 0.8825 | 0.9552 |
| 0.0363 | 17.0 | 42500 | 0.2525 | 0.8742 | 0.8973 | 0.8856 | 0.9559 |
| 0.0341 | 18.0 | 45000 | 0.2529 | 0.8727 | 0.8962 | 0.8843 | 0.9561 |
| 0.0194 | 19.0 | 47500 | 0.2533 | 0.8699 | 0.8966 | 0.8830 | 0.9557 |
| 0.0247 | 20.0 | 50000 | 0.2547 | 0.8713 | 0.8970 | 0.8840 | 0.9557 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
cambridgeltl/sst_bert-base-uncased | 776a1f85114c62eb383db60dd2bb16b477bdc681 | 2022-03-14T16:54:28.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cambridgeltl | null | cambridgeltl/sst_bert-base-uncased | 8 | null | transformers | 13,257 | Entry not found |
ebrigham/yahoo_answers_topics_classifier | eac88a96f9ca361cf6edf42f4011625fe946cda8 | 2022-03-14T21:16:15.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | ebrigham | null | ebrigham/yahoo_answers_topics_classifier | 8 | null | transformers | 13,258 | Entry not found |
Neulvo/bert-finetuned-ner-accelerate | 23ff97370fdb5f34caace5791e60300b7b9658e3 | 2022-03-15T16:04:25.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Neulvo | null | Neulvo/bert-finetuned-ner-accelerate | 8 | null | transformers | 13,259 | Entry not found |
edbeeching/decision-transformer-gym-hopper-medium-replay | f36050d8e87062eceb103e19067b8eee7385d30e | 2022-06-29T19:20:14.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
]
| reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-hopper-medium-replay | 8 | null | transformers | 13,260 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium-replay trajectories sampled from the Gym Hopper environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium-replay trajectories sampled from the Gym Hopper environment.
The following normalization coefficients are required to use this model:
mean = [ 1.2305138, -0.04371411, -0.44542956, -0.09370098, 0.09094488, 1.3694725, -0.19992675, -0.02286135, -0.5287045, -0.14465883, -0.19652697]
std = [0.17565121, 0.06369286, 0.34383234, 0.19566889, 0.5547985, 1.0510299, 1.1583077, 0.79631287, 1.4802359, 1.6540332, 5.108601]
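As a rough sketch of how these coefficients are meant to be used, raw Hopper observations are standardized before being fed to the model; see the linked notebook below for the full rollout loop.

```python
import numpy as np

# Coefficients copied from above; observations are standardized before inference.
mean = np.array([1.2305138, -0.04371411, -0.44542956, -0.09370098, 0.09094488,
                 1.3694725, -0.19992675, -0.02286135, -0.5287045, -0.14465883, -0.19652697])
std = np.array([0.17565121, 0.06369286, 0.34383234, 0.19566889, 0.5547985,
                1.0510299, 1.1583077, 0.79631287, 1.4802359, 1.6540332, 5.108601])

def normalize_observation(obs):
    return (np.asarray(obs) - mean) / std
```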
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
edbeeching/decision-transformer-gym-walker2d-expert | 2658e071054b4795c9afa536c4bf47e9a5422184 | 2022-06-29T19:21:27.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
]
| reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-walker2d-expert | 8 | 1 | transformers | 13,261 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on expert trajectories sampled from the Gym Walker2d environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on expert trajectories sampled from the Gym Walker2d environment.
The following normalization coefficients are required to use this model:
mean = [ 1.2384834e+00, 1.9578537e-01, -1.0475016e-01, -1.8579608e-01, 2.3003316e-01, 2.2800924e-02, -3.7383768e-01, 3.3779100e-01, 3.9250960e+00, -4.7428459e-03, 2.5267061e-02, -3.9287535e-03, -1.7367510e-02, -4.8212224e-01, 3.5432147e-04, -3.7124525e-03, 2.6285544e-03]
std = [0.06664903, 0.16980624, 0.17309439, 0.21843709, 0.74599105, 0.02410989, 0.3729872, 0.6226182, 0.9708009, 0.72936815, 1.504065, 2.495893, 3.511518, 5.3656907, 0.79503316, 4.317483, 6.1784487]
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
SaulLu/distilbert-copy | 9ef3f70fbed96e92a51b9c917954ef6704297afc | 2022-03-17T11:33:30.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
]
| feature-extraction | false | SaulLu | null | SaulLu/distilbert-copy | 8 | null | transformers | 13,262 | Entry not found |
MickyMike/VulRepair | 79873a95954de61f6291090f90fe4c80e2289d47 | 2022-03-17T15:24:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | MickyMike | null | MickyMike/VulRepair | 8 | null | transformers | 13,263 | ---
license: mit
---
|
anton-l/xtreme_s_xlsr_300m_minds14 | b0978180a23cd78a4f8df223fb37b7684404be56 | 2022-04-03T18:54:43.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"all",
"dataset:google/xtreme_s",
"transformers",
"minds14",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | anton-l | null | anton-l/xtreme_s_xlsr_300m_minds14 | 8 | null | transformers | 13,264 | ---
language:
- all
license: apache-2.0
tags:
- minds14
- google/xtreme_s
- generated_from_trainer
datasets:
- google/xtreme_s
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_300m_minds14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_minds14
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.ALL dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9033
- Accuracy Cs-cz: 0.9164
- Accuracy De-de: 0.9477
- Accuracy En-au: 0.9235
- Accuracy En-gb: 0.9324
- Accuracy En-us: 0.9326
- Accuracy Es-es: 0.9177
- Accuracy Fr-fr: 0.9444
- Accuracy It-it: 0.9167
- Accuracy Ko-kr: 0.8649
- Accuracy Nl-nl: 0.9450
- Accuracy Pl-pl: 0.9146
- Accuracy Pt-pt: 0.8940
- Accuracy Ru-ru: 0.8667
- Accuracy Zh-cn: 0.7291
- F1: 0.9015
- F1 Cs-cz: 0.9154
- F1 De-de: 0.9467
- F1 En-au: 0.9199
- F1 En-gb: 0.9334
- F1 En-us: 0.9308
- F1 Es-es: 0.9158
- F1 Fr-fr: 0.9436
- F1 It-it: 0.9135
- F1 Ko-kr: 0.8642
- F1 Nl-nl: 0.9440
- F1 Pl-pl: 0.9159
- F1 Pt-pt: 0.8883
- F1 Ru-ru: 0.8646
- F1 Zh-cn: 0.7249
- Loss: 0.4119
- Loss Cs-cz: 0.3790
- Loss De-de: 0.2649
- Loss En-au: 0.3459
- Loss En-gb: 0.2853
- Loss En-us: 0.2203
- Loss Es-es: 0.2731
- Loss Fr-fr: 0.1909
- Loss It-it: 0.3520
- Loss Ko-kr: 0.5431
- Loss Nl-nl: 0.2515
- Loss Pl-pl: 0.4113
- Loss Pt-pt: 0.4798
- Loss Ru-ru: 0.6470
- Loss Zh-cn: 1.1216
- Predict Samples: 4086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 2.6739 | 5.41 | 200 | 2.5687 | 0.0430 | 0.1190 |
| 1.4953 | 10.81 | 400 | 1.6052 | 0.5550 | 0.5692 |
| 0.6177 | 16.22 | 600 | 0.7927 | 0.8052 | 0.8011 |
| 0.3609 | 21.62 | 800 | 0.5679 | 0.8609 | 0.8609 |
| 0.4972 | 27.03 | 1000 | 0.5944 | 0.8509 | 0.8523 |
| 0.1799 | 32.43 | 1200 | 0.6194 | 0.8623 | 0.8621 |
| 0.1308 | 37.84 | 1400 | 0.5956 | 0.8569 | 0.8548 |
| 0.2298 | 43.24 | 1600 | 0.5201 | 0.8732 | 0.8743 |
| 0.0052 | 48.65 | 1800 | 0.3826 | 0.9106 | 0.9103 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 2.0.1.dev0
- Tokenizers 0.11.6
|
brad1141/Longformer-finetuned-comp5 | fc3bfa12b09a2ae2c5d5b40d30b95776c914c85e | 2022-03-18T02:21:19.000Z | [
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | brad1141 | null | brad1141/Longformer-finetuned-comp5 | 8 | null | transformers | 13,265 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Longformer-finetuned-comp5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Longformer-finetuned-comp5
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8180
- Precision: 0.5680
- Recall: 0.7490
- F1: 0.6430
- Accuracy: 0.6430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8296 | 1.0 | 1012 | 0.5801 | 0.4806 | 0.6633 | 0.5448 | 0.5448 |
| 0.5367 | 2.0 | 2024 | 0.5386 | 0.5617 | 0.7042 | 0.6172 | 0.6172 |
| 0.4109 | 3.0 | 3036 | 0.5755 | 0.5590 | 0.7261 | 0.6248 | 0.6248 |
| 0.3088 | 4.0 | 4048 | 0.6167 | 0.5775 | 0.7394 | 0.6435 | 0.6435 |
| 0.2234 | 5.0 | 5060 | 0.7098 | 0.5626 | 0.7477 | 0.6370 | 0.6370 |
| 0.1637 | 6.0 | 6072 | 0.7399 | 0.5742 | 0.7413 | 0.6438 | 0.6438 |
| 0.1236 | 7.0 | 7084 | 0.8180 | 0.5680 | 0.7490 | 0.6430 | 0.6430 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
facebook/regnet-y-006 | c4c145ba7c79cfad0a12688d07246ffe38a1a793 | 2022-06-30T10:14:07.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-y-006 | 8 | null | transformers | 13,266 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
EMBO/sd-smallmol-roles | 2c103fd6953b6e70dcd88e5a10763492b82bdd3b | 2022-03-27T13:28:53.000Z | [
"pytorch",
"roberta",
"token-classification",
"english",
"dataset:EMBO/sd-nlp",
"transformers",
"token classification",
"license:agpl-3.0",
"autotrain_compatible"
]
| token-classification | false | EMBO | null | EMBO/sd-smallmol-roles | 8 | null | transformers | 13,267 | ---
language:
- english
thumbnail:
tags:
- token classification
license: agpl-3.0
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-smallmol-roles
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It has then been fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `SMALL_MOL_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is to infer the semantic role of small molecules with regard to the causal hypotheses tested in experiments reported in scientific papers.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-smallmol-roles')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) which includes manually annotated examples.
## Training procedure
The training was run on a NVIDIA DGX Station with 4XTesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine tuned: EMBL/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: SMALL_MOL_ROLES
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR
- Epochs: 0.33
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On 7178 example of test set with `sklearn.metrics`:
```
precision recall f1-score support
CONTROLLED_VAR 0.76 0.90 0.83 2946
MEASURED_VAR 0.60 0.71 0.65 852
micro avg 0.73 0.86 0.79 3798
macro avg 0.68 0.80 0.74 3798
weighted avg 0.73 0.86 0.79 3798
{'test_loss': 0.011743436567485332, 'test_accuracy_score': 0.9951612532624371, 'test_precision': 0.7261345852895149, 'test_recall': 0.8551869404949973, 'test_f1': 0.7853947527505744, 'test_runtime': 58.0378, 'test_samples_per_second': 123.678, 'test_steps_per_second': 1.947}
```
|
msamogh/autonlp-cai-out-of-scope-649919116 | b3e4ed2896fe8ef196c2b64ea5782a3ef1ec775a | 2022-03-22T15:27:18.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:msamogh/autonlp-data-cai-out-of-scope",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | msamogh | null | msamogh/autonlp-cai-out-of-scope-649919116 | 8 | null | transformers | 13,268 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 2.438401649319185
---
# What do the class labels mean?
- 0: out of scope
- 1: in scope
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919116
- CO2 Emissions (in grams): 2.438401649319185
## Validation Metrics
- Loss: 0.5314930081367493
- Accuracy: 0.7526881720430108
- Precision: 0.8490566037735849
- Recall: 0.75
- AUC: 0.8515151515151514
- F1: 0.7964601769911505
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919116
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919116", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919116", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
vinaykudari/gpt2-acled-t2s | 58cb4e2471efad781d743c4bbf11fd43e76bd6cc | 2022-03-20T14:26:41.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | vinaykudari | null | vinaykudari/gpt2-acled-t2s | 8 | null | transformers | 13,269 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-acled-t2s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-acled-t2s
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2978 | 1.0 | 6621 | 1.2262 |
| 1.0378 | 2.0 | 13242 | 1.0048 |
| 0.9537 | 3.0 | 19863 | 0.9414 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-rnd-no-adapter-regularisation | 59f6f47d152e8f548a6532e6d97291b6c0fe387c | 2022-03-25T03:10:23.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-rnd-no-adapter-regularisation | 8 | null | transformers | 13,270 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7177
- Wer: 0.1283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.1228 | 1.68 | 1500 | 6.0490 | 1.1433 |
| 5.4173 | 3.36 | 3000 | 5.3453 | 1.4878 |
| 4.1635 | 5.04 | 4500 | 4.4185 | 0.9644 |
| 2.1246 | 6.73 | 6000 | 3.2089 | 0.5026 |
| 1.88 | 8.41 | 7500 | 1.9886 | 0.3438 |
| 1.2606 | 10.09 | 9000 | 1.4472 | 0.2487 |
| 0.7492 | 11.77 | 10500 | 1.1716 | 0.1949 |
| 0.8868 | 13.45 | 12000 | 1.0146 | 0.1702 |
| 0.5078 | 15.13 | 13500 | 0.8821 | 0.1548 |
| 0.4515 | 16.82 | 15000 | 0.8181 | 0.1417 |
| 0.3902 | 18.5 | 16500 | 0.7765 | 0.1364 |
| 0.3575 | 20.18 | 18000 | 0.7367 | 0.1333 |
| 0.2903 | 21.86 | 19500 | 0.7211 | 0.1301 |
| 0.2698 | 23.54 | 21000 | 0.7177 | 0.1283 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sanchit-gandhi/wav2vec2-2-bart-large-cnn | c485867fb1d867c28b3522199797a7468b0385d6 | 2022-03-29T00:24:41.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"dataset:librispeech_asr",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-2-bart-large-cnn | 8 | null | transformers | 13,271 | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3524
- Wer: 0.1042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.7605 | 4.5 | 500 | 2.6299 | 1.4451 |
| 0.1177 | 9.01 | 1000 | 0.3524 | 0.1042 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ICLbioengNLP/CXR_BioClinicalBERT_chunkedv1 | 185aa039c5de1569ff3a0b7fe4b10afa8c9c5c6c | 2022-03-23T19:21:28.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | ICLbioengNLP | null | ICLbioengNLP/CXR_BioClinicalBERT_chunkedv1 | 8 | null | transformers | 13,272 | Entry not found |
agdsga/bert-finetuned-ner-accelerate | ab1075b22d7defa7c1ecf9a78315b6ed4bb0bfe6 | 2022-03-25T03:39:13.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | agdsga | null | agdsga/bert-finetuned-ner-accelerate | 8 | null | transformers | 13,273 | Entry not found |
sebastian-hofstaetter/colberter-128-32-msmarco | e7e6442a77850b5baeeb0110c7c3d17b88b0cb02 | 2022-03-27T15:07:44.000Z | [
"pytorch",
"ColBERT",
"en",
"dataset:ms_marco",
"arxiv:2203.13088",
"transformers",
"bag-of-words",
"dense-passage-retrieval",
"knowledge-distillation",
"license:apache-2.0"
]
| null | false | sebastian-hofstaetter | null | sebastian-hofstaetter/colberter-128-32-msmarco | 8 | null | transformers | 13,274 | ---
license: apache-2.0
language: "en"
tags:
- bag-of-words
- dense-passage-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# ColBERTer (Dim: 32) for Passage Retrieval
If you want to know more about our ColBERTer architecture check out our paper: https://arxiv.org/abs/2203.13088 🎉
For more information, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/colberter
## Limitations & Bias
- The model is only trained on english text.
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text.
## Citation
If you use our model checkpoint please cite our work as:
```
@article{Hofstaetter2022_colberter,
author = {Sebastian Hofst{\"a}tter and Omar Khattab and Sophia Althammer and Mete Sertkan and Allan Hanbury},
title = {Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction},
publisher = {arXiv},
url = {https://arxiv.org/abs/2203.13088},
doi = {10.48550/ARXIV.2203.13088},
year = {2022},
}
``` |
hackathon-pln-es/gpt2-small-spanish-disco-poetry | 3ea1637aadd5d9b489e0665dc2f5085e8308084c | 2022-04-03T00:10:08.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | hackathon-pln-es | null | hackathon-pln-es/gpt2-small-spanish-disco-poetry | 8 | 4 | transformers | 13,275 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-spanish-disco-poetry
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-disco-poetry
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on the [DISCO](https://huggingface.co/datasets/hackathon-pln-es/disco_spanish_poetry) dataset of Spanish poetry.
It achieves the following results on the evaluation set:
- Loss: 4.2940
## Model description
More information needed
## Intended uses & limitations
More information needed
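A minimal generation sketch using the standard `transformers` text-generation pipeline (the prompt and sampling settings below are illustrative assumptions, not the authors' reference usage):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="hackathon-pln-es/gpt2-small-spanish-disco-poetry")
# Seed the model with the opening words of a verse (hypothetical prompt)
print(generator("En la noche serena", max_length=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```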
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
sarahmiller137/distilbert-base-uncased-ft-ncbi-disease | 6ac70e8a70d7f1ea1a448ae57aad715f6195d4fd | 2022-07-28T16:04:59.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:ncbi_disease",
"transformers",
"license:cc",
"autotrain_compatible"
]
| token-classification | false | sarahmiller137 | null | sarahmiller137/distilbert-base-uncased-ft-ncbi-disease | 8 | null | transformers | 13,276 | ---
language:
- en
tags:
- token-classification
task:
- token classification
license: cc
datasets:
- ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
---
## Model information:
distilbert-base-uncased model fine-tuned using the ncbi_disease dataset from the datasets library.
## Intended uses:
This model is intended to be used for named entity recognition tasks. The model will identify disease entities in text. The model will predict labels based upon the NCBI-disease dataset; please see the dataset information for details.
## Limitations:
Note that the dataset and model may not be fully representative or suitable for all needs. It is recommended that the dataset paper and the base model card be reviewed before using the model:
- [NCBI Disease](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/pdf/nihms557856.pdf)
- [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
## How to use:
Load the model from the library using the following checkpoints:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
# Load the fine-tuned checkpoint together with its token-classification head
tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/distilbert-base-uncased-ft-ncbi-disease")
model = AutoModelForTokenClassification.from_pretrained("sarahmiller137/distilbert-base-uncased-ft-ncbi-disease")
```
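For inference, the standard token-classification pipeline can also be used (a sketch; the example sentence is an assumption, not taken from the dataset):
```python
from transformers import pipeline
# aggregation_strategy groups sub-word tokens back into whole disease mentions
ner = pipeline(
    "token-classification",
    model="sarahmiller137/distilbert-base-uncased-ft-ncbi-disease",
    aggregation_strategy="simple",
)
print(ner("The patient was diagnosed with cystic fibrosis."))
```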
|
wuyue1987/twitter-roberta-base-sentiment-finetuned | 4c00a3fb148e0a62c035b291abd4269648805645 | 2022-03-31T03:10:56.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | wuyue1987 | null | wuyue1987/twitter-roberta-base-sentiment-finetuned | 8 | null | transformers | 13,277 | Entry not found |
clapika2010/training | 22d4b40f2414c01251bd434f83b7c97d30a4e3bc | 2022-04-14T23:10:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | clapika2010 | null | clapika2010/training | 8 | null | transformers | 13,278 | Entry not found |
jaygala24/distilroberta-base-finetuned-fake-news-english | e05fffc6b138a0adb6ba9ccc2ea53af7b9b9ce47 | 2022-04-02T15:52:11.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"en",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | jaygala24 | null | jaygala24/distilroberta-base-finetuned-fake-news-english | 8 | 0 | transformers | 13,279 | ---
license: apache-2.0
language: en
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilroberta-base-finetuned-fake-news-english
results: []
widget:
- text: "Wisconsin has not counted more votes than it has registered voters. This tweet is comparing the vote count from 2020 with the number of registered voters from 2018. When we take a look at Wisconsin’s current total of registered voters, we see that there is nothing fraudulent about the state’s count."
example_title: fake
- text: "Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, Obama was the first African-American president of the United States."
example_title: real
---
# distilroberta-base-finetuned-fake-news-english
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the [fake-and-real news](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0020
- Accuracy: 0.9997
- F1: 0.9997
- Precision: 0.9994
- Recall: 1.0
- Auc: 0.9997
## Intended uses & limitations
The model may not work with articles longer than 512 tokens after preprocessing, as the model's context is restricted to a maximum of 512 tokens per sequence.
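A minimal inference sketch with the standard text-classification pipeline (the label names come from the model's own config; the snippet is an assumption, not the authors' reference code):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="jaygala24/distilroberta-base-finetuned-fake-news-english")
# Example adapted from the widget above
print(classifier("Wisconsin has not counted more votes than it has registered voters."))
```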
## Training and evaluation data
The [fake-and-real news](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) dataset contains a total of 44,898 annotated articles, 21,417 real and 23,481 fake. The dataset was split, with stratification, into train, validation, and test subsets in a 60:20:20 proportion. The model was fine-tuned on the train subset and evaluated on the validation and test subsets.
| Split | # examples |
|:----------:|:----------:|
| train | 17959 |
| validation | 13469 |
| test | 13470 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 224
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.251 | 0.36 | 200 | 0.0030 | 0.9996 | 0.9995 | 0.9995 | 0.9995 | 0.9996 |
| 0.0022 | 0.71 | 400 | 0.0012 | 0.9998 | 0.9998 | 0.9995 | 1.0 | 0.9998 |
| 0.0013 | 1.07 | 600 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0004 | 1.43 | 800 | 0.0015 | 0.9997 | 0.9997 | 0.9994 | 1.0 | 0.9997 |
| 0.0013 | 1.78 | 1000 | 0.0020 | 0.9997 | 0.9997 | 0.9994 | 1.0 | 0.9997 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
magitz/distilbert-base-uncased-finetuned-emotion | 5af6d06131b4020439d4319fa0181975be78cb0e | 2022-03-31T20:48:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | magitz | null | magitz/distilbert-base-uncased-finetuned-emotion | 8 | null | transformers | 13,280 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9267965474109292
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2235
- Accuracy: 0.9265
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8101 | 1.0 | 250 | 0.3177 | 0.9045 | 0.9010 |
| 0.2472 | 2.0 | 500 | 0.2235 | 0.9265 | 0.9268 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.18.3
- Tokenizers 0.11.0
|
blacktree/distilbert-base-uncased-finetuned-sst2 | df806a7ab2ef7a1de7349301eb0c2f2676437408 | 2022-04-04T10:44:22.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | blacktree | null | blacktree/distilbert-base-uncased-finetuned-sst2 | 8 | null | transformers | 13,281 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.5091743119266054
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7027
- Accuracy: 0.5092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6868 | 1.0 | 1053 | 0.7027 | 0.5092 |
| 0.6868 | 2.0 | 2106 | 0.7027 | 0.5092 |
| 0.6867 | 3.0 | 3159 | 0.6970 | 0.5092 |
| 0.687 | 4.0 | 4212 | 0.6992 | 0.5092 |
| 0.6866 | 5.0 | 5265 | 0.6983 | 0.5092 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
hackathon-pln-es/itama | 36503ae449d00ea8e3342e68f422d7f4e259a4dd | 2022-04-04T03:47:32.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | hackathon-pln-es | null | hackathon-pln-es/itama | 8 | 1 | transformers | 13,282 | # Generating answers to AMA questions for professions
The model presented below was built from the [dataset of AMA questions from Reddit (ITAMA-DataSet)](https://huggingface.co/datasets/hackathon-pln-es/ITAMA-DataSet). In particular, questions can be asked about the following professions: `medico`, `psicologo`, `ciencias`, `ingeniero`, `profesor`, `jefe` and `abogado`.
# How to use
Since the model was generated from mT5, the profession must be included as a prefix, followed by the question, in the form:
```
<profesion>: <pregunta>
```
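A minimal usage sketch with this prefix format (the generation settings are assumptions based on the training parameters below, not the authors' reference code):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("hackathon-pln-es/itama")
model = AutoModelForSeq2SeqLM.from_pretrained("hackathon-pln-es/itama")
# Prefix the question with the profession, as described above
inputs = tokenizer("medico: qué es lo que más te gusta de tu trabajo?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, length_penalty=1.5, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```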
## Some example questions:
| Input text | Generated text |
|-------------------------------------------------------|----------------|
| ingeniero: qué es lo que más te gusta de tu trabajo? | Es el lenguaje del tráfico, lo que mas me gusta es el conocimiento de programación. Lo que mas me gusta es la idea de qué diseñar un modelo |
| psicologo: qué es lo que más te gusta de tu trabajo? | Una que lo que más me gusta de verdad es que la persona que se siente tener en serio problemas y de ansiedad, siempre es común que los psicólogos tengan que estar presente para tener en cuenta que no pueden hacerlo bien a la gente |
| abogado: cuanto dinero ganas al año? | No gano tanto dinero que gano, pero si de hecho gano minimo 40 mil pesos al mes. |
| ciencias: cuando dinero ganas al año? | No gano ahí mucho más de un año. |
| medico: cuando dinero ganas al año? | No gano dinero, gano minimo 40 dlrs x hora (minimo tengo 12-18 y tengo unos 34 dlr) |
| profesor: cuando dinero ganas al año? | Literalmente cuando son almuerzos y minimo y no tenes idea |
| jefe: qué me recomiendas hacer? | Actividades placentales, hacer ejercicios y enfrentar a las emergencias |
# Parameters used during training
```
model_args.num_train_epochs = 10
model_args.overwrite_output_dir = True
model_args.fp16 = False
model_args.use_multiprocessing = False
model_args.use_multiprocessing_for_evaluation = False
model_args.use_multiprocessed_decoding = False
model_args.learning_rate=0.001
model_args.train_batch_size = 8
model_args.eval_batch_size = 8
model_args.adafactor_beta1 = 0
model_args.length_penalty=1.5
model_args.max_length=100
model_args.max_seq_length = 100
``` |
anton-l/xtreme_s_xlsr_300m_fleurs_langid_quicker_warmup | e6717208c69bb5f4f3cdb31941aeea7eb4383104 | 2022-04-05T23:16:38.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"dataset:xtreme_s",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | anton-l | null | anton-l/xtreme_s_xlsr_300m_fleurs_langid_quicker_warmup | 8 | null | transformers | 13,283 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- accuracy
model-index:
- name: xtreme_s_xlsr_300m_fleurs_langid_quicker_warmup
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_fleurs_langid_quicker_warmup
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the xtreme_s dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9765
- Accuracy: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.6644 | 0.26 | 1000 | 0.3071 | 3.2482 |
| 0.394 | 0.52 | 2000 | 0.5948 | 1.8833 |
| 0.1034 | 0.78 | 3000 | 0.6297 | 1.5852 |
| 0.1088 | 1.04 | 4000 | 0.5992 | 1.7903 |
| 0.0032 | 1.3 | 5000 | 0.6356 | 1.6219 |
| 0.1813 | 1.56 | 6000 | 0.5788 | 1.8168 |
| 0.0654 | 1.82 | 7000 | 0.6234 | 1.6089 |
| 0.0144 | 2.08 | 8000 | 0.6424 | 1.6071 |
| 0.0019 | 2.34 | 9000 | 0.5822 | 1.7820 |
| 0.0159 | 2.6 | 10000 | 0.6043 | 1.8407 |
| 0.0029 | 2.86 | 11000 | 0.5845 | 1.8600 |
| 0.0458 | 3.12 | 12000 | 0.6299 | 1.6591 |
| 0.013 | 3.38 | 13000 | 0.5903 | 2.0788 |
| 0.003 | 3.64 | 14000 | 0.6188 | 1.7645 |
| 0.0015 | 3.9 | 15000 | 0.6328 | 1.7739 |
| 0.0003 | 4.16 | 16000 | 0.6072 | 1.8742 |
| 0.0005 | 4.42 | 17000 | 0.6231 | 1.7102 |
| 0.006 | 4.68 | 18000 | 0.6122 | 1.6909 |
| 0.2367 | 4.93 | 19000 | 0.6029 | 1.9891 |
| 0.005 | 5.19 | 20000 | 0.6220 | 1.7245 |
| 0.0813 | 5.45 | 21000 | 0.5739 | 2.0495 |
| 0.1233 | 5.71 | 22000 | 0.6104 | 1.9601 |
| 0.0003 | 5.97 | 23000 | 0.5924 | 1.8881 |
| 0.0003 | 6.23 | 24000 | 0.6055 | 1.9568 |
| 0.0001 | 6.49 | 25000 | 0.6086 | 1.8489 |
| 0.2198 | 6.75 | 26000 | 0.6292 | 1.8048 |
| 0.0261 | 7.01 | 27000 | 0.5989 | 2.0284 |
| 0.0001 | 7.27 | 28000 | 0.6431 | 1.7323 |
| 0.0001 | 7.53 | 29000 | 0.6310 | 1.9329 |
| 0.0011 | 7.79 | 30000 | 0.6107 | 1.9256 |
| 0.0933 | 8.05 | 31000 | 0.5896 | 2.3915 |
| 0.0001 | 8.31 | 32000 | 0.6021 | 1.9948 |
| 0.0003 | 8.57 | 33000 | 0.6126 | 1.9518 |
| 0.0005 | 8.83 | 34000 | 0.6243 | 1.8935 |
| 0.0 | 9.09 | 35000 | 0.6144 | 2.0177 |
| 0.0002 | 9.35 | 36000 | 0.6174 | 2.0234 |
| 0.0 | 9.61 | 37000 | 0.6216 | 1.9568 |
| 0.0 | 9.87 | 38000 | 0.6199 | 1.9765 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
efederici/cross-encoder-umberto-stsb | 7fa457e84823eb8a469800a127280212263f2409 | 2022-04-04T16:09:44.000Z | [
"pytorch",
"camembert",
"text-classification",
"it",
"dataset:stsb_multi_mt",
"transformers",
"cross-encoder",
"sentence-similarity"
]
| text-classification | false | efederici | null | efederici/cross-encoder-umberto-stsb | 8 | null | transformers | 13,284 | ---
pipeline_tag: text-classification
language:
- it
datasets:
- stsb_multi_mt
tags:
- cross-encoder
- sentence-similarity
- transformers
---
# Cross-Encoder
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
<p align="center">
<img src="https://user-images.githubusercontent.com/7140210/72913702-d55a8480-3d3d-11ea-99fc-f2ef29af4e72.jpg" width="700"> </br>
Marco Lodola, Monument to Umberto Eco, Alessandria 2019
</p>
## Training Data
This model was trained on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train). The model predicts a score between 0 and 1 indicating the semantic similarity of two sentences.
## Usage and Performance
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('efederici/cross-encoder-umberto-stsb')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
The model will predict scores for the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`. |
JNK789/distilbert-base-uncased-finetuned-tweets-emoji-dataset | 2a8c2d7813b5115cb7f5b610603493851d63fa96 | 2022-04-05T07:28:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | JNK789 | null | JNK789/distilbert-base-uncased-finetuned-tweets-emoji-dataset | 8 | null | transformers | 13,285 | Entry not found |
Graphcore/hubert-base-superb-ks | 198d5b97422b8ebed90bd39fe28c73a63f768dcd | 2022-05-23T23:18:24.000Z | [
"pytorch",
"tensorboard",
"hubert",
"text-classification",
"dataset:superb",
"transformers",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Graphcore | null | Graphcore/hubert-base-superb-ks | 8 | null | transformers | 13,286 | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: hubert-base-superb-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-superb-ks
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0848
- Accuracy: 0.9822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cammiemw/marco-cw09 | 416d6f29b0b9b91a045f7d9565476487069b1c28 | 2022-04-05T19:40:26.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | cammiemw | null | cammiemw/marco-cw09 | 8 | null | transformers | 13,287 | Entry not found |
kyryl0s/gpt2-uk-xxs | 1e407fc557d439b3e3cc74615a5fd186b8434bf9 | 2022-05-02T09:14:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"uk",
"transformers",
"license:afl-3.0"
]
| text-generation | false | kyryl0s | null | kyryl0s/gpt2-uk-xxs | 8 | null | transformers | 13,288 | ---
license: afl-3.0
language: uk
---
## GPT2 being trained on Ukrainian news.
### General info:
The model is not ready yet but I'm working on it. It also has a relatively small context window, which makes it quite uninteresting.
### Example of usage:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("kyryl0s/gpt2-uk-xxs")
model = AutoModelForCausalLM.from_pretrained("kyryl0s/gpt2-uk-xxs")
input_ids = tokenizer.encode("Путін — ", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
input_ids,
do_sample=True,
num_return_sequences=3,
max_length=50
)
for i, out in enumerate(outputs):
print("{}: {}".format(i, tokenizer.decode(out)))
``` |
raileymontalan/distilbert-base-casedfinetuned-fake-news-detection | 02d0ad01e82617cbfa3effe493d857035c1c719c | 2022-04-06T17:12:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | raileymontalan | null | raileymontalan/distilbert-base-casedfinetuned-fake-news-detection | 8 | null | transformers | 13,289 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-casedfinetuned-fake-news-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-casedfinetuned-fake-news-detection
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the [Fake and Real News](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0019
- F1: 0.9998
- Accuracy: 0.9998
The [Fake and Real News](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) dataset was used. It was split, with stratification, into train-val-test sets (60/20/20).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 1684 | 0.0021 | 0.9998 | 0.9998 |
| No log | 2.0 | 3368 | 0.0019 | 0.9998 | 0.9998 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
abdusahmbzuai/aradia-ctc-distilhubert-ft | 20e8824d43e2f76b2764cdebe3151510740cc67d | 2022-04-07T02:06:55.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"transformers",
"abdusahmbzuai/arabic_speech_massive_sm",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | abdusahmbzuai | null | abdusahmbzuai/aradia-ctc-distilhubert-ft | 8 | null | transformers | 13,290 | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- abdusahmbzuai/arabic_speech_massive_sm
- generated_from_trainer
model-index:
- name: aradia-ctc-distilhubert-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aradia-ctc-distilhubert-ft
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_SM - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7114
- Wer: 0.8908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.43 | 100 | 4.4129 | 1.0 |
| No log | 0.87 | 200 | 3.5927 | 1.0 |
| No log | 1.3 | 300 | 3.3780 | 1.0 |
| No log | 1.74 | 400 | 3.0830 | 1.0 |
| 5.3551 | 2.17 | 500 | 2.6278 | 0.9999 |
| 5.3551 | 2.61 | 600 | 1.8359 | 1.0000 |
| 5.3551 | 3.04 | 700 | 1.7878 | 0.9914 |
| 5.3551 | 3.48 | 800 | 1.5219 | 0.9875 |
| 5.3551 | 3.91 | 900 | 1.4348 | 0.9879 |
| 1.7199 | 4.35 | 1000 | 1.4354 | 0.9644 |
| 1.7199 | 4.78 | 1100 | 1.5210 | 0.9519 |
| 1.7199 | 5.22 | 1200 | 1.3607 | 0.9475 |
| 1.7199 | 5.65 | 1300 | 1.3839 | 0.9343 |
| 1.7199 | 6.09 | 1400 | 1.2806 | 0.8944 |
| 1.2342 | 6.52 | 1500 | 1.3036 | 0.9011 |
| 1.2342 | 6.95 | 1600 | 1.3704 | 0.9072 |
| 1.2342 | 7.39 | 1700 | 1.2981 | 0.8891 |
| 1.2342 | 7.82 | 1800 | 1.2786 | 0.8733 |
| 1.2342 | 8.26 | 1900 | 1.2897 | 0.8867 |
| 0.9831 | 8.69 | 2000 | 1.4436 | 0.8780 |
| 0.9831 | 9.13 | 2100 | 1.3680 | 0.8873 |
| 0.9831 | 9.56 | 2200 | 1.3471 | 0.8692 |
| 0.9831 | 10.0 | 2300 | 1.3725 | 0.8729 |
| 0.9831 | 10.43 | 2400 | 1.4439 | 0.8771 |
| 0.8071 | 10.87 | 2500 | 1.5114 | 0.8928 |
| 0.8071 | 11.3 | 2600 | 1.6156 | 0.8958 |
| 0.8071 | 11.74 | 2700 | 1.4381 | 0.8749 |
| 0.8071 | 12.17 | 2800 | 1.5088 | 0.8717 |
| 0.8071 | 12.61 | 2900 | 1.5486 | 0.8813 |
| 0.6321 | 13.04 | 3000 | 1.4536 | 0.8884 |
| 0.6321 | 13.48 | 3100 | 1.4679 | 0.8947 |
| 0.6321 | 13.91 | 3200 | 1.5628 | 0.9117 |
| 0.6321 | 14.35 | 3300 | 1.5831 | 0.8716 |
| 0.6321 | 14.78 | 3400 | 1.6733 | 0.8702 |
| 0.4998 | 15.22 | 3500 | 1.8225 | 0.8665 |
| 0.4998 | 15.65 | 3600 | 1.8558 | 0.8732 |
| 0.4998 | 16.09 | 3700 | 1.7513 | 0.8766 |
| 0.4998 | 16.52 | 3800 | 1.8562 | 0.8753 |
| 0.4998 | 16.95 | 3900 | 1.9018 | 0.8704 |
| 0.4421 | 17.39 | 4000 | 1.9341 | 0.8789 |
| 0.4421 | 17.82 | 4100 | 1.9582 | 0.8781 |
| 0.4421 | 18.26 | 4200 | 1.8863 | 0.8821 |
| 0.4421 | 18.69 | 4300 | 1.9366 | 0.8847 |
| 0.4421 | 19.13 | 4400 | 2.1902 | 0.8721 |
| 0.3712 | 19.56 | 4500 | 2.1641 | 0.8670 |
| 0.3712 | 20.0 | 4600 | 2.1639 | 0.8776 |
| 0.3712 | 20.43 | 4700 | 2.2695 | 0.9030 |
| 0.3712 | 20.87 | 4800 | 2.1909 | 0.8937 |
| 0.3712 | 21.3 | 4900 | 2.1606 | 0.8959 |
| 0.3067 | 21.74 | 5000 | 2.1756 | 0.8943 |
| 0.3067 | 22.17 | 5100 | 2.4092 | 0.8773 |
| 0.3067 | 22.61 | 5200 | 2.4991 | 0.8721 |
| 0.3067 | 23.04 | 5300 | 2.3340 | 0.8910 |
| 0.3067 | 23.48 | 5400 | 2.3567 | 0.8946 |
| 0.2764 | 23.91 | 5500 | 2.3215 | 0.8897 |
| 0.2764 | 24.35 | 5600 | 2.4824 | 0.9002 |
| 0.2764 | 24.78 | 5700 | 2.4585 | 0.8963 |
| 0.2764 | 25.22 | 5800 | 2.5804 | 0.8879 |
| 0.2764 | 25.65 | 5900 | 2.5814 | 0.8903 |
| 0.2593 | 26.09 | 6000 | 2.5374 | 0.8868 |
| 0.2593 | 26.52 | 6100 | 2.5346 | 0.8922 |
| 0.2593 | 26.95 | 6200 | 2.5465 | 0.8873 |
| 0.2593 | 27.39 | 6300 | 2.6002 | 0.8919 |
| 0.2593 | 27.82 | 6400 | 2.6102 | 0.8928 |
| 0.227 | 28.26 | 6500 | 2.6925 | 0.8914 |
| 0.227 | 28.69 | 6600 | 2.6981 | 0.8913 |
| 0.227 | 29.13 | 6700 | 2.6872 | 0.8891 |
| 0.227 | 29.56 | 6800 | 2.7015 | 0.8897 |
| 0.227 | 30.0 | 6900 | 2.7114 | 0.8908 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
junaidamk/MuRIL-WIKINER-Malayalam | f3450f62ab122bbf34fe6837d8eaf63f55e70385 | 2022-04-07T01:27:27.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | junaidamk | null | junaidamk/MuRIL-WIKINER-Malayalam | 8 | 0 | transformers | 13,291 | Entry not found |
GioReg/AlbertoBertsentipol | c14fb16800e7b06f733fdff5a9476f5b7668a832 | 2022-04-07T10:10:03.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | GioReg | null | GioReg/AlbertoBertsentipol | 8 | null | transformers | 13,292 | ---
tags:
- generated_from_trainer
model-index:
- name: AlbertoBertsentipol
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AlbertoBertsentipol
This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
SupritiVijay/fake-news-detector | 7b25153bc61c54e57b90672c02ca60b28b9aa084 | 2022-04-07T11:34:49.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | SupritiVijay | null | SupritiVijay/fake-news-detector | 8 | null | transformers | 13,293 | Entry not found |
GioReg/BertMultiHateSpeech | 57b582184be0564dbfebada2c5144b08672507af | 2022-04-15T11:10:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | GioReg | null | GioReg/BertMultiHateSpeech | 8 | null | transformers | 13,294 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: BertMultiHateSpeech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertMultiHateSpeech
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7496
- Accuracy: 0.74
- F1: 0.4841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Hodiden/autotrain-TestProj-722121991 | 05c72debd08961a29cbac03157ec5d51de10fb0d | 2022-04-09T19:21:44.000Z | [
"pytorch",
"t5",
"text2text-generation",
"unk",
"dataset:Hodiden/autotrain-data-TestProj",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | Hodiden | null | Hodiden/autotrain-TestProj-722121991 | 8 | null | transformers | 13,295 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Hodiden/autotrain-data-TestProj
co2_eq_emissions: 8.052949236815056
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 722121991
- CO2 Emissions (in grams): 8.052949236815056
## Validation Metrics
- Loss: 1.123626708984375
- Rouge1: 56.1275
- Rouge2: 33.5648
- RougeL: 51.986
- RougeLsum: 51.9943
- Gen Len: 13.2823
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Hodiden/autotrain-TestProj-722121991
``` |
HenryHXR/t5-base-finetuned-scitldr-only-abstract | 40c687aa2b1aa125cbd82ebc1227c4a72ce2dc8f | 2022-04-09T08:16:33.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | HenryHXR | null | HenryHXR/t5-base-finetuned-scitldr-only-abstract | 8 | null | transformers | 13,296 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-scitldr-only-abstract
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-scitldr-only-abstract
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3365
- Rouge1: 34.3531
- Rouge2: 15.7554
- Rougel: 29.8918
- Rougelsum: 29.9514
- Gen Len: 18.7658
## Model description
More information needed
## Intended uses & limitations
More information needed
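
No documented usage is provided, so here is a minimal, hedged sketch for generating a TLDR from a paper abstract; the placeholder abstract, the "summarize: " prefix, and the generation length are assumptions rather than details from the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "HenryHXR/t5-base-finetuned-scitldr-only-abstract"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

abstract = "We present a new method for ..."  # placeholder abstract text
# The "summarize: " prefix follows common T5 usage; drop it if the model
# was fine-tuned without a task prefix.
inputs = tokenizer("summarize: " + abstract, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=20)  # roughly matches the reported Gen Len
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```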
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.76 | 1.0 | 996 | 2.3649 | 34.0043 | 15.5031 | 29.4997 | 29.5576 | 18.7835 |
| 2.4843 | 2.0 | 1992 | 2.3365 | 34.3531 | 15.7554 | 29.8918 | 29.9514 | 18.7658 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Jatin-WIAI/gujarati_relevance_clf | 4c751d515a7800a05e565a62938348df92ecf3aa | 2022-04-11T08:11:38.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | false | Jatin-WIAI | null | Jatin-WIAI/gujarati_relevance_clf | 8 | null | transformers | 13,297 | Entry not found |
Vipitis/CodeGPT-small-java-adaptedGPT2-transfer-shadertoys | 9a3f49b9fdc2e29ce1ee4550b43e7ee1ea402bcc | 2022-04-20T13:53:46.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | Vipitis | null | Vipitis/CodeGPT-small-java-adaptedGPT2-transfer-shadertoys | 8 | null | transformers | 13,298 | fine-tuned for less than a full epoch to generate shadercode (with Shadertoy.com style uniforms).
dataset used: https://huggingface.co/datasets/Vipitis/Shadertoys-bimodal |
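
As a minimal, hedged generation sketch (the GLSL prompt below is an illustrative Shadertoy-style entry point, not taken from the model card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a plain text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="Vipitis/CodeGPT-small-java-adaptedGPT2-transfer-shadertoys",
)

# Illustrative Shadertoy-style prompt; completion quality is not guaranteed.
prompt = "void mainImage( out vec4 fragColor, in vec2 fragCoord )\n{\n"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```
|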
fmesa/mi-modelo-bacan-test | bbb334f2e491c9b7dc5e3f260a0eef5c45b15a5a | 2022-04-12T02:55:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | fmesa | null | fmesa/mi-modelo-bacan-test | 8 | null | transformers | 13,299 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: mi-modelo-bacan-test
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8825396825396825
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi-modelo-bacan-test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3318
- Accuracy: 0.8767
- F1: 0.8825
## Model description
More information needed
## Intended uses & limitations
More information needed
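
No documented usage is provided, so here is a minimal, hedged inference sketch; the example review is illustrative, and the labels follow whatever convention the classifier head uses (typically LABEL_0/LABEL_1 unless renamed):

```python
from transformers import pipeline

# Load the IMDB sentiment classifier fine-tuned from distilbert-base-uncased.
classifier = pipeline("text-classification", model="fmesa/mi-modelo-bacan-test")

# Illustrative movie review.
print(classifier("This movie was absolutely wonderful!"))
```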
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|