Dataset columns (type and observed range):

- `modelId`: string, length 4–112
- `sha`: string, length 40
- `lastModified`: string, length 24
- `tags`: sequence
- `pipeline_tag`: string, 29 classes
- `private`: bool, 1 class
- `author`: string, length 2–38, nullable (⌀)
- `config`: null
- `id`: string, length 4–112
- `downloads`: float64, 0–36.8M, nullable (⌀)
- `likes`: float64, 0–712, nullable (⌀)
- `library_name`: string, 17 classes
- `__index_level_0__`: int64, 0–38.5k
- `readme`: string, length 0–186k

modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Yarn/finetuned | 9aef62b1c0f666203209fbc394e8407ab6cec7fd | 2022-06-24T09:09:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Yarn | null | Yarn/finetuned | 4 | null | transformers | 20,300 | Entry not found |
domenicrosati/BioM-ALBERT-xxlarge-finetuned-DAGPap22 | b9c069fa5f6b5b6c3c8d469de931807239fa7cb8 | 2022-06-24T19:54:01.000Z | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/BioM-ALBERT-xxlarge-finetuned-DAGPap22 | 4 | null | transformers | 20,301 | ---
tags:
- text-classification
- generated_from_trainer
model-index:
- name: BioM-ALBERT-xxlarge-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioM-ALBERT-xxlarge-finetuned-DAGPap22
This model is a fine-tuned version of [sultan/BioM-ALBERT-xxlarge](https://huggingface.co/sultan/BioM-ALBERT-xxlarge) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
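A minimal sketch of the corresponding configuration, assuming the 🤗 `Trainer` API; the `output_dir` is illustrative and `fp16=True` stands in for Native AMP:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BioM-ALBERT-xxlarge-finetuned-DAGPap22",  # illustrative path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,        # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,     # epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,             # mixed_precision_training: Native AMP
)
```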
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509 | 1b27fed95d1885253cb841cbaa55ef771dec18dd | 2022-06-24T17:25:50.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:deepesh0x/autotrain-data-bert_wikipedia_sst_2",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | deepesh0x | null | deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509 | 4 | null | transformers | 20,302 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-bert_wikipedia_sst_2
co2_eq_emissions: 17.051424016530056
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1034235509
- CO2 Emissions (in grams): 17.051424016530056
## Validation Metrics
- Loss: 0.14414940774440765
- Accuracy: 0.954046028210839
- Precision: 0.9583831937242387
- Recall: 0.9592760180995475
- AUC: 0.9872623710421541
- F1: 0.9588293980711673
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509
```
Or you can use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513 | 5a5090f6edb3710eed1a5482cb3c10ee28cf4157 | 2022-06-24T17:25:28.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:deepesh0x/autotrain-data-bert_wikipedia_sst_2",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | deepesh0x | null | deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513 | 4 | null | transformers | 20,303 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-bert_wikipedia_sst_2
co2_eq_emissions: 16.686945384446037
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1034235513
- CO2 Emissions (in grams): 16.686945384446037
## Validation Metrics
- Loss: 0.14450643956661224
- Accuracy: 0.9527839643652561
- Precision: 0.9565852363250132
- Recall: 0.9588767633750332
- AUC: 0.9872179498202862
- F1: 0.9577296291373122
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513
```
Or you can use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235513", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
deepesh0x/autotrain-finetunedmodelbert-1034335535 | 4b1235dd4479ab9c3afe79d6fa78b73447afa171 | 2022-06-24T18:00:54.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:deepesh0x/autotrain-data-finetunedmodelbert",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | deepesh0x | null | deepesh0x/autotrain-finetunedmodelbert-1034335535 | 4 | null | transformers | 20,304 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-finetunedmodelbert
co2_eq_emissions: 7.1805069109958835
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1034335535
- CO2 Emissions (in grams): 7.1805069109958835
## Validation Metrics
- Loss: 0.05866553634405136
- Accuracy: 0.9793615441722346
- Precision: 0.9811170212765957
- Recall: 0.9819004524886877
- AUC: 0.9976735725727466
- F1: 0.9815085805507516
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-finetunedmodelbert-1034335535
```
Or you can use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-finetunedmodelbert-1034335535", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-finetunedmodelbert-1034335535", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
deepesh0x/autotrain-finetunedmodel1-1034535555 | 18a419b24858c2ce8550c4809b95c0756cd56942 | 2022-06-24T18:57:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:deepesh0x/autotrain-data-finetunedmodel1",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | deepesh0x | null | deepesh0x/autotrain-finetunedmodel1-1034535555 | 4 | null | transformers | 20,305 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-finetunedmodel1
co2_eq_emissions: 29.194903746653306
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1034535555
- CO2 Emissions (in grams): 29.194903746653306
## Validation Metrics
- Loss: 0.16423887014389038
- Accuracy: 0.9402375649591685
- Precision: 0.94876254180602
- Recall: 0.9438381687516636
- AUC: 0.9843968335444757
- F1: 0.9462939488958569
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-finetunedmodel1-1034535555
```
Or you can use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-finetunedmodel1-1034535555", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-finetunedmodel1-1034535555", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
domenicrosati/deberta-v3-xsmall-finetuned-DAGPap22 | 87c52edc4beeeee10f2d6ee77e18b69ff0b16fba | 2022-06-25T00:13:38.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-xsmall-finetuned-DAGPap22 | 4 | null | transformers | 20,306 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-xsmall-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-xsmall-finetuned-DAGPap22
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0798
- Accuracy: 0.9907
- F1: 0.9934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 402 | 0.1626 | 0.9477 | 0.9616 |
| 0.4003 | 2.0 | 804 | 0.0586 | 0.9794 | 0.9853 |
| 0.1075 | 3.0 | 1206 | 0.0342 | 0.9907 | 0.9933 |
| 0.0581 | 4.0 | 1608 | 0.1140 | 0.9776 | 0.9838 |
| 0.0245 | 5.0 | 2010 | 0.1409 | 0.9776 | 0.9842 |
| 0.0245 | 6.0 | 2412 | 0.0732 | 0.9832 | 0.9881 |
| 0.0167 | 7.0 | 2814 | 0.1996 | 0.9682 | 0.9778 |
| 0.0139 | 8.0 | 3216 | 0.1219 | 0.9850 | 0.9894 |
| 0.006 | 9.0 | 3618 | 0.0670 | 0.9907 | 0.9934 |
| 0.0067 | 10.0 | 4020 | 0.1036 | 0.9869 | 0.9907 |
| 0.0067 | 11.0 | 4422 | 0.1220 | 0.9776 | 0.9838 |
| 0.0041 | 12.0 | 4824 | 0.1768 | 0.9776 | 0.9839 |
| 0.0007 | 13.0 | 5226 | 0.0943 | 0.9888 | 0.9920 |
| 0.0 | 14.0 | 5628 | 0.0959 | 0.9907 | 0.9934 |
| 0.0054 | 15.0 | 6030 | 0.0915 | 0.9888 | 0.9921 |
| 0.0054 | 16.0 | 6432 | 0.1618 | 0.9794 | 0.9855 |
| 0.0019 | 17.0 | 6834 | 0.0794 | 0.9907 | 0.9934 |
| 0.0 | 18.0 | 7236 | 0.0799 | 0.9907 | 0.9934 |
| 0.0 | 19.0 | 7638 | 0.0797 | 0.9907 | 0.9934 |
| 0.0 | 20.0 | 8040 | 0.0798 | 0.9907 | 0.9934 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
abhishek/convnext-tiny-finetuned-dogfood | b16339411e0dc86d6fb28d08e070f7d75d50fa6e | 2022-06-27T11:01:31.000Z | [
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"dataset:imagefolder",
"dataset:lewtun/dog_food",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | abhishek | null | abhishek/convnext-tiny-finetuned-dogfood | 4 | null | transformers | 20,307 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
- lewtun/dog_food
metrics:
- accuracy
model-index:
- name: convnext-tiny-finetuned-dogfood
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
args: lewtun--dog_food
metrics:
- name: Accuracy
type: accuracy
value: 0.7253333333333334
- task:
type: image-classification
name: Image Classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
config: lewtun--dog_food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.6866666666666666
verified: true
- name: Precision Macro
type: precision
value: 0.7181484576740136
verified: true
- name: Precision Micro
type: precision
value: 0.6866666666666666
verified: true
- name: Precision Weighted
type: precision
value: 0.7235392474854474
verified: true
- name: Recall Macro
type: recall
value: 0.7006250320552644
verified: true
- name: Recall Micro
type: recall
value: 0.6866666666666666
verified: true
- name: Recall Weighted
type: recall
value: 0.6866666666666666
verified: true
- name: F1 Macro
type: f1
value: 0.6690027379410202
verified: true
- name: F1 Micro
type: f1
value: 0.6866666666666666
verified: true
- name: F1 Weighted
type: f1
value: 0.6647526870157503
verified: true
- name: loss
type: loss
value: 0.9549381732940674
verified: true
- name: matthews_correlation
type: matthews_correlation
value: 0.5737269361889515
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-finetuned-dogfood
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the lewtun/dog_food dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9277
- Accuracy: 0.7253
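The card omits a usage snippet; a minimal inference sketch with the `image-classification` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Returns a list of {"label": ..., "score": ...} dicts for the input image.
classifier = pipeline("image-classification", model="abhishek/convnext-tiny-finetuned-dogfood")
print(classifier("dog.jpg"))  # "dog.jpg" is a placeholder path
```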
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0681 | 1.0 | 16 | 0.9125 | 0.7422 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
asahi417/lmqg-mbart-large-cc25-esquad | 48607a7985d30c2436644a5d8f4b6c515b446f3a | 2022-06-26T14:11:13.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mbart-large-cc25-esquad | 4 | null | transformers | 20,308 | Entry not found |
dasolj/wav2vec2-base-timit-demo-google-colab | 314eaddc13d531c5549da3c80b133b732d082752 | 2022-06-27T08:50:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | dasolj | null | dasolj/wav2vec2-base-timit-demo-google-colab | 4 | null | transformers | 20,309 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5501
- Wer: 0.3424
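Since the card has no usage example, here is a minimal transcription sketch via the `automatic-speech-recognition` pipeline (the audio path is a placeholder; 16 kHz mono input is assumed, as for wav2vec2-base):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="dasolj/wav2vec2-base-timit-demo-google-colab")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```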
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5448 | 1.0 | 500 | 2.5044 | 1.0 |
| 1.0167 | 2.01 | 1000 | 0.5435 | 0.5278 |
| 0.4453 | 3.01 | 1500 | 0.4450 | 0.4534 |
| 0.3 | 4.02 | 2000 | 0.4401 | 0.4245 |
| 0.2304 | 5.02 | 2500 | 0.4146 | 0.4022 |
| 0.1889 | 6.02 | 3000 | 0.4241 | 0.3927 |
| 0.1573 | 7.03 | 3500 | 0.4545 | 0.3878 |
| 0.1363 | 8.03 | 4000 | 0.4936 | 0.3940 |
| 0.1213 | 9.04 | 4500 | 0.4964 | 0.3806 |
| 0.108 | 10.04 | 5000 | 0.4931 | 0.3826 |
| 0.0982 | 11.04 | 5500 | 0.5373 | 0.3778 |
| 0.0883 | 12.05 | 6000 | 0.4978 | 0.3733 |
| 0.0835 | 13.05 | 6500 | 0.5189 | 0.3728 |
| 0.0748 | 14.06 | 7000 | 0.4608 | 0.3692 |
| 0.068 | 15.06 | 7500 | 0.4827 | 0.3608 |
| 0.0596 | 16.06 | 8000 | 0.5022 | 0.3661 |
| 0.056 | 17.07 | 8500 | 0.5482 | 0.3646 |
| 0.0565 | 18.07 | 9000 | 0.5158 | 0.3573 |
| 0.0487 | 19.08 | 9500 | 0.4910 | 0.3513 |
| 0.0444 | 20.08 | 10000 | 0.5771 | 0.3580 |
| 0.045 | 21.08 | 10500 | 0.5160 | 0.3539 |
| 0.0363 | 22.09 | 11000 | 0.5367 | 0.3503 |
| 0.0313 | 23.09 | 11500 | 0.5773 | 0.3500 |
| 0.0329 | 24.1 | 12000 | 0.5683 | 0.3508 |
| 0.0297 | 25.1 | 12500 | 0.5355 | 0.3464 |
| 0.0272 | 26.1 | 13000 | 0.5317 | 0.3450 |
| 0.0256 | 27.11 | 13500 | 0.5602 | 0.3443 |
| 0.0242 | 28.11 | 14000 | 0.5586 | 0.3419 |
| 0.0239 | 29.12 | 14500 | 0.5501 | 0.3424 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
hidude562/gpt2-discordgpt2 | 0f165f6cebc299e3396170f76161988c9444937c | 2022-06-27T20:52:25.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | hidude562 | null | hidude562/gpt2-discordgpt2 | 4 | null | transformers | 20,310 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-discordgpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-discordgpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.3032
- eval_runtime: 59.2004
- eval_samples_per_second: 274.542
- eval_steps_per_second: 34.324
- epoch: 0.26
- step: 25500
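A minimal generation sketch (the prompt and sampling settings are illustrative, not taken from the card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="hidude562/gpt2-discordgpt2")
print(generator("hello everyone,", max_new_tokens=40, num_return_sequences=1))
```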
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
deepesh0x/autotrain-glue1-1046836019 | 135e2ddd7d319c950b55cc9daa9ec449662fd7a4 | 2022-06-27T23:59:33.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:deepesh0x/autotrain-data-glue1",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | deepesh0x | null | deepesh0x/autotrain-glue1-1046836019 | 4 | null | transformers | 20,311 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-glue1
co2_eq_emissions: 3.869994913020229
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1046836019
- CO2 Emissions (in grams): 3.869994913020229
## Validation Metrics
- Loss: 0.626447856426239
- Accuracy: 0.6606574761399788
- Precision: 0.6925845932325414
- Recall: 0.8187234042553192
- AUC: 0.656404823892031
- F1: 0.750390015600624
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-glue1-1046836019
```
Or you can use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-glue1-1046836019", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-glue1-1046836019", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
profoz/covid | ed3b277cf1aea7c00274d3b846b89148db7d8530 | 2022-07-10T08:48:46.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | profoz | null | profoz/covid | 4 | null | transformers | 20,312 | Entry not found |
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1 | e42421a44aa7d6070c3abf1909cd316befa88c29 | 2022-06-29T01:00:45.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | gary109 | null | gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1 | 4 | 1 | transformers | 20,313 | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2143
- Wer: 0.1211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2609 | 1.0 | 280 | 0.2313 | 0.1376 |
| 0.2297 | 2.0 | 560 | 0.2240 | 0.1397 |
| 0.1951 | 3.0 | 840 | 0.2280 | 0.1361 |
| 0.1816 | 4.0 | 1120 | 0.2215 | 0.1282 |
| 0.1634 | 5.0 | 1400 | 0.2180 | 0.1240 |
| 0.1338 | 6.0 | 1680 | 0.2226 | 0.1241 |
| 0.1411 | 7.0 | 1960 | 0.2143 | 0.1211 |
| 0.1143 | 8.0 | 2240 | 0.2181 | 0.1174 |
| 0.1127 | 9.0 | 2520 | 0.2215 | 0.1167 |
| 0.105 | 10.0 | 2800 | 0.2196 | 0.1160 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
RodrigoGuerra/bert-base-spanish-wwm-uncased-finetuned-clinical | 6259856a8c216613efa38d5e2e8d2b3706b0fee7 | 2022-06-29T05:26:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | RodrigoGuerra | null | RodrigoGuerra/bert-base-spanish-wwm-uncased-finetuned-clinical | 4 | null | transformers | 20,314 | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-spanish-wwm-uncased-finetuned-clinical
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-uncased-finetuned-clinical
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7962
- F1: 0.1081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 1.1202 | 1.0 | 2007 | 1.0018 | 0.0062 |
| 1.0153 | 2.0 | 4014 | 0.9376 | 0.0166 |
| 0.9779 | 3.0 | 6021 | 0.9026 | 0.0342 |
| 0.9598 | 4.0 | 8028 | 0.8879 | 0.0337 |
| 0.9454 | 5.0 | 10035 | 0.8699 | 0.0598 |
| 0.9334 | 6.0 | 12042 | 0.8546 | 0.0682 |
| 0.9263 | 7.0 | 14049 | 0.8533 | 0.0551 |
| 0.9279 | 8.0 | 16056 | 0.8538 | 0.0715 |
| 0.9184 | 9.0 | 18063 | 0.8512 | 0.0652 |
| 0.9151 | 10.0 | 20070 | 0.8313 | 0.0789 |
| 0.9092 | 11.0 | 22077 | 0.8299 | 0.0838 |
| 0.9083 | 12.0 | 24084 | 0.8331 | 0.0718 |
| 0.9057 | 13.0 | 26091 | 0.8319 | 0.0719 |
| 0.9018 | 14.0 | 28098 | 0.8133 | 0.0969 |
| 0.9068 | 15.0 | 30105 | 0.8234 | 0.0816 |
| 0.9034 | 16.0 | 32112 | 0.8151 | 0.0899 |
| 0.9008 | 17.0 | 34119 | 0.8145 | 0.0967 |
| 0.8977 | 18.0 | 36126 | 0.8168 | 0.0891 |
| 0.898 | 19.0 | 38133 | 0.8167 | 0.0818 |
| 0.8956 | 20.0 | 40140 | 0.8076 | 0.1030 |
| 0.8983 | 21.0 | 42147 | 0.8129 | 0.0867 |
| 0.896 | 22.0 | 44154 | 0.8118 | 0.0892 |
| 0.8962 | 23.0 | 46161 | 0.8066 | 0.1017 |
| 0.8917 | 24.0 | 48168 | 0.8154 | 0.0908 |
| 0.8923 | 25.0 | 50175 | 0.8154 | 0.0897 |
| 0.8976 | 26.0 | 52182 | 0.8089 | 0.0910 |
| 0.8926 | 27.0 | 54189 | 0.8069 | 0.0947 |
| 0.8911 | 28.0 | 56196 | 0.8170 | 0.0882 |
| 0.8901 | 29.0 | 58203 | 0.7991 | 0.1112 |
| 0.8934 | 30.0 | 60210 | 0.7996 | 0.1112 |
| 0.8903 | 31.0 | 62217 | 0.8049 | 0.0950 |
| 0.8924 | 32.0 | 64224 | 0.8116 | 0.0951 |
| 0.8887 | 33.0 | 66231 | 0.7982 | 0.1075 |
| 0.8922 | 34.0 | 68238 | 0.8013 | 0.1025 |
| 0.8871 | 35.0 | 70245 | 0.8064 | 0.0979 |
| 0.8913 | 36.0 | 72252 | 0.8108 | 0.0909 |
| 0.8924 | 37.0 | 74259 | 0.8081 | 0.0889 |
| 0.8848 | 38.0 | 76266 | 0.7923 | 0.1228 |
| 0.8892 | 39.0 | 78273 | 0.8025 | 0.0959 |
| 0.8886 | 40.0 | 80280 | 0.7954 | 0.1148 |
| 0.8938 | 41.0 | 82287 | 0.8017 | 0.1058 |
| 0.8897 | 42.0 | 84294 | 0.7946 | 0.1146 |
| 0.8906 | 43.0 | 86301 | 0.7983 | 0.1102 |
| 0.889 | 44.0 | 88308 | 0.8068 | 0.0950 |
| 0.8872 | 45.0 | 90315 | 0.7999 | 0.1089 |
| 0.8902 | 46.0 | 92322 | 0.7992 | 0.0999 |
| 0.8912 | 47.0 | 94329 | 0.7981 | 0.1048 |
| 0.886 | 48.0 | 96336 | 0.8024 | 0.0991 |
| 0.8848 | 49.0 | 98343 | 0.8026 | 0.0984 |
| 0.8866 | 50.0 | 100350 | 0.7965 | 0.1135 |
| 0.8848 | 51.0 | 102357 | 0.8054 | 0.0926 |
| 0.8863 | 52.0 | 104364 | 0.8068 | 0.0917 |
| 0.8866 | 53.0 | 106371 | 0.7993 | 0.0964 |
| 0.8823 | 54.0 | 108378 | 0.7929 | 0.1126 |
| 0.8911 | 55.0 | 110385 | 0.7938 | 0.1132 |
| 0.8911 | 56.0 | 112392 | 0.7932 | 0.1144 |
| 0.8866 | 57.0 | 114399 | 0.8018 | 0.0957 |
| 0.8841 | 58.0 | 116406 | 0.7976 | 0.1015 |
| 0.8874 | 59.0 | 118413 | 0.8035 | 0.0966 |
| 0.887 | 60.0 | 120420 | 0.7954 | 0.1112 |
| 0.888 | 61.0 | 122427 | 0.7927 | 0.1164 |
| 0.8845 | 62.0 | 124434 | 0.7982 | 0.1012 |
| 0.8848 | 63.0 | 126441 | 0.7978 | 0.1034 |
| 0.8857 | 64.0 | 128448 | 0.8036 | 0.0969 |
| 0.8827 | 65.0 | 130455 | 0.7958 | 0.1036 |
| 0.8878 | 66.0 | 132462 | 0.7983 | 0.1030 |
| 0.885 | 67.0 | 134469 | 0.7956 | 0.1055 |
| 0.8859 | 68.0 | 136476 | 0.7964 | 0.1058 |
| 0.8872 | 69.0 | 138483 | 0.7989 | 0.1005 |
| 0.8841 | 70.0 | 140490 | 0.7949 | 0.1138 |
| 0.8846 | 71.0 | 142497 | 0.7960 | 0.1062 |
| 0.8867 | 72.0 | 144504 | 0.7965 | 0.1058 |
| 0.8856 | 73.0 | 146511 | 0.7980 | 0.1007 |
| 0.8852 | 74.0 | 148518 | 0.7971 | 0.1012 |
| 0.8841 | 75.0 | 150525 | 0.7975 | 0.1049 |
| 0.8865 | 76.0 | 152532 | 0.7981 | 0.1010 |
| 0.8887 | 77.0 | 154539 | 0.7945 | 0.1095 |
| 0.8853 | 78.0 | 156546 | 0.7965 | 0.1053 |
| 0.8843 | 79.0 | 158553 | 0.7966 | 0.1062 |
| 0.8858 | 80.0 | 160560 | 0.7962 | 0.1081 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Gunulhona/tbecmodel_v2 | 97ce7e76018e1ab7c2ebf2e5884dddbdc84ac145 | 2022-06-29T06:59:52.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | Gunulhona | null | Gunulhona/tbecmodel_v2 | 4 | null | transformers | 20,315 | Entry not found |
ambekarsameer/distilbert-base-uncased-finetuned-cola | 2e470a0fa5a373b4f2a383abc15e17803e7e78f2 | 2022-06-29T08:26:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | ambekarsameer | null | ambekarsameer/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 20,316 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5337700382788287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8051
- Matthews Correlation: 0.5338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5233 | 1.0 | 535 | 0.5324 | 0.4151 |
| 0.3489 | 2.0 | 1070 | 0.5132 | 0.4836 |
| 0.2392 | 3.0 | 1605 | 0.5852 | 0.5177 |
| 0.1822 | 4.0 | 2140 | 0.7485 | 0.5256 |
| 0.1382 | 5.0 | 2675 | 0.8051 | 0.5338 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anahitapld/electra-base-dbd | 5a2a5788a9d906ae6f71a40fac609d39096db660 | 2022-06-29T08:58:58.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | anahitapld | null | anahitapld/electra-base-dbd | 4 | null | transformers | 20,317 | ---
license: apache-2.0
---
|
ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-512 | 0822156de4c6970ee65bfbf524adbfc558b62f62 | 2022-06-29T17:53:54.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ghadeermobasher | null | ghadeermobasher/BioRed-Dis-Modified-PubMedBERT-512 | 4 | null | transformers | 20,318 | Entry not found |
SivilTaram/poet-sql-roberta | 6279a62ce3269ec6f63e5274a906cf269cdbb081 | 2022-06-30T07:32:18.000Z | [
"pytorch",
"roberta",
"transformers",
"license:mit"
] | null | false | SivilTaram | null | SivilTaram/poet-sql-roberta | 4 | null | transformers | 20,319 | ---
license: mit
---
|
SivilTaram/tapex-t5-large-lm-adapt | 79611ba0781501978e27821b6cbf105a2bde958e | 2022-06-30T08:40:00.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | SivilTaram | null | SivilTaram/tapex-t5-large-lm-adapt | 4 | null | transformers | 20,320 | ---
license: mit
---
|
huggingtweets/orangebook_ | 41b22a198c0b2dc38db2fccd5214e99ece0f25ec | 2022-06-30T15:06:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/orangebook_ | 4 | null | transformers | 20,321 | ---
language: en
thumbnail: http://www.huggingtweets.com/orangebook_/1656601586971/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1211957929915629569/5woqqbsM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Orange Book 🍊📖</div>
<div style="text-align: center; font-size: 14px;">@orangebook_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Orange Book 🍊📖.
| Data | Orange Book 🍊📖 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 4 |
| Short tweets | 1 |
| Tweets kept | 3245 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1fgnauay/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @orangebook_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18larep5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18larep5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/orangebook_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
asahi417/lmqg-mbart-large-cc25-dequad | f269a6c0744d07a492a6637d3ea7e151bd3761bf | 2022-07-01T00:39:35.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mbart-large-cc25-dequad | 4 | null | transformers | 20,322 | Entry not found |
huggingtweets/tacticalmaid | ba1dfd37c3618bea5a0c056e436e506a597073d4 | 2022-07-01T11:50:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/tacticalmaid | 4 | null | transformers | 20,323 | ---
language: en
thumbnail: http://www.huggingtweets.com/tacticalmaid/1656676226544/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1498996796093509632/Z7VwFzOJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Maid POLadin 🎪 💙💛</div>
<div style="text-align: center; font-size: 14px;">@tacticalmaid</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Maid POLadin 🎪 💙💛.
| Data | Maid POLadin 🎪 💙💛 |
| --- | --- |
| Tweets downloaded | 3225 |
| Retweets | 2084 |
| Short tweets | 291 |
| Tweets kept | 850 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fitf7s7t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tacticalmaid's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1swgks0j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1swgks0j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tacticalmaid')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
srcocotero/bert-qa-en | ef4523f8fa338c121f18c1515f8beb4a2d93260f | 2022-07-02T15:30:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | srcocotero | null | srcocotero/bert-qa-en | 4 | null | transformers | 20,324 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-qa-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-qa-en
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
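The card ships without a usage example; a minimal extractive-QA sketch (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="srcocotero/bert-qa-en")
result = qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])  # span extracted from the context
```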
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Neha2608/xlm-roberta-base-finetuned-panx-de | e42814b37f12952098272f197b606f17f546aad7 | 2022-07-02T11:11:20.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Neha2608 | null | Neha2608/xlm-roberta-base-finetuned-panx-de | 4 | null | transformers | 20,325 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8627004891366169
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8627
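A minimal NER sketch for this checkpoint (the German example sentence is illustrative; PAN-X.de is the German split of xtreme):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Neha2608/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("Angela Merkel wohnt in Berlin."))
```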
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 |
| 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 |
| 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Neha2608/xlm-roberta-base-finetuned-panx-it | 2c57e067ca5c568dd680bd2157a4d1b3b4000a5e | 2022-07-02T12:17:06.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | Neha2608 | null | Neha2608/xlm-roberta-base-finetuned-panx-it | 4 | null | transformers | 20,326 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8247845711940912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.809 | 1.0 | 70 | 0.3380 | 0.7183 |
| 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 |
| 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
tner/roberta-large-tweetner-2020-2021-continuous | c95db0a6f982b86169df79e0ced943fe1446729a | 2022-07-11T23:32:07.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/roberta-large-tweetner-2020-2021-continuous | 4 | null | transformers | 20,327 | Entry not found |
erickfm/zesty-sweep-2 | b3e841cc58d7deb37b4644fd6e1549a21461043b | 2022-07-03T09:54:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | erickfm | null | erickfm/zesty-sweep-2 | 4 | null | transformers | 20,328 | Entry not found |
anuj55/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-polifact | 750df87ce1730d2d716320275bdc104058d18af2 | 2022-07-04T15:39:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | anuj55 | null | anuj55/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-polifact | 4 | null | transformers | 20,329 | Entry not found |
LACAI/roberta-base-PFG-progression | 5b7659832bebceb78da7be206bc6eb0188377c09 | 2022-07-04T18:48:17.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | LACAI | null | LACAI/roberta-base-PFG-progression | 4 | null | transformers | 20,330 | ---
license: mit
---
Base model: [roberta-base](https://huggingface.co/roberta-base)
Fine-tuned as a progression model (to predict the acceptability of a dialogue) on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019):
Given a complete dialogue from (or in the style of) Persuasion For Good, the task is to predict a numeric score, typically in the range (-3, 3), where a higher score means a more acceptable dialogue in the context of the donation solicitation task.
**Example input**: `How are you?</s>Good! how about yourself?</s>Great. Would you like to donate today to help the children?</s>`
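A minimal scoring sketch, assuming the checkpoint exposes a single regression logit (an assumption on our part; the card only describes the score range):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LACAI/roberta-base-PFG-progression")
model = AutoModelForSequenceClassification.from_pretrained("LACAI/roberta-base-PFG-progression")

dialogue = "How are you?</s>Good! how about yourself?</s>Great. Would you like to donate today to help the children?</s>"
inputs = tokenizer(dialogue, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # assumed shape (1, 1): one progression score
print(logits.squeeze().item())  # higher = more acceptable dialogue
```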
For more context and usage information see [https://github.rpi.edu/LACAI/dialogue-progression](https://github.rpi.edu/LACAI/dialogue-progression). |
jdang/distilbert-base-uncased-finetuned-clinc | 0582422bbb3c1f6747af68c22d89b0d31162f81c | 2022-07-05T14:14:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jdang | null | jdang/distilbert-base-uncased-finetuned-clinc | 4 | null | transformers | 20,331 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2891 | 0.7429 |
| 2.6283 | 2.0 | 636 | 1.8755 | 0.8374 |
| 1.5481 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.0149 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7952 | 5.0 | 1590 | 0.7720 | 0.9184 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ryo0634/luke-base-embedding_predictor-concat-20181220 | c862d5ff170f6a5856dbbf4118e247306d41f088 | 2022-07-05T12:34:55.000Z | [
"pytorch",
"luke",
"transformers"
] | null | false | ryo0634 | null | ryo0634/luke-base-embedding_predictor-concat-20181220 | 4 | null | transformers | 20,332 | Entry not found |
chiranthans23/distilbert-base-uncased-finetuned-clinc | 4ecded5d6e38c3b2e42a4f63da6fde2e16caf3ce | 2022-07-05T17:24:46.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | chiranthans23 | null | chiranthans23/distilbert-base-uncased-finetuned-clinc | 4 | null | transformers | 20,333 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2890 | 0.7429 |
| 3.7868 | 2.0 | 636 | 1.8756 | 0.8374 |
| 3.7868 | 3.0 | 954 | 1.1571 | 0.8961 |
| 1.6929 | 4.0 | 1272 | 0.8574 | 0.9132 |
| 0.9057 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
Krisna/finetuning-sentiment-model-3000-samples | 850e384df8fa21474ac0d6822d0683599e6c9b20 | 2022-07-05T20:14:31.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Krisna | null | Krisna/finetuning-sentiment-model-3000-samples | 4 | null | transformers | 20,334 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3366
- Accuracy: 0.86
- F1: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
jdang/distilbert-base-uncased-distilled-clinc | 7dc4e811771fbd186b5523c57751b2a12ad12dbb | 2022-07-05T16:23:55.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jdang | null | jdang/distilbert-base-uncased-distilled-clinc | 4 | null | transformers | 20,335 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9351612903225807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0562
- Accuracy: 0.9352
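A minimal intent-classification sketch (the utterance is illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="jdang/distilbert-base-uncased-distilled-clinc")
print(clf("Please set a timer for ten minutes."))  # returns the top intent label and score
```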
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5802 | 1.0 | 318 | 0.3269 | 0.6658 |
| 0.264 | 2.0 | 636 | 0.1590 | 0.8616 |
| 0.1571 | 3.0 | 954 | 0.1035 | 0.9113 |
| 0.1155 | 4.0 | 1272 | 0.0799 | 0.9223 |
| 0.0947 | 5.0 | 1590 | 0.0686 | 0.9268 |
| 0.0839 | 6.0 | 1908 | 0.0624 | 0.9310 |
| 0.0772 | 7.0 | 2226 | 0.0589 | 0.9323 |
| 0.0733 | 8.0 | 2544 | 0.0569 | 0.9355 |
| 0.0713 | 9.0 | 2862 | 0.0562 | 0.9352 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
annahaz/xlm-roberta-base-finetuned-misogyny-en-it | 48923e71382d84780e73abf234f348134eba26d5 | 2022-07-05T23:46:13.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | annahaz | null | annahaz/xlm-roberta-base-finetuned-misogyny-en-it | 4 | null | transformers | 20,336 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-finetuned-misogyny-en-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-misogyny-en-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0275
- Accuracy: 0.9949
- F1: 0.9948
- Precision: 0.9906
- Recall: 0.9989
- Mae: 0.0051
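No usage example is given; because the backbone is multilingual XLM-R, one checkpoint should serve both training languages. A minimal sketch (the inputs are neutral placeholders, and the label names depend on the unspecified training config):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="annahaz/xlm-roberta-base-finetuned-misogyny-en-it",
)
# English and Italian inputs can be scored with the same model
print(clf(["an English sentence to score", "una frase italiana da valutare"]))
```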
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3412 | 1.0 | 1006 | 0.4817 | 0.7744 | 0.8023 | 0.6930 | 0.9526 | 0.2256 |
| 0.2633 | 2.0 | 2012 | 0.5045 | 0.7709 | 0.8048 | 0.6813 | 0.9832 | 0.2291 |
| 0.2286 | 3.0 | 3018 | 0.2252 | 0.9256 | 0.9253 | 0.8940 | 0.9589 | 0.0744 |
| 0.2189 | 4.0 | 4024 | 0.1373 | 0.9565 | 0.9546 | 0.9576 | 0.9516 | 0.0435 |
| 0.1424 | 5.0 | 5030 | 0.1143 | 0.9742 | 0.9735 | 0.9620 | 0.9853 | 0.0258 |
| 0.1655 | 6.0 | 6036 | 0.0787 | 0.9818 | 0.9813 | 0.9711 | 0.9916 | 0.0182 |
| 0.0843 | 7.0 | 7042 | 0.0739 | 0.9833 | 0.9829 | 0.9683 | 0.9979 | 0.0167 |
| 0.081 | 8.0 | 8048 | 0.0468 | 0.9894 | 0.9891 | 0.9794 | 0.9989 | 0.0106 |
| 0.047 | 9.0 | 9054 | 0.0390 | 0.9914 | 0.9911 | 0.9834 | 0.9989 | 0.0086 |
| 0.0198 | 10.0 | 10060 | 0.0275 | 0.9949 | 0.9948 | 0.9906 | 0.9989 | 0.0051 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
annahaz/distilbert-base-multilingual-cased-finetuned-misogyny-en-it | 0fbc11e4d1f088cf43df2abedb5cf14e99496e65 | 2022-07-06T00:52:48.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | annahaz | null | annahaz/distilbert-base-multilingual-cased-finetuned-misogyny-en-it | 4 | null | transformers | 20,337 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-multilingual-cased-finetuned-misogyny-en-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-misogyny-en-it
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0096
- Accuracy: 0.9985
- F1: 0.9984
- Precision: 0.9969
- Recall: 1.0
- Mae: 0.0015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3169 | 1.0 | 1006 | 0.3924 | 0.8154 | 0.8322 | 0.7388 | 0.9526 | 0.1846 |
| 0.2567 | 2.0 | 2012 | 0.3045 | 0.8700 | 0.8779 | 0.8000 | 0.9726 | 0.1300 |
| 0.1829 | 3.0 | 3018 | 0.1385 | 0.9525 | 0.9524 | 0.9172 | 0.9905 | 0.0475 |
| 0.1465 | 4.0 | 4024 | 0.0465 | 0.9863 | 0.9858 | 0.9822 | 0.9895 | 0.0137 |
| 0.0683 | 5.0 | 5030 | 0.0290 | 0.9939 | 0.9937 | 0.9885 | 0.9989 | 0.0061 |
| 0.06 | 6.0 | 6036 | 0.0232 | 0.9949 | 0.9948 | 0.9916 | 0.9979 | 0.0051 |
| 0.0195 | 7.0 | 7042 | 0.0189 | 0.9965 | 0.9963 | 0.9927 | 1.0 | 0.0035 |
| 0.0172 | 8.0 | 8048 | 0.0105 | 0.9980 | 0.9979 | 0.9958 | 1.0 | 0.0020 |
| 0.0248 | 9.0 | 9054 | 0.0099 | 0.9980 | 0.9979 | 0.9958 | 1.0 | 0.0020 |
| 0.0058 | 10.0 | 10060 | 0.0096 | 0.9985 | 0.9984 | 0.9969 | 1.0 | 0.0015 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DongHyoungLee/bluebert-sitesentence-diagnosis-classification | 5998b47480a1cd37301ba73cceb47022a3bf9eac | 2022-07-06T09:09:03.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DongHyoungLee | null | DongHyoungLee/bluebert-sitesentence-diagnosis-classification | 4 | null | transformers | 20,338 | Entry not found |
SiddharthaM/beit-base-patch16-224-pt22k-ft22k-rim_one-new | 1dbf3c79c9b3904d0d4074b3c8a4c77b3048c570 | 2022-07-06T11:17:32.000Z | [
"pytorch",
"tensorboard",
"beit",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | SiddharthaM | null | SiddharthaM/beit-base-patch16-224-pt22k-ft22k-rim_one-new | 4 | null | transformers | 20,339 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-pt22k-ft22k-rim_one-new
results:
- task:
type: image-classification
name: Image Classification
dataset:
type: rimonedl
name: RIM ONE DL
split: test
metrics:
- type: f1
value: 0.9197860962566845
name: F1
- task:
type: image-classification
name: Image Classification
dataset:
type: rim one
name: RIMONEDL
split: test
metrics:
- type: precision
value: 0.9247311827956989
name: precision
- type: recall
value: 0.9148936170212766
name: Recall
- type: accuracy
value: 0.8972602739726028
name: Accuracy
- type: roc_auc
value: 0.8901391162029461
name: AUC
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-ft22k-rim_one-new
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4550
- Accuracy: 0.8767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.73 | 2 | 0.2411 | 0.9178 |
| No log | 1.73 | 4 | 0.2182 | 0.8973 |
| No log | 2.73 | 6 | 0.3085 | 0.8973 |
| No log | 3.73 | 8 | 0.2794 | 0.8973 |
| 0.1392 | 4.73 | 10 | 0.2398 | 0.9110 |
| 0.1392 | 5.73 | 12 | 0.2925 | 0.8973 |
| 0.1392 | 6.73 | 14 | 0.2798 | 0.9110 |
| 0.1392 | 7.73 | 16 | 0.2184 | 0.9178 |
| 0.1392 | 8.73 | 18 | 0.3007 | 0.9110 |
| 0.0416 | 9.73 | 20 | 0.3344 | 0.9041 |
| 0.0416 | 10.73 | 22 | 0.3626 | 0.9110 |
| 0.0416 | 11.73 | 24 | 0.4842 | 0.8904 |
| 0.0416 | 12.73 | 26 | 0.3664 | 0.8973 |
| 0.0416 | 13.73 | 28 | 0.3458 | 0.9110 |
| 0.0263 | 14.73 | 30 | 0.2810 | 0.9110 |
| 0.0263 | 15.73 | 32 | 0.4695 | 0.8699 |
| 0.0263 | 16.73 | 34 | 0.3723 | 0.9041 |
| 0.0263 | 17.73 | 36 | 0.3447 | 0.9041 |
| 0.0263 | 18.73 | 38 | 0.3708 | 0.8904 |
| 0.0264 | 19.73 | 40 | 0.4052 | 0.9110 |
| 0.0264 | 20.73 | 42 | 0.4492 | 0.9041 |
| 0.0264 | 21.73 | 44 | 0.4649 | 0.8904 |
| 0.0264 | 22.73 | 46 | 0.4061 | 0.9178 |
| 0.0264 | 23.73 | 48 | 0.4136 | 0.9110 |
| 0.0139 | 24.73 | 50 | 0.4183 | 0.8973 |
| 0.0139 | 25.73 | 52 | 0.4504 | 0.8904 |
| 0.0139 | 26.73 | 54 | 0.4368 | 0.8973 |
| 0.0139 | 27.73 | 56 | 0.4711 | 0.9110 |
| 0.0139 | 28.73 | 58 | 0.3928 | 0.9110 |
| 0.005 | 29.73 | 60 | 0.4550 | 0.8767 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Shenghao1993/distilbert-base-uncased-finetuned-clinc | e2d1274aa1a1d67e2c7792c637b01eab6ba2319f | 2022-07-08T08:22:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Shenghao1993 | null | Shenghao1993/distilbert-base-uncased-finetuned-clinc | 4 | null | transformers | 20,340 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7711
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2830 | 0.7426 |
| 3.785 | 2.0 | 636 | 1.8728 | 0.8410 |
| 3.785 | 3.0 | 954 | 1.1555 | 0.8913 |
| 1.6902 | 4.0 | 1272 | 0.8530 | 0.9126 |
| 0.901 | 5.0 | 1590 | 0.7711 | 0.9174 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anuj55/TSDAE-askubuntu2nli_stsb-finetuned-polifact | 49bafa53577533dc170382f05234263e11ba3acc | 2022-07-06T20:55:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | anuj55 | null | anuj55/TSDAE-askubuntu2nli_stsb-finetuned-polifact | 4 | null | transformers | 20,341 | Entry not found |
domenicrosati/deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier_testing | b21d155bd76e9208524ab7ec3928dc08b8fd8dd9 | 2022-07-06T21:12:29.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier_testing | 4 | null | transformers | 20,342 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
model-index:
- name: deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier_testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier_testing
This model is a fine-tuned version of [domenicrosati/deberta-v3-xsmall-finetuned-review_classifier](https://huggingface.co/domenicrosati/deberta-v3-xsmall-finetuned-review_classifier) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the sketch after the list reconstructs them as `TrainingArguments`):
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
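For readers who want to reproduce this configuration, the list above maps almost one-to-one onto `transformers.TrainingArguments`; `output_dir` is a placeholder, and the listed Adam betas/epsilon are already the `Trainer` defaults, so no optimizer argument is needed:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="deberta-v3-xsmall-review-classifier",  # placeholder name
    learning_rate=4.5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```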
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
samayl24/vit-base-beans-demo-v5 | fe177c8924af3d6ee8d8ee2fdb46391473979dda | 2022-07-21T19:00:19.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | samayl24 | null | samayl24/vit-base-beans-demo-v5 | 4 | null | transformers | 20,343 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo-v5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans-demo-v5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0427
- Accuracy: 0.9925
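A minimal inference sketch; `leaf.jpg` is a placeholder path, and the three label names come from the beans dataset:
```python
from PIL import Image
from transformers import pipeline

clf = pipeline("image-classification", model="samayl24/vit-base-beans-demo-v5")
print(clf(Image.open("leaf.jpg")))
# labels: angular_leaf_spot, bean_rust, healthy
```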
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1378 | 1.54 | 100 | 0.1444 | 0.9549 |
| 0.0334 | 3.08 | 200 | 0.0427 | 0.9925 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/distilbert-base-uncased-becasv2-1 | 2bc0b0813b14524a46712627ce928c8c9d98799a | 2022-07-07T03:38:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-becasv2-1 | 4 | null | transformers | 20,344 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becasv2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv2-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9472
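No usage example is provided. Below is a sketch of naive extractive-QA decoding with this checkpoint; the Spanish question/context are illustrative (becasv2 appears to concern scholarships, but the base model is English `distilbert-base-uncased`, so quality on Spanish text is uncertain):
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

ckpt = "Evelyn18/distilbert-base-uncased-becasv2-1"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForQuestionAnswering.from_pretrained(ckpt)

question = "¿Qué es una beca?"                              # placeholder question
context = "Una beca es una ayuda económica para estudiar."  # placeholder context
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
start = int(out.start_logits.argmax())
end = int(out.end_logits.argmax()) + 1  # naive decoding: no start <= end check
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```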
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 4.6722 |
| No log | 2.0 | 18 | 3.9450 |
| No log | 3.0 | 27 | 3.4890 |
| No log | 4.0 | 36 | 3.2251 |
| No log | 5.0 | 45 | 2.9906 |
| No log | 6.0 | 54 | 3.0790 |
| No log | 7.0 | 63 | 2.8791 |
| No log | 8.0 | 72 | 2.9654 |
| No log | 9.0 | 81 | 2.9460 |
| No log | 10.0 | 90 | 2.9472 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/distilbert-base-uncased-becasv2-5 | 335f62a76aeff9dc5b1f1ee9c89f5adc2083cb45 | 2022-07-07T04:25:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/distilbert-base-uncased-becasv2-5 | 4 | null | transformers | 20,345 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: distilbert-base-uncased-becasv2-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-becasv2-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 5.3475 |
| No log | 2.0 | 12 | 4.6045 |
| No log | 3.0 | 18 | 4.1832 |
| No log | 4.0 | 24 | 3.8223 |
| No log | 5.0 | 30 | 3.4798 |
| No log | 6.0 | 36 | 3.2615 |
| No log | 7.0 | 42 | 3.1414 |
| No log | 8.0 | 48 | 3.1067 |
| No log | 9.0 | 54 | 2.9950 |
| No log | 10.0 | 60 | 2.9482 |
| No log | 11.0 | 66 | 2.9536 |
| No log | 12.0 | 72 | 3.0180 |
| No log | 13.0 | 78 | 3.0515 |
| No log | 14.0 | 84 | 3.0444 |
| No log | 15.0 | 90 | 3.0409 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Sebabrata/lmv2-g-w9-2018-148-doc-07-07_1 | 6dc567a0f2cf295aa7532911db2b559566ffb1a9 | 2022-07-07T08:52:38.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Sebabrata | null | Sebabrata/lmv2-g-w9-2018-148-doc-07-07_1 | 4 | null | transformers | 20,346 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2-g-w9-2018-148-doc-07-07_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2-g-w9-2018-148-doc-07-07_1
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0160
- Address Precision: 0.9667
- Address Recall: 0.9667
- Address F1: 0.9667
- Address Number: 30
- Business Name Precision: 1.0
- Business Name Recall: 1.0
- Business Name F1: 1.0
- Business Name Number: 29
- City State Zip Code Precision: 1.0
- City State Zip Code Recall: 1.0
- City State Zip Code F1: 1.0
- City State Zip Code Number: 30
- Ein Precision: 0.0
- Ein Recall: 0.0
- Ein F1: 0.0
- Ein Number: 1
- List Account Number Precision: 1.0
- List Account Number Recall: 1.0
- List Account Number F1: 1.0
- List Account Number Number: 11
- Name Precision: 1.0
- Name Recall: 1.0
- Name F1: 1.0
- Name Number: 30
- Ssn Precision: 0.8333
- Ssn Recall: 1.0
- Ssn F1: 0.9091
- Ssn Number: 10
- Overall Precision: 0.9789
- Overall Recall: 0.9858
- Overall F1: 0.9823
- Overall Accuracy: 0.9995
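A usage sketch for this document-entity tagger. It assumes the base LayoutLMv2 processor (which runs Tesseract OCR on the page image and therefore needs `pytesseract` plus `detectron2` installed); `w9_form.png` is a placeholder scan:
```python
from PIL import Image
from transformers import LayoutLMv2ForTokenClassification, LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained(
    "Sebabrata/lmv2-g-w9-2018-148-doc-07-07_1"
)

image = Image.open("w9_form.png").convert("RGB")  # placeholder W-9 scan
encoding = processor(image, return_tensors="pt")  # OCR + layout features
logits = model(**encoding).logits
pred_ids = logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"][0])
print([(t, model.config.id2label[i]) for t, i in zip(tokens, pred_ids)])
```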
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Address Precision | Address Recall | Address F1 | Address Number | Business Name Precision | Business Name Recall | Business Name F1 | Business Name Number | City State Zip Code Precision | City State Zip Code Recall | City State Zip Code F1 | City State Zip Code Number | Ein Precision | Ein Recall | Ein F1 | Ein Number | List Account Number Precision | List Account Number Recall | List Account Number F1 | List Account Number Number | Name Precision | Name Recall | Name F1 | Name Number | Ssn Precision | Ssn Recall | Ssn F1 | Ssn Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:-------------:|:----------:|:------:|:----------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.5672 | 1.0 | 118 | 1.1527 | 0.0 | 0.0 | 0.0 | 30 | 0.0 | 0.0 | 0.0 | 29 | 0.0 | 0.0 | 0.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.0 | 0.0 | 0.0 | 11 | 0.0 | 0.0 | 0.0 | 30 | 0.0 | 0.0 | 0.0 | 10 | 0.0 | 0.0 | 0.0 | 0.9642 |
| 0.8804 | 2.0 | 236 | 0.5661 | 0.2095 | 0.7333 | 0.3259 | 30 | 0.0 | 0.0 | 0.0 | 29 | 0.0 | 0.0 | 0.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.0 | 0.0 | 0.0 | 11 | 0.0 | 0.0 | 0.0 | 30 | 0.0 | 0.0 | 0.0 | 10 | 0.2095 | 0.1560 | 0.1789 | 0.9704 |
| 0.3739 | 3.0 | 354 | 0.2118 | 0.9375 | 1.0 | 0.9677 | 30 | 0.7143 | 0.1724 | 0.2778 | 29 | 0.9375 | 1.0 | 0.9677 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.8182 | 0.8182 | 0.8182 | 11 | 0.5 | 1.0 | 0.6667 | 30 | 0.75 | 0.9 | 0.8182 | 10 | 0.7338 | 0.8014 | 0.7661 | 0.9932 |
| 0.1626 | 4.0 | 472 | 0.1155 | 0.9375 | 1.0 | 0.9677 | 30 | 0.8710 | 0.9310 | 0.9 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.6923 | 0.8182 | 0.7500 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.7 | 0.7 | 0.7 | 10 | 0.9110 | 0.9433 | 0.9268 | 0.9976 |
| 0.1031 | 5.0 | 590 | 0.0817 | 0.9355 | 0.9667 | 0.9508 | 30 | 0.8125 | 0.8966 | 0.8525 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.6923 | 0.8182 | 0.7500 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.9048 | 0.9433 | 0.9236 | 0.9981 |
| 0.0769 | 6.0 | 708 | 0.0634 | 0.9355 | 0.9667 | 0.9508 | 30 | 0.9333 | 0.9655 | 0.9492 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.6923 | 0.8182 | 0.7500 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.9310 | 0.9574 | 0.9441 | 0.9984 |
| 0.0614 | 7.0 | 826 | 0.0518 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.6923 | 0.8182 | 0.7500 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.9510 | 0.9645 | 0.9577 | 0.9991 |
| 0.0509 | 8.0 | 944 | 0.0432 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.8333 | 0.9091 | 0.8696 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.9648 | 0.9716 | 0.9682 | 0.9994 |
| 0.0431 | 9.0 | 1062 | 0.0369 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.9787 | 0.9787 | 0.9787 | 0.9994 |
| 0.037 | 10.0 | 1180 | 0.0313 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.9787 | 0.9787 | 0.9787 | 0.9994 |
| 0.0328 | 11.0 | 1298 | 0.0281 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.7143 | 1.0 | 0.8333 | 10 | 0.9653 | 0.9858 | 0.9754 | 0.9994 |
| 0.0295 | 12.0 | 1416 | 0.0246 | 0.7429 | 0.8667 | 0.8 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.6667 | 0.8 | 0.7273 | 10 | 0.9116 | 0.9504 | 0.9306 | 0.9991 |
| 0.0251 | 13.0 | 1534 | 0.0207 | 0.9677 | 1.0 | 0.9836 | 30 | 0.9333 | 0.9655 | 0.9492 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9653 | 0.9858 | 0.9754 | 0.9994 |
| 0.0231 | 14.0 | 1652 | 0.0210 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 0.9655 | 0.9825 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9787 | 0.9787 | 0.9787 | 0.9991 |
| 0.0184 | 15.0 | 1770 | 0.0160 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9789 | 0.9858 | 0.9823 | 0.9995 |
| 0.0162 | 16.0 | 1888 | 0.0142 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9789 | 0.9858 | 0.9823 | 0.9995 |
| 0.0142 | 17.0 | 2006 | 0.0127 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9789 | 0.9858 | 0.9823 | 0.9995 |
| 0.0123 | 18.0 | 2124 | 0.0114 | 0.9667 | 0.9667 | 0.9667 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9789 | 0.9858 | 0.9823 | 0.9995 |
| 0.0118 | 19.0 | 2242 | 0.0152 | 0.9677 | 1.0 | 0.9836 | 30 | 0.6765 | 0.7931 | 0.7302 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 0.8333 | 0.9091 | 0.8696 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.8859 | 0.9362 | 0.9103 | 0.9986 |
| 0.0104 | 20.0 | 2360 | 0.0125 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 0.9655 | 0.9825 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.9091 | 1.0 | 0.9524 | 10 | 0.9789 | 0.9858 | 0.9823 | 0.9992 |
| 0.0092 | 21.0 | 2478 | 0.0113 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 0.9655 | 0.9825 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9653 | 0.9858 | 0.9754 | 0.9993 |
| 0.0089 | 22.0 | 2596 | 0.0111 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 0.9655 | 0.9825 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9789 | 0.9858 | 0.9823 | 0.9992 |
| 0.0076 | 23.0 | 2714 | 0.0107 | 0.9677 | 1.0 | 0.9836 | 30 | 0.9310 | 0.9310 | 0.9310 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8333 | 1.0 | 0.9091 | 10 | 0.9650 | 0.9787 | 0.9718 | 0.9991 |
| 0.0074 | 24.0 | 2832 | 0.0105 | 0.9677 | 1.0 | 0.9836 | 30 | 0.9310 | 0.9310 | 0.9310 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.9514 | 0.9716 | 0.9614 | 0.9990 |
| 0.007 | 25.0 | 2950 | 0.0092 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 0.9655 | 0.9825 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.7692 | 1.0 | 0.8696 | 10 | 0.9720 | 0.9858 | 0.9789 | 0.9991 |
| 0.0062 | 26.0 | 3068 | 0.0061 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.7143 | 1.0 | 0.8333 | 10 | 0.9655 | 0.9929 | 0.9790 | 0.9994 |
| 0.0057 | 27.0 | 3186 | 0.0056 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.8182 | 0.9 | 0.8571 | 10 | 0.9720 | 0.9858 | 0.9789 | 0.9995 |
| 0.0047 | 28.0 | 3304 | 0.0054 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.7143 | 1.0 | 0.8333 | 10 | 0.9655 | 0.9929 | 0.9790 | 0.9994 |
| 0.0042 | 29.0 | 3422 | 0.0052 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.7143 | 1.0 | 0.8333 | 10 | 0.9655 | 0.9929 | 0.9790 | 0.9994 |
| 0.0039 | 30.0 | 3540 | 0.0049 | 0.9677 | 1.0 | 0.9836 | 30 | 1.0 | 1.0 | 1.0 | 29 | 1.0 | 1.0 | 1.0 | 30 | 0.0 | 0.0 | 0.0 | 1 | 1.0 | 1.0 | 1.0 | 11 | 1.0 | 1.0 | 1.0 | 30 | 0.7143 | 1.0 | 0.8333 | 10 | 0.9655 | 0.9929 | 0.9790 | 0.9994 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
annahaz/xlm-roberta-base-misogyny-en-out-of-sample-test | a7fe577c689f098cb959ce87e80969dda9ba35a4 | 2022-07-07T19:43:58.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | annahaz | null | annahaz/xlm-roberta-base-misogyny-en-out-of-sample-test | 4 | null | transformers | 20,347 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-misogyny-en-out-of-sample-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-en-out-of-sample-test
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):
- Loss: 3.2143
- Accuracy: 0.5868
- F1: 0.5033
- Precision: 0.5570
- Recall: 0.4591
- Mae: 0.4132
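Below is a sketch of a metric function in the shape `transformers.Trainer` expects, assuming binary 0/1 labels. For 0/1 labels the MAE is just the error rate, which is consistent with the list above, where Accuracy + Mae = 1:
```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    mean_absolute_error,
    precision_score,
    recall_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        "mae": mean_absolute_error(labels, preds),  # equals 1 - accuracy for 0/1 labels
    }
```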
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.2704 | 1.0 | 1138 | 1.0169 | 0.5924 | 0.4022 | 0.6071 | 0.3007 | 0.4076 |
| 0.2552 | 2.0 | 2276 | 1.0994 | 0.5845 | 0.5141 | 0.5508 | 0.4820 | 0.4155 |
| 0.2082 | 3.0 | 3414 | 1.6637 | 0.5853 | 0.4815 | 0.5601 | 0.4222 | 0.4147 |
| 0.1824 | 4.0 | 4552 | 1.9495 | 0.5606 | 0.4482 | 0.5244 | 0.3914 | 0.4394 |
| 0.1645 | 5.0 | 5690 | 1.8441 | 0.5792 | 0.4997 | 0.5457 | 0.4608 | 0.4208 |
| 0.113 | 6.0 | 6828 | 2.3997 | 0.5928 | 0.4766 | 0.5758 | 0.4066 | 0.4072 |
| 0.0755 | 7.0 | 7966 | 2.9149 | 0.5633 | 0.5223 | 0.5211 | 0.5235 | 0.4367 |
| 0.0763 | 8.0 | 9104 | 2.8218 | 0.5762 | 0.5159 | 0.5384 | 0.4953 | 0.4238 |
| 0.0657 | 9.0 | 10242 | 2.9956 | 0.5903 | 0.5068 | 0.5619 | 0.4615 | 0.4097 |
| 0.0498 | 10.0 | 11380 | 3.2143 | 0.5868 | 0.5033 | 0.5570 | 0.4591 | 0.4132 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
PrimeQA/squad-v1-roberta-large | a44a009b18b23ce03ccaf505f6a6b2fe43c5a7a3 | 2022-07-07T20:27:51.000Z | [
"pytorch",
"roberta",
"English",
"arxiv:1606.05250",
"arxiv:1907.11692",
"transformers",
"MRC",
"SQuAD 1.1",
"roberta-large",
"license:apache-2.0"
] | null | false | PrimeQA | null | PrimeQA/squad-v1-roberta-large | 4 | null | transformers | 20,348 | ---
tags:
- MRC
- SQuAD 1.1
- roberta-large
language:
- English
license: apache-2.0
---
# Model description
A RoBERTa reading comprehension model for [SQuAD 1.1](https://aclanthology.org/D16-1264/).
The model is initialized with [roberta-large](https://huggingface.co/roberta-large/) and fine-tuned on the [SQuAD 1.1 train data](https://huggingface.co/datasets/squad).
## Intended uses & limitations
You can use the raw model for extractive reading comprehension. Biases present in the underlying pre-trained language model, roberta-large, may also be present in our fine-tuned model, squad-v1-roberta-large.
## Usage
You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb).
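Since this is a standard extractive-QA head, the checkpoint can most likely also be loaded outside PrimeQA with the plain `transformers` question-answering pipeline; a sketch with an illustrative question and context:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="PrimeQA/squad-v1-roberta-large")
print(qa(
    question="What does SQuAD contain?",
    context="SQuAD contains 100,000+ questions for machine comprehension of text.",
))
```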
## Citation
```bibtex
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
```bibtex
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
tfshaman/distilbert-base-uncased-finetuned-clinc | 785d763a52d43c99582ea9116b2a7387defb2068 | 2022-07-07T22:15:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | tfshaman | null | tfshaman/distilbert-base-uncased-finetuned-clinc | 4 | null | transformers | 20,349 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9158064516129032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7786
- Accuracy: 0.9158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2838 | 1.0 | 318 | 3.2787 | 0.7455 |
| 2.622 | 2.0 | 636 | 1.8706 | 0.8332 |
| 1.5466 | 3.0 | 954 | 1.1623 | 0.8939 |
| 1.0135 | 4.0 | 1272 | 0.8619 | 0.9100 |
| 0.7985 | 5.0 | 1590 | 0.7786 | 0.9158 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Shenghao1993/distilbert-base-uncased-distilled-clinc | 71e8784b97731dac7e7799031eb19966f5f3e608 | 2022-07-08T09:49:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Shenghao1993 | null | Shenghao1993/distilbert-base-uncased-distilled-clinc | 4 | null | transformers | 20,350 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9454838709677419
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3120
- Accuracy: 0.9455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.8803 | 0.7426 |
| 2.2488 | 2.0 | 636 | 0.9662 | 0.8626 |
| 2.2488 | 3.0 | 954 | 0.5640 | 0.9103 |
| 0.8679 | 4.0 | 1272 | 0.4093 | 0.9332 |
| 0.4101 | 5.0 | 1590 | 0.3554 | 0.9435 |
| 0.4101 | 6.0 | 1908 | 0.3312 | 0.9445 |
| 0.2894 | 7.0 | 2226 | 0.3179 | 0.9452 |
| 0.2496 | 8.0 | 2544 | 0.3137 | 0.9448 |
| 0.2496 | 9.0 | 2862 | 0.3120 | 0.9455 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_xsum_42 | ad14cda7663dcc428f507cb50794567ed72a3fd1 | 2022-07-08T04:45:49.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_xsum_42 | 4 | null | transformers | 20,351 | Entry not found |
jonatasgrosman/exp_w2v2t_en_unispeech_s809 | 7f0dea29fc32e02758f9427c0020d6fb2b16a195 | 2022-07-08T05:41:57.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_en_unispeech_s809 | 4 | null | transformers | 20,352 | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_en_unispeech_s809
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition in English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
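A minimal transcription sketch with the HuggingSound API named above (the audio paths are placeholders; input should be sampled at 16 kHz):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_unispeech_s809")
transcriptions = model.transcribe(["sample1.wav", "sample2.mp3"])
print(transcriptions[0]["transcription"])
```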
|
Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_prices | 3bb826b29ee2c3cfb2342f89a4f4f337dd610668 | 2022-07-08T14:01:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Nonzerophilip | null | Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_prices | 4 | null | transformers | 20,353 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner_swedish_small_set_health_and_prices
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_swedish_small_set_health_and_prices
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-ner](https://huggingface.co/KBLab/bert-base-swedish-cased-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0942
- Precision: 0.7709
- Recall: 0.8118
- F1: 0.7908
- Accuracy: 0.9741
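A minimal inference sketch; the Swedish sentence is illustrative, and `aggregation_strategy="simple"` merges word pieces back into whole entities:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_prices",
    aggregation_strategy="simple",
)
print(ner("Anna besökte vårdcentralen i Stockholm i maj."))
```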
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 250 | 0.1310 | 0.6116 | 0.7471 | 0.6726 | 0.9578 |
| 0.1583 | 2.0 | 500 | 0.0939 | 0.7560 | 0.8020 | 0.7783 | 0.9737 |
| 0.1583 | 3.0 | 750 | 0.0942 | 0.7709 | 0.8118 | 0.7908 | 0.9741 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-xsmall-with-biblio-context-frozenlm-finetuned-review_classifier | 30bde22b5fb71c1362ccf4c7d919e0db0dfe60e2 | 2022-07-08T13:26:07.000Z | [
"pytorch",
"deberta-v2",
"transformers",
"text-classification",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-xsmall-with-biblio-context-frozenlm-finetuned-review_classifier | 4 | null | transformers | 20,354 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: deberta-v3-xsmall-with-biblio-context-frozenlm-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-xsmall-with-biblio-context-frozenlm-finetuned-review_classifier
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3109
- Accuracy: 0.9066
- F1: 0.0090
- Recall: 0.0045
- Precision: 0.8293
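The card does not define "frozenlm", but the name suggests the DeBERTa encoder was frozen and only the task head trained, which would also help explain the near-zero recall above. A sketch of that setup, offered as an assumption rather than a confirmed description (`num_labels` is also a guess):
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-xsmall", num_labels=2  # num_labels is an assumption
)
# freeze every encoder parameter; only the pooler and classification head stay trainable
for param in model.deberta.parameters():
    param.requires_grad = False
```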
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.2938 | 1.0 | 6667 | 0.3103 | 0.9070 | 0.0221 | 0.0112 | 0.7636 |
| 0.2851 | 2.0 | 13334 | 0.3109 | 0.9066 | 0.0090 | 0.0045 | 0.8293 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tfshaman/distilbert-base-uncased-distilled-clinc | b514a4c1ff8873279de3677ec99c97efb82fa8ed | 2022-07-08T15:19:17.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:clinc_oos",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | tfshaman | null | tfshaman/distilbert-base-uncased-distilled-clinc | 4 | null | transformers | 20,355 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.8264516129032258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5565
- Accuracy: 0.8265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.2743 | 1.0 | 318 | 2.5809 | 0.7310 |
| 2.2148 | 2.0 | 636 | 1.7909 | 0.8071 |
| 1.7065 | 3.0 | 954 | 1.5565 | 0.8265 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
domenicrosati/SPECTER-finetuned-review_classifier | bd88fb5048b43166f23ba4d889e5d44f64622664 | 2022-07-08T20:32:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/SPECTER-finetuned-review_classifier | 4 | null | transformers | 20,356 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: SPECTER-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-finetuned-review_classifier
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0645
- Accuracy: 0.9801
- F1: 0.8964
- Recall: 0.8814
- Precision: 0.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.2013 | 1.0 | 1667 | 0.1327 | 0.9592 | 0.7827 | 0.7546 | 0.8131 |
| 0.1227 | 2.0 | 3334 | 0.0645 | 0.9801 | 0.8964 | 0.8814 | 0.9118 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
domenicrosati/SPECTER-frozen-with-biblio-context-finetuned-review_classifier | 8e8ce04b5dcedc3aab7d569bf71c857ef51160df | 2022-07-08T20:05:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"transformers",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | domenicrosati | null | domenicrosati/SPECTER-frozen-with-biblio-context-finetuned-review_classifier | 4 | null | transformers | 20,357 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
model-index:
- name: SPECTER-frozen-with-biblio-context-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-frozen-with-biblio-context-finetuned-review_classifier
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2606
- eval_accuracy: 0.91
- eval_f1: 0.0925
- eval_recall: 0.0490
- eval_precision: 0.8379
- eval_runtime: 1030.2818
- eval_samples_per_second: 77.649
- eval_steps_per_second: 6.471
- epoch: 1.0
- step: 6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
annahaz/xlm-roberta-base-misogyny-sexism-out-of-sample-test | 2e2ed71b461ca233cfa9c7cd298a1d47bd7f96d4 | 2022-07-08T19:38:13.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | annahaz | null | annahaz/xlm-roberta-base-misogyny-sexism-out-of-sample-test | 4 | null | transformers | 20,358 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-misogyny-sexism-out-of-sample-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-sexism-out-of-sample-test
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4319
- Accuracy: 0.6329
- F1: 0.5384
- Precision: 0.6311
- Recall: 0.4694
- Mae: 0.3671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3447 | 1.0 | 2157 | 0.8407 | 0.6264 | 0.4817 | 0.6555 | 0.3808 | 0.3736 |
| 0.3105 | 2.0 | 4314 | 0.9660 | 0.6244 | 0.4840 | 0.6480 | 0.3863 | 0.3756 |
| 0.3036 | 3.0 | 6471 | 1.0797 | 0.6218 | 0.5499 | 0.6014 | 0.5065 | 0.3782 |
| 0.2643 | 4.0 | 8628 | 1.6355 | 0.6301 | 0.4790 | 0.6696 | 0.3728 | 0.3699 |
| 0.2591 | 5.0 | 10785 | 1.4902 | 0.6173 | 0.5308 | 0.6020 | 0.4747 | 0.3827 |
| 0.2052 | 6.0 | 12942 | 1.6884 | 0.6236 | 0.5166 | 0.6235 | 0.4410 | 0.3764 |
| 0.2017 | 7.0 | 15099 | 2.1026 | 0.6323 | 0.5341 | 0.6325 | 0.4622 | 0.3677 |
| 0.1715 | 8.0 | 17256 | 2.3440 | 0.6292 | 0.5381 | 0.6229 | 0.4736 | 0.3708 |
| 0.1543 | 9.0 | 19413 | 2.2136 | 0.6301 | 0.5411 | 0.6230 | 0.4783 | 0.3699 |
| 0.1456 | 10.0 | 21570 | 2.4319 | 0.6329 | 0.5384 | 0.6311 | 0.4694 | 0.3671 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_fr_unispeech_s514 | 100e5382f6c5ec1367f51f4b378195b0182a9f8c | 2022-07-08T23:35:45.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_unispeech_s514 | 4 | null | transformers | 20,359 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech_s514
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_unispeech_s833 | 4a27be2efda86fd9a132925ed8b4eef7789725f0 | 2022-07-08T23:39:06.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_unispeech_s833 | 4 | null | transformers | 20,360 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech_s833
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_unispeech_s42 | 3daaaa04711b524d18629a7cb925d71716b0bc80 | 2022-07-08T23:42:55.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_unispeech_s42 | 4 | null | transformers | 20,361 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech_s42
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_hubert_s767 | 048be44ce6c4c9ef8d96833b577fe86440255e2d | 2022-07-08T23:46:51.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_hubert_s767 | 4 | null | transformers | 20,362 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_hubert_s767
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_hubert_s990 | 67d835676c87f2d3ca237aee0fe804a040092365 | 2022-07-08T23:52:45.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_hubert_s990 | 4 | null | transformers | 20,363 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_hubert_s990
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_hubert_s461 | e5a7f95830a886d3e1022d347728b99d127ed025 | 2022-07-08T23:58:26.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_hubert_s461 | 4 | null | transformers | 20,364 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_hubert_s461
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_wavlm_s929 | db1218fe40db653c97ffa5aa9af0cc47264c02c8 | 2022-07-09T00:30:03.000Z | [
"pytorch",
"wavlm",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_wavlm_s929 | 4 | null | transformers | 20,365 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_wavlm_s929
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Bistolero/en_de_64_25k | 8ec7a33eb9660e0784677983833de4cab6727a75 | 2022-07-09T00:53:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Bistolero | null | Bistolero/en_de_64_25k | 4 | null | transformers | 20,366 | Entry not found |
jonatasgrosman/exp_w2v2t_fr_wavlm_s208 | 16ddc999cd5cbc1c680eb661ab6398abd630ffec | 2022-07-09T00:45:25.000Z | [
"pytorch",
"wavlm",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_wavlm_s208 | 4 | null | transformers | 20,367 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_wavlm_s208
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_unispeech-ml_s51 | d6be0927b56ffededf869074612b7f5bbdfa2347 | 2022-07-09T00:49:16.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_unispeech-ml_s51 | 4 | null | transformers | 20,368 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech-ml_s51
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_unispeech-ml_s159 | 835d3c449f6bad7592eddc31c61adcd3edeaca53 | 2022-07-09T00:53:17.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_unispeech-ml_s159 | 4 | null | transformers | 20,369 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech-ml_s159
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_unispeech-ml_s614 | cc198f6eb4fd1632c42df2c77dd6d471e34b15a9 | 2022-07-09T00:57:02.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_unispeech-ml_s614 | 4 | null | transformers | 20,370 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_unispeech-ml_s614
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fr_vp-fr_s320 | 8f6ead2b96238c3725216ce09965a782a6e53989 | 2022-07-09T01:00:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fr_vp-fr_s320 | 4 | null | transformers | 20,371 | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-fr_s320
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Sebabrata/lmv2-g-w9-293-doc-07-09 | 2222d36e2ff1b394e7b80b5bf8d486b590a81e93 | 2022-07-09T19:02:58.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Sebabrata | null | Sebabrata/lmv2-g-w9-293-doc-07-09 | 4 | null | transformers | 20,372 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2-g-w9-293-doc-07-09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2-g-w9-293-doc-07-09
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0031
- Address Precision: 1.0
- Address Recall: 1.0
- Address F1: 1.0
- Address Number: 59
- Business Name Precision: 0.9737
- Business Name Recall: 0.9737
- Business Name F1: 0.9737
- Business Name Number: 38
- City State Zip Code Precision: 1.0
- City State Zip Code Recall: 1.0
- City State Zip Code F1: 1.0
- City State Zip Code Number: 59
- Ein Precision: 0.9474
- Ein Recall: 0.9
- Ein F1: 0.9231
- Ein Number: 20
- List Account Number Precision: 1.0
- List Account Number Recall: 1.0
- List Account Number F1: 1.0
- List Account Number Number: 59
- Name Precision: 1.0
- Name Recall: 1.0
- Name F1: 1.0
- Name Number: 59
- Ssn Precision: 0.9268
- Ssn Recall: 0.9744
- Ssn F1: 0.9500
- Ssn Number: 39
- Overall Precision: 0.9850
- Overall Recall: 0.9880
- Overall F1: 0.9865
- Overall Accuracy: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Address Precision | Address Recall | Address F1 | Address Number | Business Name Precision | Business Name Recall | Business Name F1 | Business Name Number | City State Zip Code Precision | City State Zip Code Recall | City State Zip Code F1 | City State Zip Code Number | Ein Precision | Ein Recall | Ein F1 | Ein Number | List Account Number Precision | List Account Number Recall | List Account Number F1 | List Account Number Number | Name Precision | Name Recall | Name F1 | Name Number | Ssn Precision | Ssn Recall | Ssn F1 | Ssn Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:-------------:|:----------:|:------:|:----------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.3523 | 1.0 | 234 | 0.7065 | 0.0 | 0.0 | 0.0 | 59 | 0.0 | 0.0 | 0.0 | 38 | 0.0 | 0.0 | 0.0 | 59 | 0.0 | 0.0 | 0.0 | 20 | 0.0 | 0.0 | 0.0 | 59 | 0.0 | 0.0 | 0.0 | 59 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 0.9513 |
| 0.3676 | 2.0 | 468 | 0.1605 | 0.9667 | 0.9831 | 0.9748 | 59 | 0.9091 | 0.7895 | 0.8451 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.0 | 0.0 | 0.0 | 20 | 0.6667 | 0.8475 | 0.7463 | 59 | 0.9077 | 1.0 | 0.9516 | 59 | 0.0 | 0.0 | 0.0 | 39 | 0.8767 | 0.7688 | 0.8192 | 0.9901 |
| 0.1217 | 3.0 | 702 | 0.0852 | 0.9667 | 0.9831 | 0.9748 | 59 | 0.9722 | 0.9211 | 0.9459 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.0 | 0.0 | 0.0 | 20 | 0.7246 | 0.8475 | 0.7812 | 59 | 0.9833 | 1.0 | 0.9916 | 59 | 0.5574 | 0.8718 | 0.6800 | 39 | 0.8551 | 0.8859 | 0.8702 | 0.9953 |
| 0.0783 | 4.0 | 936 | 0.0590 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.0 | 0.0 | 0.0 | 20 | 0.9355 | 0.9831 | 0.9587 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.5161 | 0.8205 | 0.6337 | 39 | 0.8968 | 0.9129 | 0.9048 | 0.9959 |
| 0.0548 | 5.0 | 1170 | 0.0432 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.0 | 0.0 | 0.0 | 20 | 0.9667 | 0.9831 | 0.9748 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.55 | 0.8462 | 0.6667 | 39 | 0.9104 | 0.9159 | 0.9132 | 0.9963 |
| 0.0405 | 6.0 | 1404 | 0.0333 | 1.0 | 1.0 | 1.0 | 59 | 0.925 | 0.9737 | 0.9487 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.0 | 0.0 | 0.0 | 20 | 0.9667 | 0.9831 | 0.9748 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.6066 | 0.9487 | 0.74 | 39 | 0.9142 | 0.9279 | 0.9210 | 0.9965 |
| 0.0328 | 7.0 | 1638 | 0.0278 | 0.9667 | 0.9831 | 0.9748 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 0.9833 | 1.0 | 0.9916 | 59 | 0.0 | 0.0 | 0.0 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.5441 | 0.9487 | 0.6916 | 39 | 0.8983 | 0.9279 | 0.9129 | 0.9959 |
| 0.0245 | 8.0 | 1872 | 0.0212 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.1538 | 0.1 | 0.1212 | 20 | 0.9672 | 1.0 | 0.9833 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.5862 | 0.8718 | 0.7010 | 39 | 0.8905 | 0.9279 | 0.9088 | 0.9969 |
| 0.0192 | 9.0 | 2106 | 0.0164 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.56 | 0.7 | 0.6222 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.7111 | 0.8205 | 0.7619 | 39 | 0.9273 | 0.9580 | 0.9424 | 0.9983 |
| 0.0145 | 10.0 | 2340 | 0.0127 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8235 | 0.7 | 0.7568 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.7391 | 0.8718 | 0.8000 | 39 | 0.9525 | 0.9640 | 0.9582 | 0.9989 |
| 0.0116 | 11.0 | 2574 | 0.0103 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8571 | 0.9 | 0.8780 | 20 | 0.9672 | 1.0 | 0.9833 | 59 | 1.0 | 0.9661 | 0.9828 | 59 | 0.8537 | 0.8974 | 0.875 | 39 | 0.9643 | 0.9730 | 0.9686 | 0.9992 |
| 0.0099 | 12.0 | 2808 | 0.0095 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.9 | 0.9 | 0.9 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.8537 | 0.8974 | 0.875 | 39 | 0.9731 | 0.9790 | 0.9760 | 0.9992 |
| 0.0083 | 13.0 | 3042 | 0.0083 | 0.9667 | 0.9831 | 0.9748 | 59 | 0.9231 | 0.9474 | 0.9351 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8095 | 0.85 | 0.8293 | 20 | 0.9667 | 0.9831 | 0.9748 | 59 | 0.9667 | 0.9831 | 0.9748 | 59 | 0.875 | 0.8974 | 0.8861 | 39 | 0.9469 | 0.9640 | 0.9554 | 0.9990 |
| 0.0096 | 14.0 | 3276 | 0.0066 | 1.0 | 1.0 | 1.0 | 59 | 0.9231 | 0.9474 | 0.9351 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8571 | 0.9 | 0.8780 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.9024 | 0.9487 | 0.9250 | 39 | 0.9703 | 0.9820 | 0.9761 | 0.9993 |
| 0.0116 | 15.0 | 3510 | 0.0060 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.9048 | 0.95 | 0.9268 | 20 | 0.9667 | 0.9831 | 0.9748 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.8810 | 0.9487 | 0.9136 | 39 | 0.9704 | 0.9850 | 0.9776 | 0.9992 |
| 0.0064 | 16.0 | 3744 | 0.0045 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8 | 0.8 | 0.8000 | 20 | 0.9833 | 1.0 | 0.9916 | 59 | 1.0 | 0.9831 | 0.9915 | 59 | 0.8837 | 0.9744 | 0.9268 | 39 | 0.9674 | 0.9790 | 0.9731 | 0.9995 |
| 0.0039 | 17.0 | 3978 | 0.0068 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 0.9 | 0.9474 | 20 | 0.9667 | 0.9831 | 0.9748 | 59 | 1.0 | 0.9661 | 0.9828 | 59 | 0.825 | 0.8462 | 0.8354 | 39 | 0.9698 | 0.9640 | 0.9669 | 0.9991 |
| 0.0036 | 18.0 | 4212 | 0.0098 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.5714 | 0.6 | 0.5854 | 20 | 0.9831 | 0.9831 | 0.9831 | 59 | 1.0 | 0.9831 | 0.9915 | 59 | 0.5424 | 0.8205 | 0.6531 | 39 | 0.8924 | 0.9459 | 0.9184 | 0.9981 |
| 0.0037 | 19.0 | 4446 | 0.0054 | 1.0 | 1.0 | 1.0 | 59 | 0.925 | 0.9737 | 0.9487 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.9048 | 0.95 | 0.9268 | 20 | 0.9672 | 1.0 | 0.9833 | 59 | 0.9821 | 0.9322 | 0.9565 | 59 | 0.9231 | 0.9231 | 0.9231 | 39 | 0.9672 | 0.9730 | 0.9701 | 0.9991 |
| 0.0033 | 20.0 | 4680 | 0.0043 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8182 | 0.9 | 0.8571 | 20 | 0.9672 | 1.0 | 0.9833 | 59 | 1.0 | 0.9661 | 0.9828 | 59 | 0.8810 | 0.9487 | 0.9136 | 39 | 0.9645 | 0.9790 | 0.9717 | 0.9992 |
| 0.0022 | 21.0 | 4914 | 0.0031 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8571 | 0.9 | 0.8780 | 20 | 0.9833 | 1.0 | 0.9916 | 59 | 1.0 | 0.9831 | 0.9915 | 59 | 0.9048 | 0.9744 | 0.9383 | 39 | 0.9733 | 0.9850 | 0.9791 | 0.9995 |
| 0.0026 | 22.0 | 5148 | 0.0039 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 0.85 | 0.9189 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.8444 | 0.9744 | 0.9048 | 39 | 0.9762 | 0.9850 | 0.9806 | 0.9994 |
| 0.0018 | 23.0 | 5382 | 0.0026 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8947 | 0.85 | 0.8718 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.9268 | 0.9744 | 0.9500 | 39 | 0.9820 | 0.9850 | 0.9835 | 0.9996 |
| 0.002 | 24.0 | 5616 | 0.0032 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8571 | 0.9 | 0.8780 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.8605 | 0.9487 | 0.9024 | 39 | 0.9704 | 0.9850 | 0.9776 | 0.9995 |
| 0.0026 | 25.0 | 5850 | 0.0033 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.9048 | 0.95 | 0.9268 | 20 | 0.9672 | 1.0 | 0.9833 | 59 | 1.0 | 0.9661 | 0.9828 | 59 | 0.9048 | 0.9744 | 0.9383 | 39 | 0.9733 | 0.9850 | 0.9791 | 0.9994 |
| 0.0015 | 26.0 | 6084 | 0.0025 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.95 | 0.95 | 0.9500 | 20 | 0.9667 | 0.9831 | 0.9748 | 59 | 1.0 | 0.9831 | 0.9915 | 59 | 0.95 | 0.9744 | 0.9620 | 39 | 0.9820 | 0.9850 | 0.9835 | 0.9996 |
| 0.0022 | 27.0 | 6318 | 0.0029 | 1.0 | 1.0 | 1.0 | 59 | 0.9024 | 0.9737 | 0.9367 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.8571 | 0.9 | 0.8780 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.9048 | 0.9744 | 0.9383 | 39 | 0.9676 | 0.9880 | 0.9777 | 0.9995 |
| 0.0012 | 28.0 | 6552 | 0.0031 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.9474 | 0.9 | 0.9231 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.9268 | 0.9744 | 0.9500 | 39 | 0.9850 | 0.9880 | 0.9865 | 0.9995 |
| 0.001 | 29.0 | 6786 | 0.0029 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.9444 | 0.85 | 0.8947 | 20 | 1.0 | 1.0 | 1.0 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.9048 | 0.9744 | 0.9383 | 39 | 0.9820 | 0.9850 | 0.9835 | 0.9995 |
| 0.0029 | 30.0 | 7020 | 0.0033 | 1.0 | 1.0 | 1.0 | 59 | 0.9737 | 0.9737 | 0.9737 | 38 | 1.0 | 1.0 | 1.0 | 59 | 0.95 | 0.95 | 0.9500 | 20 | 0.9667 | 0.9831 | 0.9748 | 59 | 1.0 | 1.0 | 1.0 | 59 | 0.95 | 0.9744 | 0.9620 | 39 | 0.9821 | 0.9880 | 0.9850 | 0.9995 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/dagsen | e6434008c0b7bcf6d4082aec5717fe2c64564f0e | 2022-07-30T01:37:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/dagsen | 4 | null | transformers | 20,373 | ---
language: en
thumbnail: http://www.huggingtweets.com/dagsen/1659145030711/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1523836196425650176/LhtBL1Vb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">dagsen</div>
<div style="text-align: center; font-size: 14px;">@dagsen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from dagsen.
| Data | dagsen |
| --- | --- |
| Tweets downloaded | 192 |
| Retweets | 20 |
| Short tweets | 12 |
| Tweets kept | 160 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1g1bf2no/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dagsen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hm84m5e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hm84m5e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dagsen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jonatasgrosman/exp_w2v2t_fa_wavlm_s527 | a9dd7edea869bf01577169a4749f992d56bd3f16 | 2022-07-09T22:44:19.000Z | [
"pytorch",
"wavlm",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fa_wavlm_s527 | 4 | null | transformers | 20,374 | ---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_wavlm_s527
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_fa_unispeech-sat_s803 | 7f2687bcd761c4259dbde9ddef5d604bb92c1cf2 | 2022-07-09T23:30:53.000Z | [
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"fa",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_fa_unispeech-sat_s803 | 4 | null | transformers | 20,375 | ---
language:
- fa
license: apache-2.0
tags:
- automatic-speech-recognition
- fa
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fa_unispeech-sat_s803
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
alanwang8/longformer-sparse | 9819ebd5da2ba6d1fc580060578241c1f9186248 | 2022-07-10T04:54:37.000Z | [
"pytorch",
"longformer",
"text-classification",
"transformers"
] | text-classification | false | alanwang8 | null | alanwang8/longformer-sparse | 4 | null | transformers | 20,376 | Entry not found |
hirohiroz/wav2vec2-base-timit-demo-google-colab | 1945f2aeae3a63064b30731f7b0f72035482408d | 2022-07-10T16:28:09.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | hirohiroz | null | hirohiroz/wav2vec2-base-timit-demo-google-colab | 4 | null | transformers | 20,377 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5173
- Wer: 0.3399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
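
For reference, a hedged sketch of how the list above could be expressed with `transformers.TrainingArguments` — the output directory is a placeholder, and the real run may have set additional options not recorded here:

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the hyperparameters listed above. Adam with
# betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer's default optimizer.
training_args = TrainingArguments(
    output_dir="./wav2vec2-base-timit-demo",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```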
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5684 | 1.0 | 500 | 2.1662 | 1.0068 |
| 0.9143 | 2.01 | 1000 | 0.5820 | 0.5399 |
| 0.439 | 3.01 | 1500 | 0.4596 | 0.4586 |
| 0.3122 | 4.02 | 2000 | 0.4623 | 0.4181 |
| 0.2391 | 5.02 | 2500 | 0.4243 | 0.3938 |
| 0.1977 | 6.02 | 3000 | 0.4421 | 0.3964 |
| 0.1635 | 7.03 | 3500 | 0.5076 | 0.3977 |
| 0.145 | 8.03 | 4000 | 0.4639 | 0.3754 |
| 0.1315 | 9.04 | 4500 | 0.5181 | 0.3652 |
| 0.1131 | 10.04 | 5000 | 0.4496 | 0.3778 |
| 0.1005 | 11.04 | 5500 | 0.4438 | 0.3664 |
| 0.0919 | 12.05 | 6000 | 0.4868 | 0.3865 |
| 0.0934 | 13.05 | 6500 | 0.5163 | 0.3694 |
| 0.076 | 14.06 | 7000 | 0.4543 | 0.3719 |
| 0.0727 | 15.06 | 7500 | 0.5296 | 0.3807 |
| 0.0657 | 16.06 | 8000 | 0.4715 | 0.3699 |
| 0.0578 | 17.07 | 8500 | 0.4927 | 0.3699 |
| 0.057 | 18.07 | 9000 | 0.4767 | 0.3660 |
| 0.0493 | 19.08 | 9500 | 0.5306 | 0.3623 |
| 0.0425 | 20.08 | 10000 | 0.4828 | 0.3561 |
| 0.0431 | 21.08 | 10500 | 0.4875 | 0.3620 |
| 0.0366 | 22.09 | 11000 | 0.4984 | 0.3482 |
| 0.0332 | 23.09 | 11500 | 0.5375 | 0.3477 |
| 0.0348 | 24.1 | 12000 | 0.5406 | 0.3361 |
| 0.0301 | 25.1 | 12500 | 0.4954 | 0.3381 |
| 0.0294 | 26.1 | 13000 | 0.5033 | 0.3424 |
| 0.026 | 27.11 | 13500 | 0.5254 | 0.3384 |
| 0.0243 | 28.11 | 14000 | 0.5189 | 0.3402 |
| 0.0221 | 29.12 | 14500 | 0.5173 | 0.3399 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
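## Example usage
A hedged inference sketch for this checkpoint, assuming the repository ships a saved processor (the demo notebook saves one alongside the model); `sample.wav` is a placeholder:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo = "hirohiroz/wav2vec2-base-timit-demo-google-colab"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

# Placeholder file; librosa resamples it to the 16kHz rate the model expects.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame, then collapse.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```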
|
Manishkalra/discourse_classification_using_robrta_base | 9f98902b1b04d21264c798e8580a15c4515b4fed | 2022-07-10T12:41:59.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Manishkalra | null | Manishkalra/discourse_classification_using_robrta_base | 4 | null | transformers | 20,378 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: discourse_classification_using_robrta_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# discourse_classification_using_robrta_base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0832
- Accuracy: 0.6592
- F1: 0.6592
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tner/bert-base-tweetner-2020-2021-continuous | fcd0a53c02826322f067beb004dc88405adb5a5b | 2022-07-11T22:21:27.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/bert-base-tweetner-2020-2021-continuous | 4 | null | transformers | 20,379 | Entry not found |
malinoori/wav2vec2-base-2 | c1f7bceb0e769d14cf85584f7f4cecc652afd1f9 | 2022-07-10T22:33:08.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | malinoori | null | malinoori/wav2vec2-base-2 | 4 | null | transformers | 20,380 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5953
- eval_wer: 0.3621
- eval_runtime: 54.4895
- eval_samples_per_second: 30.832
- eval_steps_per_second: 3.854
- epoch: 22.61
- step: 22500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
malinoori/wav2vec2-base-superb-demo-google-colab | c7ee22140b6da20b7d6a9b90ad3c33badf58b5d5 | 2022-07-10T22:23:31.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | malinoori | null | malinoori/wav2vec2-base-superb-demo-google-colab | 4 | null | transformers | 20,381 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-superb-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-superb-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3795
- eval_wer: 0.3148
- eval_runtime: 26.4914
- eval_samples_per_second: 10.23
- eval_steps_per_second: 1.283
- epoch: 2.47
- step: 1500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_et_xlsr-53_s952 | 166e0ea4c7ae9c9a29f0c827e8270a8030f521c5 | 2022-07-10T22:14:42.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"et",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_et_xlsr-53_s952 | 4 | null | transformers | 20,382 | ---
language:
- et
license: apache-2.0
tags:
- automatic-speech-recognition
- et
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_et_xlsr-53_s952
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
NimaBoscarino/STPushToHub-test | dfda97f2448d4048a9ffe8bd4c7ce8b4b701720c | 2022-07-10T22:48:43.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | NimaBoscarino | null | NimaBoscarino/STPushToHub-test | 4 | null | sentence-transformers | 20,383 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# NimaBoscarino/STPushToHub-test
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('NimaBoscarino/STPushToHub-test')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('NimaBoscarino/STPushToHub-test')
model = AutoModel.from_pretrained('NimaBoscarino/STPushToHub-test')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=NimaBoscarino/STPushToHub-test)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
aws-ai/dse-roberta-base | bdb4005e439cd26d3736a7a45f56737d7f7cd47c | 2022-07-11T05:47:10.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | aws-ai | null | aws-ai/dse-roberta-base | 4 | null | transformers | 20,384 | Entry not found |
jonatasgrosman/exp_w2v2t_ru_wav2vec2_s904 | 8ae16fdbe38113bbbf40be608c26ce18a0a270f8 | 2022-07-11T07:32:24.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ru",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_ru_wav2vec2_s904 | 4 | null | transformers | 20,385 | ---
language:
- ru
license: apache-2.0
tags:
- automatic-speech-recognition
- ru
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ru_wav2vec2_s904
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ru_xlsr-53_s911 | 8f1ece1c12eb4ac023aaf249eb3350cf6a4cdb76 | 2022-07-11T07:52:25.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ru",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_ru_xlsr-53_s911 | 4 | null | transformers | 20,386 | ---
language:
- ru
license: apache-2.0
tags:
- automatic-speech-recognition
- ru
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ru_xlsr-53_s911
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ru_no-pretraining_s895 | 7d46cf3d715507ed47c17c62af0382109f53cce2 | 2022-07-11T08:30:17.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ru",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_ru_no-pretraining_s895 | 4 | null | transformers | 20,387 | ---
language:
- ru
license: apache-2.0
tags:
- automatic-speech-recognition
- ru
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ru_no-pretraining_s895
Fine-tuned a randomly initialized wav2vec2 model (no pre-training) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_ru_unispeech-ml_s947 | 85a58a08cfd7fc25fb81129062e28d19b8c4ce15 | 2022-07-11T08:45:37.000Z | [
"pytorch",
"unispeech",
"automatic-speech-recognition",
"ru",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_ru_unispeech-ml_s947 | 4 | null | transformers | 20,388 | ---
language:
- ru
license: apache-2.0
tags:
- automatic-speech-recognition
- ru
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ru_unispeech-ml_s947
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
paola-md/recipe-roberta-upper-Is | e85b20ce102f7d313648bdb82fcda6a22e759e90 | 2022-07-11T12:57:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | paola-md | null | paola-md/recipe-roberta-upper-Is | 4 | null | transformers | 20,389 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: recipe-roberta-upper-Is
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-roberta-upper-Is
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2455 | 1.0 | 1228 | 1.0420 |
| 1.0812 | 2.0 | 2456 | 0.9641 |
| 1.018 | 3.0 | 3684 | 0.9220 |
| 0.977 | 4.0 | 4912 | 0.8943 |
| 0.9451 | 5.0 | 6140 | 0.8726 |
| 0.9254 | 6.0 | 7368 | 0.8574 |
| 0.9074 | 7.0 | 8596 | 0.8404 |
| 0.8944 | 8.0 | 9824 | 0.8290 |
| 0.8797 | 9.0 | 11052 | 0.8258 |
| 0.869 | 10.0 | 12280 | 0.8115 |
| 0.8609 | 11.0 | 13508 | 0.8085 |
| 0.8522 | 12.0 | 14736 | 0.7995 |
| 0.8462 | 13.0 | 15964 | 0.7958 |
| 0.8414 | 14.0 | 17192 | 0.7891 |
| 0.8374 | 15.0 | 18420 | 0.7856 |
| 0.8327 | 16.0 | 19648 | 0.7850 |
| 0.8268 | 17.0 | 20876 | 0.7784 |
| 0.8256 | 18.0 | 22104 | 0.7802 |
| 0.822 | 19.0 | 23332 | 0.7789 |
| 0.8219 | 20.0 | 24560 | 0.7757 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SkolkovoInstitute/t5-informal | 589e8a9f9768fa730f907b96bd6670ed85ec15f0 | 2022-07-11T12:32:15.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:GYAFC",
"transformers",
"formality transfer",
"text style transfer",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | SkolkovoInstitute | null | SkolkovoInstitute/t5-informal | 4 | null | transformers | 20,390 | ---
language: en
tags:
- t5
- formality transfer
- text style transfer
datasets:
- GYAFC
license: apache-2.0
---
This is the [T5-base paraphrasing model](https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis) fine-tuned on the [GYAFC formality dataset](https://aclanthology.org/N18-1012/) in the __formal → informal__ direction, so you can use this model to make your English text more informal.
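A minimal usage sketch (the input format is an assumption — the card does not document a task prefix, so a bare sentence is passed):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SkolkovoInstitute/t5-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("SkolkovoInstitute/t5-informal")

# Assumed input format: a bare formal sentence, no task prefix.
text = "I would be grateful if you could reply at your earliest convenience."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|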
tner/twitter-roberta-base-dec2020-tweetner-random | 463fd8278d8c0f1cc843c7990e885150c8d223c0 | 2022-07-11T18:49:36.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/twitter-roberta-base-dec2020-tweetner-random | 4 | null | transformers | 20,391 | Entry not found |
asahi417/lmqg-mt5_base-esquad | 40392e83e7fbb48b54e95a36e81a3717c17f9d9d | 2022-07-11T22:15:14.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mt5_base-esquad | 4 | null | transformers | 20,392 | Entry not found |
Evelyn18/legalectra-small-spanish-becasv3-3 | f617a01b40c1836bd5cf3df47234fde8e0feea88 | 2022-07-12T04:30:27.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/legalectra-small-spanish-becasv3-3 | 4 | null | transformers | 20,393 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-3
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.7608 |
| No log | 2.0 | 10 | 5.5991 |
| No log | 3.0 | 15 | 5.5162 |
| No log | 4.0 | 20 | 5.4370 |
| No log | 5.0 | 25 | 5.3521 |
| No log | 6.0 | 30 | 5.2657 |
| No log | 7.0 | 35 | 5.1771 |
| No log | 8.0 | 40 | 5.1024 |
| No log | 9.0 | 45 | 5.0248 |
| No log | 10.0 | 50 | 4.9609 |
| No log | 11.0 | 55 | 4.9167 |
| No log | 12.0 | 60 | 4.8487 |
| No log | 13.0 | 65 | 4.8175 |
| No log | 14.0 | 70 | 4.7646 |
| No log | 15.0 | 75 | 4.7276 |
| No log | 16.0 | 80 | 4.7003 |
| No log | 17.0 | 85 | 4.6518 |
| No log | 18.0 | 90 | 4.6240 |
| No log | 19.0 | 95 | 4.6033 |
| No log | 20.0 | 100 | 4.5601 |
| No log | 21.0 | 105 | 4.5433 |
| No log | 22.0 | 110 | 4.5279 |
| No log | 23.0 | 115 | 4.4981 |
| No log | 24.0 | 120 | 4.4831 |
| No log | 25.0 | 125 | 4.4745 |
| No log | 26.0 | 130 | 4.4607 |
| No log | 27.0 | 135 | 4.4528 |
| No log | 28.0 | 140 | 4.4348 |
| No log | 29.0 | 145 | 4.4418 |
| No log | 30.0 | 150 | 4.4380 |
| No log | 31.0 | 155 | 4.4205 |
| No log | 32.0 | 160 | 4.4373 |
| No log | 33.0 | 165 | 4.4302 |
| No log | 34.0 | 170 | 4.4468 |
| No log | 35.0 | 175 | 4.4512 |
| No log | 36.0 | 180 | 4.4225 |
| No log | 37.0 | 185 | 4.4303 |
| No log | 38.0 | 190 | 4.4562 |
| No log | 39.0 | 195 | 4.4671 |
| No log | 40.0 | 200 | 4.4869 |
| No log | 41.0 | 205 | 4.5046 |
| No log | 42.0 | 210 | 4.4990 |
| No log | 43.0 | 215 | 4.4847 |
| No log | 44.0 | 220 | 4.4770 |
| No log | 45.0 | 225 | 4.4786 |
| No log | 46.0 | 230 | 4.4741 |
| No log | 47.0 | 235 | 4.4797 |
| No log | 48.0 | 240 | 4.4830 |
| No log | 49.0 | 245 | 4.4845 |
| No log | 50.0 | 250 | 4.4873 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
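## Example usage
A hedged sketch with the `question-answering` pipeline; the Spanish question/context pair below is invented for illustration, not taken from becasv2:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/legalectra-small-spanish-becasv3-3",
)

# Illustrative example only (not from the becasv2 dataset).
result = qa(
    question="¿Quién puede solicitar la beca?",
    context="La beca puede ser solicitada por estudiantes matriculados a tiempo completo.",
)
print(result["answer"], result["score"])
```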
|
Evelyn18/legalectra-small-spanish-becasv3-4 | 795762f3cd1e310252d59a3f9985c5a60b41a42e | 2022-07-12T04:38:19.000Z | [
"pytorch",
"tensorboard",
"electra",
"question-answering",
"dataset:becasv2",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | Evelyn18 | null | Evelyn18/legalectra-small-spanish-becasv3-4 | 4 | null | transformers | 20,394 | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-4
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.6625 |
| No log | 2.0 | 10 | 5.4940 |
| No log | 3.0 | 15 | 5.3886 |
| No log | 4.0 | 20 | 5.3004 |
| No log | 5.0 | 25 | 5.2210 |
| No log | 6.0 | 30 | 5.1434 |
| No log | 7.0 | 35 | 5.0546 |
| No log | 8.0 | 40 | 4.9726 |
| No log | 9.0 | 45 | 4.9227 |
| No log | 10.0 | 50 | 4.8344 |
| No log | 11.0 | 55 | 4.7749 |
| No log | 12.0 | 60 | 4.7381 |
| No log | 13.0 | 65 | 4.7016 |
| No log | 14.0 | 70 | 4.6581 |
| No log | 15.0 | 75 | 4.6231 |
| No log | 16.0 | 80 | 4.5900 |
| No log | 17.0 | 85 | 4.5446 |
| No log | 18.0 | 90 | 4.5041 |
| No log | 19.0 | 95 | 4.4635 |
| No log | 20.0 | 100 | 4.4356 |
| No log | 21.0 | 105 | 4.3985 |
| No log | 22.0 | 110 | 4.3650 |
| No log | 23.0 | 115 | 4.3540 |
| No log | 24.0 | 120 | 4.3270 |
| No log | 25.0 | 125 | 4.2873 |
| No log | 26.0 | 130 | 4.2808 |
| No log | 27.0 | 135 | 4.2623 |
| No log | 28.0 | 140 | 4.2466 |
| No log | 29.0 | 145 | 4.2488 |
| No log | 30.0 | 150 | 4.2410 |
| No log | 31.0 | 155 | 4.2187 |
| No log | 32.0 | 160 | 4.2000 |
| No log | 33.0 | 165 | 4.1883 |
| No log | 34.0 | 170 | 4.1803 |
| No log | 35.0 | 175 | 4.1773 |
| No log | 36.0 | 180 | 4.1652 |
| No log | 37.0 | 185 | 4.1614 |
| No log | 38.0 | 190 | 4.1609 |
| No log | 39.0 | 195 | 4.1652 |
| No log | 40.0 | 200 | 4.1560 |
| No log | 41.0 | 205 | 4.1435 |
| No log | 42.0 | 210 | 4.1463 |
| No log | 43.0 | 215 | 4.1434 |
| No log | 44.0 | 220 | 4.1340 |
| No log | 45.0 | 225 | 4.1259 |
| No log | 46.0 | 230 | 4.1212 |
| No log | 47.0 | 235 | 4.1224 |
| No log | 48.0 | 240 | 4.1257 |
| No log | 49.0 | 245 | 4.1284 |
| No log | 50.0 | 250 | 4.1290 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ShihTing/KaggleAI4Code | 8ba68786289d76446ef1b2eaffe7bc4d618d80c7 | 2022-07-12T07:53:27.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ShihTing | null | ShihTing/KaggleAI4Code | 4 | null | transformers | 20,395 | Entry not found |
moonzi/distilbert-base-uncased-finetuned-cola | 7988dc2b68d1590fcdcba91c72e21a7e695b3bbf | 2022-07-12T09:35:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | moonzi | null | moonzi/distilbert-base-uncased-finetuned-cola | 4 | null | transformers | 20,396 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5383825234212567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5608
- Matthews Correlation: 0.5384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5217 | 1.0 | 535 | 0.5248 | 0.4152 |
| 0.3479 | 2.0 | 1070 | 0.5000 | 0.4855 |
| 0.2345 | 3.0 | 1605 | 0.5608 | 0.5384 |
| 0.1843 | 4.0 | 2140 | 0.7651 | 0.5224 |
| 0.1304 | 5.0 | 2675 | 0.8071 | 0.5370 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
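## Example usage
A hedged sketch with the `text-classification` pipeline. CoLA is a grammatical-acceptability task, so unless the config maps them, the labels surface as the generic `LABEL_0` (unacceptable) / `LABEL_1` (acceptable):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="moonzi/distilbert-base-uncased-finetuned-cola",
)

# Labels may appear as LABEL_0/LABEL_1 if id2label was not customized.
print(classifier("The book was written by the author."))
print(classifier("The book was wrote by author the."))
```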
|
suc155/distilbert-base-uncased-finetuned-sst2 | d615fac774af8323e6a6c7e7aec4d2d49f05a8c9 | 2022-07-12T12:43:16.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | suc155 | null | suc155/distilbert-base-uncased-finetuned-sst2 | 4 | null | transformers | 20,397 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9151376146788991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3056
- Accuracy: 0.9151
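For a quick sanity check, the checkpoint can be loaded through the standard `pipeline` API (a minimal sketch, assuming the model is published under the repo id shown in this card):
```python
from transformers import pipeline

# Load the fine-tuned SST-2 checkpoint for binary sentiment classification.
classifier = pipeline(
    "text-classification",
    model="suc155/distilbert-base-uncased-finetuned-sst2",
)

print(classifier("A touching and beautifully acted film."))
# -> [{'label': ..., 'score': ...}]
```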
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1827 | 1.0 | 4210 | 0.3056 | 0.9151 |
| 0.1235 | 2.0 | 8420 | 0.3575 | 0.9071 |
| 0.1009 | 3.0 | 12630 | 0.3896 | 0.9071 |
| 0.0561 | 4.0 | 16840 | 0.4810 | 0.9060 |
| 0.0406 | 5.0 | 21050 | 0.5375 | 0.9048 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
xuantsh/distilroberta-base-Mark_example | 37bdaa73b647a21fe40c249d1ca1c3b3d929c46c | 2022-07-12T13:13:45.000Z | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | xuantsh | null | xuantsh/distilroberta-base-Mark_example | 4 | null | transformers | 20,398 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-Mark_example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-Mark_example
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6043
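Since this is a masked-language model, a minimal inference sketch looks as follows (assumes the repo id from this card; RoBERTa-style tokenizers use `<mask>` rather than `[MASK]`):
```python
from transformers import pipeline

# Fill-mask inference with the fine-tuned DistilRoBERTa checkpoint.
unmasker = pipeline("fill-mask", model="xuantsh/distilroberta-base-Mark_example")

# Each prediction carries the candidate token and its probability.
for prediction in unmasker("The quick brown fox <mask> over the lazy dog."):
    print(prediction["token_str"], round(prediction["score"], 4))
```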
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8299 | 1.0 | 744 | 2.6322 |
| 2.7034 | 2.0 | 1488 | 2.6514 |
| 2.5616 | 3.0 | 2232 | 2.6596 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anuj55/bert-base-nli-mean-tokens-finetuned-polifact | 0b905b689c6ec35614a0034aed4d015f39dcaaf5 | 2022-07-12T17:21:26.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | anuj55 | null | anuj55/bert-base-nli-mean-tokens-finetuned-polifact | 4 | null | transformers | 20,399 | Entry not found |