modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
elopezlopez/distilbert-base-uncased_fold_5_binary_v1 | elopezlopez | 2022-08-02T23:02:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T22:48:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_5_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_5_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6980
- F1: 0.8110
## Model description
More information needed
## Intended uses & limitations
More information needed
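In the absence of further details, a minimal inference sketch, assuming the checkpoint loads with the standard Transformers text-classification pipeline (the label names, e.g. `LABEL_0`/`LABEL_1`, depend on how the model was exported):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="elopezlopez/distilbert-base-uncased_fold_5_binary_v1",
)
print(classifier("This is an example sentence."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- the label mapping comes from the exported config
```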
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `Trainer` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
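The same configuration expressed as a hedged `Trainer` sketch (an illustration only; the actual training script and dataset preparation are not part of this card):
```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# mirrors the hyperparameters listed above; Adam betas/epsilon are the defaults
args = TrainingArguments(
    output_dir="distilbert-base-uncased_fold_5_binary_v1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=25,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    tokenizer=tokenizer,
    # train_dataset=..., eval_dataset=..., compute_metrics=...  (not specified in the card)
)
```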
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4412 | 0.7981 |
| 0.396 | 2.0 | 576 | 0.4419 | 0.8078 |
| 0.396 | 3.0 | 864 | 0.4955 | 0.8166 |
| 0.2019 | 4.0 | 1152 | 0.6341 | 0.8075 |
| 0.2019 | 5.0 | 1440 | 1.0351 | 0.7979 |
| 0.0808 | 6.0 | 1728 | 1.1818 | 0.7844 |
| 0.0315 | 7.0 | 2016 | 1.2530 | 0.8051 |
| 0.0315 | 8.0 | 2304 | 1.3568 | 0.7937 |
| 0.0143 | 9.0 | 2592 | 1.4009 | 0.8045 |
| 0.0143 | 10.0 | 2880 | 1.5333 | 0.7941 |
| 0.0066 | 11.0 | 3168 | 1.5242 | 0.7982 |
| 0.0066 | 12.0 | 3456 | 1.5752 | 0.8050 |
| 0.0091 | 13.0 | 3744 | 1.5199 | 0.8046 |
| 0.0111 | 14.0 | 4032 | 1.5319 | 0.8117 |
| 0.0111 | 15.0 | 4320 | 1.5333 | 0.8156 |
| 0.0072 | 16.0 | 4608 | 1.5461 | 0.8192 |
| 0.0072 | 17.0 | 4896 | 1.5288 | 0.8252 |
| 0.0048 | 18.0 | 5184 | 1.5725 | 0.8078 |
| 0.0048 | 19.0 | 5472 | 1.5896 | 0.8138 |
| 0.0032 | 20.0 | 5760 | 1.6917 | 0.8071 |
| 0.0028 | 21.0 | 6048 | 1.6608 | 0.8109 |
| 0.0028 | 22.0 | 6336 | 1.7013 | 0.8122 |
| 0.0029 | 23.0 | 6624 | 1.6769 | 0.8148 |
| 0.0029 | 24.0 | 6912 | 1.6906 | 0.8100 |
| 0.0006 | 25.0 | 7200 | 1.6980 | 0.8110 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_4_binary_v1 | elopezlopez | 2022-08-02T22:47:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T22:34:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_4_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_4_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5144
- F1: 0.8245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.3756 | 0.8175 |
| 0.3977 | 2.0 | 578 | 0.3672 | 0.8336 |
| 0.3977 | 3.0 | 867 | 0.4997 | 0.8276 |
| 0.1972 | 4.0 | 1156 | 0.6597 | 0.8244 |
| 0.1972 | 5.0 | 1445 | 0.8501 | 0.8195 |
| 0.0824 | 6.0 | 1734 | 1.0074 | 0.8097 |
| 0.037 | 7.0 | 2023 | 1.1122 | 0.8131 |
| 0.037 | 8.0 | 2312 | 1.0963 | 0.8189 |
| 0.0182 | 9.0 | 2601 | 1.2511 | 0.8125 |
| 0.0182 | 10.0 | 2890 | 1.2255 | 0.8141 |
| 0.0121 | 11.0 | 3179 | 1.3120 | 0.8187 |
| 0.0121 | 12.0 | 3468 | 1.4182 | 0.8165 |
| 0.0079 | 13.0 | 3757 | 1.4142 | 0.8218 |
| 0.0081 | 14.0 | 4046 | 1.4765 | 0.8150 |
| 0.0081 | 15.0 | 4335 | 1.3510 | 0.8187 |
| 0.0109 | 16.0 | 4624 | 1.3455 | 0.8255 |
| 0.0109 | 17.0 | 4913 | 1.4157 | 0.8234 |
| 0.0022 | 18.0 | 5202 | 1.4651 | 0.8197 |
| 0.0022 | 19.0 | 5491 | 1.4388 | 0.8267 |
| 0.0017 | 20.0 | 5780 | 1.4552 | 0.8304 |
| 0.0005 | 21.0 | 6069 | 1.5357 | 0.8248 |
| 0.0005 | 22.0 | 6358 | 1.4924 | 0.8241 |
| 0.0009 | 23.0 | 6647 | 1.4865 | 0.8248 |
| 0.0009 | 24.0 | 6936 | 1.4697 | 0.8275 |
| 0.0013 | 25.0 | 7225 | 1.5144 | 0.8245 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_3_binary_v1 | elopezlopez | 2022-08-02T22:32:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T22:19:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_3_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_3_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9405
- F1: 0.7878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.4630 | 0.7897 |
| 0.3954 | 2.0 | 578 | 0.4549 | 0.7936 |
| 0.3954 | 3.0 | 867 | 0.6527 | 0.7868 |
| 0.1991 | 4.0 | 1156 | 0.7510 | 0.7951 |
| 0.1991 | 5.0 | 1445 | 0.9327 | 0.8000 |
| 0.095 | 6.0 | 1734 | 1.0974 | 0.7859 |
| 0.0347 | 7.0 | 2023 | 1.2692 | 0.7919 |
| 0.0347 | 8.0 | 2312 | 1.3718 | 0.7921 |
| 0.0105 | 9.0 | 2601 | 1.4679 | 0.7999 |
| 0.0105 | 10.0 | 2890 | 1.5033 | 0.8070 |
| 0.0079 | 11.0 | 3179 | 1.6074 | 0.8008 |
| 0.0079 | 12.0 | 3468 | 1.6921 | 0.7904 |
| 0.0053 | 13.0 | 3757 | 1.7079 | 0.7945 |
| 0.0054 | 14.0 | 4046 | 1.8361 | 0.7887 |
| 0.0054 | 15.0 | 4335 | 1.7695 | 0.7873 |
| 0.0046 | 16.0 | 4624 | 1.7934 | 0.7917 |
| 0.0046 | 17.0 | 4913 | 1.8036 | 0.8008 |
| 0.0064 | 18.0 | 5202 | 1.8780 | 0.7888 |
| 0.0064 | 19.0 | 5491 | 1.8943 | 0.7923 |
| 0.0032 | 20.0 | 5780 | 1.8694 | 0.7905 |
| 0.002 | 21.0 | 6069 | 1.9348 | 0.7869 |
| 0.002 | 22.0 | 6358 | 1.9578 | 0.7804 |
| 0.0036 | 23.0 | 6647 | 1.9438 | 0.7827 |
| 0.0036 | 24.0 | 6936 | 1.9386 | 0.7878 |
| 0.0011 | 25.0 | 7225 | 1.9405 | 0.7878 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_2_binary_v1 | elopezlopez | 2022-08-02T22:17:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T22:03:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_2_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_2_binary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8833
- F1: 0.7841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4060 | 0.8070 |
| 0.3981 | 2.0 | 580 | 0.4534 | 0.8072 |
| 0.3981 | 3.0 | 870 | 0.5460 | 0.7961 |
| 0.1985 | 4.0 | 1160 | 0.8684 | 0.7818 |
| 0.1985 | 5.0 | 1450 | 0.9009 | 0.7873 |
| 0.0844 | 6.0 | 1740 | 1.1529 | 0.7825 |
| 0.0329 | 7.0 | 2030 | 1.3185 | 0.7850 |
| 0.0329 | 8.0 | 2320 | 1.4110 | 0.7862 |
| 0.0109 | 9.0 | 2610 | 1.4751 | 0.7784 |
| 0.0109 | 10.0 | 2900 | 1.6276 | 0.7723 |
| 0.0071 | 11.0 | 3190 | 1.6779 | 0.7861 |
| 0.0071 | 12.0 | 3480 | 1.6258 | 0.7850 |
| 0.0041 | 13.0 | 3770 | 1.6324 | 0.7903 |
| 0.0109 | 14.0 | 4060 | 1.7563 | 0.7932 |
| 0.0109 | 15.0 | 4350 | 1.6740 | 0.7906 |
| 0.0079 | 16.0 | 4640 | 1.7468 | 0.7944 |
| 0.0079 | 17.0 | 4930 | 1.7095 | 0.7879 |
| 0.0067 | 18.0 | 5220 | 1.7293 | 0.7912 |
| 0.0021 | 19.0 | 5510 | 1.7875 | 0.7848 |
| 0.0021 | 20.0 | 5800 | 1.7462 | 0.7906 |
| 0.0026 | 21.0 | 6090 | 1.8549 | 0.7815 |
| 0.0026 | 22.0 | 6380 | 1.8314 | 0.7860 |
| 0.0021 | 23.0 | 6670 | 1.8577 | 0.7839 |
| 0.0021 | 24.0 | 6960 | 1.8548 | 0.7883 |
| 0.0001 | 25.0 | 7250 | 1.8833 | 0.7841 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sgraf202/finetuning-sentiment-model-3000-samples | sgraf202 | 2022-08-02T21:32:52Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-06-18T10:41:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7404
- Accuracy: 0.4688
- F1: 0.5526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
aujer/autotrain-not_interested_2-1213045881 | aujer | 2022-08-02T21:15:40Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:aujer/autotrain-data-not_interested_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T21:14:05Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aujer/autotrain-data-not_interested_2
co2_eq_emissions:
emissions: 1.695519133475222
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1213045881
- CO2 Emissions (in grams): 1.6955
## Validation Metrics
- Loss: 1.607
- Accuracy: 0.535
- Macro F1: 0.306
- Micro F1: 0.535
- Weighted F1: 0.440
- Macro Precision: 0.346
- Micro Precision: 0.535
- Weighted Precision: 0.435
- Macro Recall: 0.345
- Micro Recall: 0.535
- Weighted Recall: 0.535
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/aujer/autotrain-not_interested_2-1213045881
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("aujer/autotrain-not_interested_2-1213045881", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("aujer/autotrain-not_interested_2-1213045881", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
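# added sketch (not part of the original card): map the logits to a label name;
# sequence-classification checkpoints expose an id2label mapping in their config
predicted_class = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])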
``` |
srcocotero/tiny-bert-qa | srcocotero | 2022-08-02T19:58:09Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-07-27T19:12:14Z | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: mini_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini_model
This model is a fine-tuned version of [nreimers/BERT-Tiny_L-2_H-128_A-2](https://huggingface.co/nreimers/BERT-Tiny_L-2_H-128_A-2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
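A minimal extractive question-answering sketch, assuming the checkpoint works with the standard Transformers question-answering pipeline (the question and context are purely illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="srcocotero/tiny-bert-qa")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```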
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Rifky/indobert-hoax-classification | Rifky | 2022-08-02T19:32:31Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T16:42:51Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: indobert-hoax-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-hoax-classification
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6230
- Accuracy: 0.8059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.2173070213315e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 85 | 0.5540 | 0.7029 |
| No log | 2.0 | 170 | 0.5432 | 0.7029 |
| No log | 3.0 | 255 | 0.4963 | 0.7441 |
| No log | 4.0 | 340 | 0.5791 | 0.7971 |
| No log | 5.0 | 425 | 0.6230 | 0.8059 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_10_ternary_v1 | elopezlopez | 2022-08-02T18:22:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T18:09:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_10_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_10_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9887
- F1: 0.7797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.5701 | 0.7463 |
| 0.5651 | 2.0 | 580 | 0.5359 | 0.7748 |
| 0.5651 | 3.0 | 870 | 0.6043 | 0.7847 |
| 0.2605 | 4.0 | 1160 | 1.0124 | 0.7587 |
| 0.2605 | 5.0 | 1450 | 1.1140 | 0.7599 |
| 0.1223 | 6.0 | 1740 | 1.2713 | 0.7859 |
| 0.0469 | 7.0 | 2030 | 1.3188 | 0.7822 |
| 0.0469 | 8.0 | 2320 | 1.3819 | 0.7946 |
| 0.0279 | 9.0 | 2610 | 1.5444 | 0.7847 |
| 0.0279 | 10.0 | 2900 | 1.5851 | 0.7908 |
| 0.0084 | 11.0 | 3190 | 1.7003 | 0.7822 |
| 0.0084 | 12.0 | 3480 | 1.8148 | 0.7748 |
| 0.007 | 13.0 | 3770 | 1.7651 | 0.7748 |
| 0.008 | 14.0 | 4060 | 1.8423 | 0.7748 |
| 0.008 | 15.0 | 4350 | 1.7871 | 0.7809 |
| 0.0054 | 16.0 | 4640 | 1.9324 | 0.7748 |
| 0.0054 | 17.0 | 4930 | 1.8685 | 0.7809 |
| 0.0048 | 18.0 | 5220 | 1.9901 | 0.7797 |
| 0.002 | 19.0 | 5510 | 1.9273 | 0.7785 |
| 0.002 | 20.0 | 5800 | 1.9945 | 0.7809 |
| 0.0018 | 21.0 | 6090 | 1.9250 | 0.7785 |
| 0.0018 | 22.0 | 6380 | 1.9929 | 0.7822 |
| 0.0032 | 23.0 | 6670 | 1.9306 | 0.7859 |
| 0.0032 | 24.0 | 6960 | 1.9603 | 0.7847 |
| 0.0029 | 25.0 | 7250 | 1.9887 | 0.7797 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_9_ternary_v1 | elopezlopez | 2022-08-02T18:08:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T17:54:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_9_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_9_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9406
- F1: 0.7841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 292 | 0.5684 | 0.7635 |
| 0.5656 | 2.0 | 584 | 0.5753 | 0.7725 |
| 0.5656 | 3.0 | 876 | 0.6159 | 0.7866 |
| 0.2499 | 4.0 | 1168 | 0.7743 | 0.7828 |
| 0.2499 | 5.0 | 1460 | 0.9820 | 0.7674 |
| 0.1153 | 6.0 | 1752 | 1.2383 | 0.7738 |
| 0.0547 | 7.0 | 2044 | 1.2468 | 0.7815 |
| 0.0547 | 8.0 | 2336 | 1.3480 | 0.7622 |
| 0.0233 | 9.0 | 2628 | 1.3791 | 0.7892 |
| 0.0233 | 10.0 | 2920 | 1.4344 | 0.7841 |
| 0.0142 | 11.0 | 3212 | 1.4958 | 0.7802 |
| 0.0087 | 12.0 | 3504 | 1.5714 | 0.7674 |
| 0.0087 | 13.0 | 3796 | 1.6129 | 0.7956 |
| 0.0111 | 14.0 | 4088 | 1.7799 | 0.7751 |
| 0.0111 | 15.0 | 4380 | 1.7272 | 0.7789 |
| 0.0055 | 16.0 | 4672 | 1.7696 | 0.7866 |
| 0.0055 | 17.0 | 4964 | 1.8622 | 0.7789 |
| 0.003 | 18.0 | 5256 | 1.8563 | 0.7802 |
| 0.0004 | 19.0 | 5548 | 1.8993 | 0.7815 |
| 0.0004 | 20.0 | 5840 | 1.9199 | 0.7853 |
| 0.0005 | 21.0 | 6132 | 1.9003 | 0.7879 |
| 0.0005 | 22.0 | 6424 | 1.9161 | 0.7828 |
| 0.0011 | 23.0 | 6716 | 1.9691 | 0.7815 |
| 0.0017 | 24.0 | 7008 | 1.9492 | 0.7841 |
| 0.0017 | 25.0 | 7300 | 1.9406 | 0.7841 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ArunkumarCH/DeepLearning | ArunkumarCH | 2022-08-02T17:54:25Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-08-02T17:53:42Z | About this deep learning model:
We will build a front-end application that lets a user upload an image and have the deep learning model predict the name of the object together with its confidence.
Steps for building the image classification model:
1. Image classification model using a pretrained DL model
1.1 Define the deep learning model
1.2 Preprocess the data
1.3 Get the prediction
1.1 Define the deep learning model
```python
# import required modules
import json
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
# import pytorch related modules
import torch
from torchvision import transforms
from torchvision.models import densenet121
# define pretrained DL model
model = densenet121(pretrained=True)
model.eval()
```
1.2 Preprocess data
# load image using PIL
input_image = Image.open(filename)
# preprocess image according to the pretrained model
preprocess = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
# create a mini-batch as expected by the model
input_batch = input_tensor.unsqueeze(0)
# pass input batch to the model
with torch.no_grad():
output = model(input_batch)
1.3 Get the prediction
```python
pred = torch.nn.functional.softmax(output[0], dim=0).cpu().numpy()
np.argmax(pred)
# download the classes on which the model was trained
!wget https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
# load the downloaded class index (needed before indexing into `classes` below)
with open('imagenet_class_index.json', 'r') as f:
    classes = json.load(f)
# print the predicted label with its confidence
print(classes[str(np.argmax(pred))][1], round(max(pred) * 100, 2))
```
2. Deploying the image classification model
2.1 Install the required libraries
2.2 Set up the DL model using Streamlit
2.3 Deploy the DL model on AWS/Colab/HF Spaces
2.1 Install the required libraries
```python
!pip install -q streamlit
!pip install -q pyngrok
```
2.2 Set up the DL model using Streamlit
```python
%%writefile app.py
## create streamlit app
# import required libraries and modules
import json
import numpy as np
import matplotlib.pyplot as plt
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import densenet121
import streamlit as st

# define prediction function
def predict(image):
    # load DL model
    model = densenet121(pretrained=True)
    model.eval()
    # load classes
    with open('imagenet_class_index.json', 'r') as f:
        classes = json.load(f)
    # preprocess image
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    input_tensor = preprocess(image)  # use the function argument, not a global variable
    input_batch = input_tensor.unsqueeze(0)  # create a mini-batch as expected by the model
    # get prediction
    with torch.no_grad():
        output = model(input_batch)
    pred = torch.nn.functional.softmax(output[0], dim=0).cpu().numpy()
    # return confidence and label
    confidence = round(max(pred) * 100, 2)
    label = classes[str(np.argmax(pred))][1]
    return confidence, label

# define image file uploader
image = st.file_uploader("Upload image here")

# define button for getting prediction
if image is not None and st.button("Get prediction"):
    # load image using PIL
    input_image = Image.open(image)
    # show image
    st.image(input_image, use_column_width=True)
    # get prediction
    confidence, label = predict(input_image)
    # print results (Streamlit's "magic" renders this bare expression)
    "Model is", confidence, "% confident that this image is of a", label
```
2.3 Deploy the DL model
```python
# run streamlit app
!streamlit run app.py &>/dev/null&
# make streamlit app available publicly
from pyngrok import ngrok
public_url = ngrok.connect('8501')
public_url
```
The model can be deployed on AWS, Colab, Flask, or Hugging Face Spaces.
Hugging Face Spaces demo:
https://huggingface.co/spaces/ArunkumarCH/BirdClassification |
elopezlopez/distilbert-base-uncased_fold_8_ternary_v1 | elopezlopez | 2022-08-02T17:53:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T17:40:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_8_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_8_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8474
- F1: 0.8022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5398 | 0.7838 |
| 0.5509 | 2.0 | 578 | 0.6062 | 0.7703 |
| 0.5509 | 3.0 | 867 | 0.6563 | 0.7666 |
| 0.2366 | 4.0 | 1156 | 0.7688 | 0.7961 |
| 0.2366 | 5.0 | 1445 | 1.0968 | 0.7690 |
| 0.1247 | 6.0 | 1734 | 1.1414 | 0.7924 |
| 0.0482 | 7.0 | 2023 | 1.2159 | 0.7875 |
| 0.0482 | 8.0 | 2312 | 1.2703 | 0.7887 |
| 0.0245 | 9.0 | 2601 | 1.3401 | 0.7985 |
| 0.0245 | 10.0 | 2890 | 1.4645 | 0.7961 |
| 0.0149 | 11.0 | 3179 | 1.5632 | 0.7801 |
| 0.0149 | 12.0 | 3468 | 1.5249 | 0.7875 |
| 0.0124 | 13.0 | 3757 | 1.6263 | 0.7948 |
| 0.0038 | 14.0 | 4046 | 1.8059 | 0.7764 |
| 0.0038 | 15.0 | 4335 | 1.7649 | 0.7776 |
| 0.0061 | 16.0 | 4624 | 1.8293 | 0.7850 |
| 0.0061 | 17.0 | 4913 | 1.8316 | 0.7887 |
| 0.0022 | 18.0 | 5202 | 1.7628 | 0.7973 |
| 0.0022 | 19.0 | 5491 | 1.8763 | 0.7862 |
| 0.002 | 20.0 | 5780 | 1.8409 | 0.7899 |
| 0.0026 | 21.0 | 6069 | 1.8146 | 0.8022 |
| 0.0026 | 22.0 | 6358 | 1.8420 | 0.7973 |
| 0.0008 | 23.0 | 6647 | 1.8683 | 0.8010 |
| 0.0008 | 24.0 | 6936 | 1.8571 | 0.8010 |
| 0.0015 | 25.0 | 7225 | 1.8474 | 0.8022 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_5_ternary_v1 | elopezlopez | 2022-08-02T17:10:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T16:56:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_5_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_5_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1368
- F1: 0.7682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 291 | 0.6423 | 0.7465 |
| 0.5563 | 2.0 | 582 | 0.6001 | 0.7631 |
| 0.5563 | 3.0 | 873 | 0.6884 | 0.7785 |
| 0.2595 | 4.0 | 1164 | 0.9920 | 0.7439 |
| 0.2595 | 5.0 | 1455 | 1.1434 | 0.7631 |
| 0.1159 | 6.0 | 1746 | 1.3289 | 0.7606 |
| 0.0473 | 7.0 | 2037 | 1.3966 | 0.7708 |
| 0.0473 | 8.0 | 2328 | 1.4761 | 0.7606 |
| 0.0282 | 9.0 | 2619 | 1.6144 | 0.7542 |
| 0.0282 | 10.0 | 2910 | 1.5642 | 0.7695 |
| 0.0134 | 11.0 | 3201 | 1.7206 | 0.7593 |
| 0.0134 | 12.0 | 3492 | 1.8008 | 0.7542 |
| 0.0059 | 13.0 | 3783 | 1.8056 | 0.7746 |
| 0.002 | 14.0 | 4074 | 1.9160 | 0.7593 |
| 0.002 | 15.0 | 4365 | 2.0223 | 0.7606 |
| 0.0052 | 16.0 | 4656 | 1.9112 | 0.7810 |
| 0.0052 | 17.0 | 4947 | 1.9040 | 0.7772 |
| 0.0056 | 18.0 | 5238 | 1.9852 | 0.7734 |
| 0.0061 | 19.0 | 5529 | 2.0590 | 0.7644 |
| 0.0061 | 20.0 | 5820 | 2.1078 | 0.7631 |
| 0.0044 | 21.0 | 6111 | 2.1177 | 0.7631 |
| 0.0044 | 22.0 | 6402 | 2.0983 | 0.7644 |
| 0.0012 | 23.0 | 6693 | 2.1384 | 0.7670 |
| 0.0012 | 24.0 | 6984 | 2.1467 | 0.7657 |
| 0.0018 | 25.0 | 7275 | 2.1368 | 0.7682 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mrm8488/dqn-SpaceInvadersNoFrameskip-v4-2 | mrm8488 | 2022-08-02T17:00:07Z | 6 | 1 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-02T16:59:39Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 181.00 +/- 111.42
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrm8488 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mrm8488
```
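Alternatively, a minimal sketch for loading the checkpoint directly with `huggingface_sb3` outside the RL Zoo scripts (the checkpoint filename is an assumption based on the usual RL Zoo naming, and evaluating the agent still requires recreating the wrapped Atari environment):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# assumed filename following the RL Zoo convention; adjust if the repo stores it differently
checkpoint = load_from_hub(
    repo_id="mrm8488/dqn-SpaceInvadersNoFrameskip-v4-2",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```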
## Hyperparameters
```python
OrderedDict([('batch_size', 1024),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 800000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
elopezlopez/distilbert-base-uncased_fold_4_ternary_v1 | elopezlopez | 2022-08-02T16:55:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T16:42:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_4_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_4_ternary_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9355
- F1: 0.7891
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5637 | 0.7485 |
| 0.5729 | 2.0 | 578 | 0.5305 | 0.7805 |
| 0.5729 | 3.0 | 867 | 0.6948 | 0.7670 |
| 0.2548 | 4.0 | 1156 | 0.8351 | 0.7744 |
| 0.2548 | 5.0 | 1445 | 1.0005 | 0.8027 |
| 0.1157 | 6.0 | 1734 | 1.1578 | 0.7978 |
| 0.0473 | 7.0 | 2023 | 1.2275 | 0.7953 |
| 0.0473 | 8.0 | 2312 | 1.3245 | 0.7916 |
| 0.0276 | 9.0 | 2601 | 1.3728 | 0.7953 |
| 0.0276 | 10.0 | 2890 | 1.4577 | 0.7867 |
| 0.0149 | 11.0 | 3179 | 1.5832 | 0.7731 |
| 0.0149 | 12.0 | 3468 | 1.5056 | 0.7818 |
| 0.0143 | 13.0 | 3757 | 1.6263 | 0.7904 |
| 0.0066 | 14.0 | 4046 | 1.6596 | 0.7793 |
| 0.0066 | 15.0 | 4335 | 1.6795 | 0.7941 |
| 0.0022 | 16.0 | 4624 | 1.8443 | 0.7744 |
| 0.0022 | 17.0 | 4913 | 1.7160 | 0.7953 |
| 0.0034 | 18.0 | 5202 | 1.7819 | 0.7781 |
| 0.0034 | 19.0 | 5491 | 1.7931 | 0.7904 |
| 0.0036 | 20.0 | 5780 | 1.8447 | 0.7818 |
| 0.0014 | 21.0 | 6069 | 1.9975 | 0.7707 |
| 0.0014 | 22.0 | 6358 | 1.9324 | 0.7830 |
| 0.0008 | 23.0 | 6647 | 1.9086 | 0.7842 |
| 0.0008 | 24.0 | 6936 | 1.9507 | 0.7867 |
| 0.0002 | 25.0 | 7225 | 1.9355 | 0.7891 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jinghan/deberta-base-finetuned-wnli | jinghan | 2022-08-02T15:47:58Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T14:56:26Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-base-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: wnli
split: train
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-wnli
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6926
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
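Since WNLI is a sentence-pair task, a minimal inference sketch would encode a premise/hypothesis pair and read off the predicted label (an illustration only; the label mapping depends on how the checkpoint was exported):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("jinghan/deberta-base-finetuned-wnli")
model = AutoModelForSequenceClassification.from_pretrained("jinghan/deberta-base-finetuned-wnli")

premise = "The trophy doesn't fit in the suitcase because it is too big."
hypothesis = "The trophy is too big."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```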
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6926 | 0.5634 |
| No log | 2.0 | 80 | 0.6911 | 0.5634 |
| No log | 3.0 | 120 | 0.6903 | 0.5634 |
| No log | 4.0 | 160 | 0.6905 | 0.5634 |
| No log | 5.0 | 200 | 0.6904 | 0.5634 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ligerre/xlm-roberta-base-finetuned-panx-fr | ligerre | 2022-08-02T15:32:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-02T15:15:14Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8299296953465015
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2848
- F1: 0.8299
## Model description
More information needed
## Intended uses & limitations
More information needed
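A minimal named-entity-recognition sketch, assuming the checkpoint works with the standard Transformers token-classification pipeline (the example sentence and aggregation choice are illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ligerre/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Emmanuel Macron a visité Marseille avec des représentants de l'ONU."))
```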
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5989 | 1.0 | 191 | 0.3383 | 0.7928 |
| 0.2617 | 2.0 | 382 | 0.2966 | 0.8318 |
| 0.1672 | 3.0 | 573 | 0.2848 | 0.8299 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/iamsamirarora-naval-vivek_investor | huggingtweets | 2022-08-02T15:16:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-08-02T15:15:22Z | ---
language: en
thumbnail: http://www.huggingtweets.com/iamsamirarora-naval-vivek_investor/1659453403535/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/853146176295759872/YiAPXQ0s_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479277051802574853/qs6u-imt_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Naval & Samir Arora & Vivek</div>
<div style="text-align: center; font-size: 14px;">@iamsamirarora-naval-vivek_investor</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Naval & Samir Arora & Vivek.
| Data | Naval | Samir Arora | Vivek |
| --- | --- | --- | --- |
| Tweets downloaded | 3211 | 3250 | 3250 |
| Retweets | 195 | 76 | 96 |
| Short tweets | 612 | 973 | 601 |
| Tweets kept | 2404 | 2201 | 2553 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1oa4j8zi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iamsamirarora-naval-vivek_investor's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21s56oiv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21s56oiv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iamsamirarora-naval-vivek_investor')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ligerre/xlm-roberta-base-finetuned-panx-de-fr | ligerre | 2022-08-02T15:09:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-02T14:48:44Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1654
- F1: 0.8590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 |
| 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 |
| 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ligerre/xlm-roberta-base-finetuned-panx-de | ligerre | 2022-08-02T14:39:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-02T14:16:11Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863677639046538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 |
| 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 |
| 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aliromagnoli/distilbert-base-uncased-finetuned-emotion | aliromagnoli | 2022-08-02T14:26:32Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T13:13:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9238827602069696
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2145
- Accuracy: 0.924
- F1: 0.9239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235 | 1.0 | 250 | 0.3050 | 0.9085 | 0.9063 |
| 0.2456 | 2.0 | 500 | 0.2145 | 0.924 | 0.9239 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
anjleeg/roberta-base-finetuned-cola | anjleeg | 2022-08-02T14:02:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"optimum_graphcore",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T12:51:41Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: roberta-base-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4497
- Matthews Correlation: 0.6272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4453 | 1.0 | 133 | 0.4348 | 0.5391 |
| 0.3121 | 2.0 | 266 | 0.3938 | 0.5827 |
| 0.1149 | 3.0 | 399 | 0.4497 | 0.6272 |
| 0.1194 | 4.0 | 532 | 0.5005 | 0.6076 |
| 0.1639 | 5.0 | 665 | 0.5645 | 0.5943 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
s-nlp/GenChal_2022_nigula | s-nlp | 2022-08-02T13:43:11Z | 11 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"feedback comment generation for writing learning",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-07-08T15:17:59Z | ---
language:
- en
tags:
- feedback comment generation for writing learning
licenses:
- cc-by-nc-sa
---
## Model overview
This model was trained for the [GenChal 2022: Feedback Comment Generation for Writing Learning](https://fcg.sharedtask.org/) shared task.
In this task, the model receives a text containing an error together with the exact span of that error, and it should return a natural-language comment explaining the nature of the error.
## How to use
```python
!pip install feedback_generation_nigula
from feedback_generation_nigula.generator import FeedbackGenerator
fg = FeedbackGenerator(cuda_index = 0)
text_with_error = "The smoke flow my face ."
error_span = (10,17)
fg.get_feedback([text_with_error ], [error_span ])
# expected output ["When the <verb> <<flow>> is used as an <intransitive verb> to express'' to move in a stream'', a <preposition> needs to be placed to indicate the direction"]
```
## Model training details
#### Data
The data was provided in the following way
```
input sentence [\t] offset range [\t] feedback comment
```
Here are some examples
```
The smoke flow my face . 10:17 When the <verb> <<flow>> is used as an <intransitive verb> to express ''to move in a stream'', a <preposition> needs to be placed to indicate the direction. 'To' and 'towards' are <prepositions> that indicate direction.
I want to stop smoking during driving bicycle . 23:29 A <gerund> does not normally follow the <preposition> <<during>>. Think of an expression using the <conjunction> 'while' instead of a <preposition>.
```
Grammar terms are highlighted with '< ... >' marks and word examples with '<< ... >>'.
#### Data preprocessing
We lowercased the text, separated it from any punctuation (including the task-specific marks << >>), and explicitly marked the error in the original text using << >>.
```
the smoke < < flow > > < < my > > face . 10:17 When the < verb > < < flow > > is used as an < intransitive verb > to express '' to move in a stream '', a < preposition > needs to be placed to indicate the direction. ' to ' and ' towards ' are < prepositions > that indicate direction .
i want to stop smoking < < during > > driving bicycle . 23:29 a < gerund > does not normally follow the < preposition > < < during > > . think of an expression using the < conjunction > ' while ' instead of a < preposition > .
```
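A rough sketch of this preprocessing step (an illustration of the description above, not the authors' actual script; tokenization details may differ):
```python
import re

def preprocess(text: str, start: int, end: int) -> str:
    # wrap every word inside the error span with << >> marks
    span = " ".join(f"< < {tok} > >" for tok in text[start:end].split())
    marked = text[:start] + span + text[end:]
    # lowercase and put spaces around punctuation
    marked = re.sub(r"([^\w\s<>])", r" \1 ", marked.lower())
    return re.sub(r"\s+", " ", marked).strip()

print(preprocess("The smoke flow my face .", 10, 17))
# roughly: "the smoke < < flow > > < < my > > face ."
```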
#### Data augmentation
The main feature of our training pipeline was data augmentation. The idea is as follows: we cut the existing text with the error after the last word that was syntactically connected to the words inside the error span (syntactic dependencies were parsed automatically with spaCy), and this cut version of the text was used as a prompt for a language model (we used [GPT-Neo 1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B)).
Using both the initial and the augmented data, we fine-tuned [t5-large](https://huggingface.co/t5-large).
## Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png |
dandelin/vilt-b32-finetuned-vqa | dandelin | 2022-08-02T13:03:04Z | 105,438 | 400 | transformers | [
"transformers",
"pytorch",
"vilt",
"visual-question-answering",
"arxiv:2102.03334",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| visual-question-answering | 2022-03-02T23:29:05Z | ---
tags:
- visual-question-answering
license: apache-2.0
widget:
- text: "What's the animal doing?"
src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
- text: "What is on top of the building?"
src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg"
---
# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2
Vision-and-Language Transformer (ViLT) model fine-tuned on [VQAv2](https://visualqa.org/). It was introduced in the paper [ViLT: Vision-and-Language Transformer
Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the raw model for visual question answering.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
``` |
sepidmnorozy/finetuned-sentiment-withGPU | sepidmnorozy | 2022-08-02T12:33:09Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-04T13:26:21Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-sentiment-model_withGPU
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-10-samples_withGPU
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3893
- Accuracy: 0.8744
- F1: 0.8684
- Precision: 0.9126
- Recall: 0.8283
## Model description
More information needed
## Intended uses & limitations
More information needed
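Still, since this is a sentiment classifier fine-tuned from XLM-RoBERTa, a minimal inference sketch might look like the following. This is illustrative only; the label names and score calibration depend on the model config, and the example input is made up.
```python
from transformers import pipeline

sentiment = pipeline("sentiment-analysis", model="sepidmnorozy/finetuned-sentiment-withGPU")
print(sentiment("This product works really well."))  # labels come from the model config
```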
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3631 | 1.0 | 7088 | 0.3622 | 0.8638 | 0.8519 | 0.9334 | 0.7835 |
| 0.35 | 2.0 | 14176 | 0.3875 | 0.8714 | 0.8622 | 0.9289 | 0.8044 |
| 0.3262 | 3.0 | 21264 | 0.3893 | 0.8744 | 0.8684 | 0.9126 | 0.8283 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pannaga/wav2vec2-base-timit-demo-google-colab-testing | pannaga | 2022-08-02T12:18:36Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-21T10:06:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab-testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab-testing
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
wenkai-li/distilbert-base-uncased-finetuned-wikiandmark_epoch50 | wenkai-li | 2022-08-02T12:11:19Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T11:02:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-wikiandmark_epoch50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-wikiandmark_epoch50
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0049
- eval_accuracy: 0.9995
- eval_runtime: 29.1585
- eval_samples_per_second: 127.613
- eval_steps_per_second: 4.013
- epoch: 6.0
- step: 4656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dfsj/distilbert-base-uncased-distilled-clinc | dfsj | 2022-08-02T11:38:29Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-01T00:46:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9448387096774193
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3163
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
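Since the model was distilled for intent classification on clinc_oos, a hedged usage sketch might look like this (the returned label should be one of the CLINC intent classes; the example utterance is made up):
```python
from transformers import pipeline

intent_classifier = pipeline("text-classification", model="dfsj/distilbert-base-uncased-distilled-clinc")
print(intent_classifier("Please set a timer for ten minutes."))
```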
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.3518 | 0.7510 |
| 2.7559 | 2.0 | 636 | 1.2235 | 0.8506 |
| 2.7559 | 3.0 | 954 | 0.6786 | 0.9168 |
| 1.0767 | 4.0 | 1272 | 0.4668 | 0.9368 |
| 0.4584 | 5.0 | 1590 | 0.3810 | 0.9410 |
| 0.4584 | 6.0 | 1908 | 0.3479 | 0.9435 |
| 0.2876 | 7.0 | 2226 | 0.3282 | 0.9455 |
| 0.2285 | 8.0 | 2544 | 0.3201 | 0.9452 |
| 0.2285 | 9.0 | 2862 | 0.3163 | 0.9448 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.0.0
- Tokenizers 0.12.1
|
spacestar1705/Reinforce-CartPole-v1 | spacestar1705 | 2022-08-02T10:58:23Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-02T10:50:12Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- metrics:
- type: mean_reward
value: 92.70 +/- 7.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
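For reference, a minimal REINFORCE training loop for CartPole-v1 could look roughly like the sketch below. This is an illustrative assumption based on the classic algorithm, not the exact script used for this checkpoint, and it uses the pre-0.26 `gym` reset/step API.
```python
import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    state = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        probs = policy(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done, _ = env.step(action.item())
        rewards.append(reward)
    # discounted returns G_t, then the policy-gradient loss -sum(log pi(a|s) * G_t)
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```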
|
Tarkan/cikolata-finetuned-hastalik | Tarkan | 2022-08-02T10:47:46Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-02T10:07:32Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Tarkan/cikolata-finetuned-hastalik
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Tarkan/cikolata-finetuned-hastalik
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1853
- Validation Loss: 0.0921
- Train Precision: 0.6410
- Train Recall: 0.7388
- Train F1: 0.6864
- Train Accuracy: 0.9686
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 339, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1853 | 0.0921 | 0.6410 | 0.7388 | 0.6864 | 0.9686 | 0 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
JmPaunlagui/Improve | JmPaunlagui | 2022-08-02T10:17:55Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2022-08-02T09:42:09Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision |
|----|-------------|-----|------|------|-------|-------|------------------|
|Adam|0.001|0.0|0.9|0.999|1e-07|False|float32|
|
DrY/bert-finetuned-squad | DrY | 2022-08-02T10:16:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-08-02T07:52:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
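That said, a hedged usage sketch for extractive question answering on this checkpoint could be (the question/context pair is made up):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="DrY/bert-finetuned-squad")
print(qa(question="Where does Sarah live?", context="My name is Sarah and I live in London."))
```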
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
gustavhartz/roberta-base-cuad-finetuned | gustavhartz | 2022-08-02T09:11:38Z | 320 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:cuad",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-06-26T13:51:35Z | ---
language: en
datasets:
- cuad
---
# Fine-tuned legal contract review QA model 👩⚖️ 📑
The best model presented in the master's thesis [*Exploring CUAD using RoBERTa span-selection QA models for legal contract review*](https://github.com/gustavhartz/transformers-legal-tasks) for QA on the Contract Understanding Atticus Dataset (CUAD). The full training logic and the associated thesis are available through the link.
It outperforms the most popular HF CUAD model [Rakib/roberta-base-on-cuad](https://huggingface.co/Rakib/roberta-base-on-cuad) and is the best-performing CUAD model on Hugging Face as of 26/06/2022.
| **Model name** | **Top 1 Has Ans F1** | **Top 3 Has Ans F1** |
|-----------------------------------------|----------------------|----------------------|
| gustavhartz/roberta-base-cuad-finetuned | 85.68 | 94.06 |
| Rakib/roberta-base-on-cuad | 81.26 | 92.48 |
For questions etc. go through the Github repo :)
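As a rough illustration (not the evaluation setup from the thesis, which uses top-k span selection for the Top-3 numbers above), the checkpoint can presumably be queried with the standard question-answering pipeline; the clause below is an invented example:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="gustavhartz/roberta-base-cuad-finetuned")
result = qa(
    question="Highlight the parts (if any) of this contract related to the governing law.",
    context=(
        "This Agreement shall be governed by and construed in accordance "
        "with the laws of the State of New York."
    ),
)
print(result)  # answer span, score and character offsets
```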
### Citation
If you found the code or the thesis helpful, please cite it :)
```
@thesis{ha2022,
author = {Hartz, Gustav Selfort},
title = {Exploring CUAD using RoBERTa span-selection QA models for legal contract review},
language = {English},
format = {thesis},
year = {2022},
publisher = {DTU Department of Applied Mathematics and Computer Science}
}
``` |
ysnow9876/alephbert-base-finetuned-for-shut | ysnow9876 | 2022-08-02T09:11:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"language model",
"he",
"dataset:responsa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-06-30T11:28:30Z | ---
language:
- he
tags:
- language model
datasets:
- responsa
---
**AlephBERT-base-finetuned-for-shut**
**Hebrew Language Model**
Based on alephbert-base: https://huggingface.co/onlplab/alephbert-base#alephbert
**How to use:**

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

checkpoint = 'ysnow9876/alephbert-base-finetuned-for-shut'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# if not fine-tuning - disable dropout
model.eval()
```
**Training Data**
About 26,000 different responsa written by different rabbis over the past few hundred years.
|
th1s1s1t/Reinforce-cartbole-v1 | th1s1s1t | 2022-08-02T09:07:16Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-02T09:02:44Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartbole-v1
results:
- metrics:
- type: mean_reward
value: 255.40 +/- 10.01
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
jinghan/roberta-base-finetuned-wnli | jinghan | 2022-08-02T09:04:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-02T08:49:05Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: wnli
split: train
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-wnli
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6880
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6880 | 0.5634 |
| No log | 2.0 | 80 | 0.6851 | 0.5634 |
| No log | 3.0 | 120 | 0.6961 | 0.4366 |
| No log | 4.0 | 160 | 0.6906 | 0.5634 |
| No log | 5.0 | 200 | 0.6891 | 0.5634 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
commanderstrife/PV-Bio_clinicalBERT-superset | commanderstrife | 2022-08-02T08:58:17Z | 7 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:pv_dataset",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-02T05:36:04Z | ---
tags:
- generated_from_trainer
datasets:
- pv_dataset
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: PV-Bio_clinicalBERT-superset
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: pv_dataset
type: pv_dataset
config: PVDatasetCorpus
split: train
args: PVDatasetCorpus
metrics:
- name: Precision
type: precision
value: 0.7055946686730801
- name: Recall
type: recall
value: 0.7473672226333467
- name: F1
type: f1
value: 0.7258804666334938
- name: Accuracy
type: accuracy
value: 0.9656573815513143
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PV-Bio_clinicalBERT-superset
This model is a fine-tuned version of [giacomomiolo/electramed_base_scivocab_1M](https://huggingface.co/giacomomiolo/electramed_base_scivocab_1M) on the pv_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2082
- Precision: 0.7056
- Recall: 0.7474
- F1: 0.7259
- Accuracy: 0.9657
## Model description
More information needed
## Intended uses & limitations
More information needed
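A hedged inference sketch for the token-classification head is given below; the entity label set comes from the pv_dataset config and the example sentence is made up.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="commanderstrife/PV-Bio_clinicalBERT-superset",
    aggregation_strategy="simple",
)
print(ner("The patient developed severe nausea after taking metformin."))
```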
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.063 | 1.0 | 1813 | 0.1061 | 0.6453 | 0.7306 | 0.6853 | 0.9623 |
| 0.0086 | 2.0 | 3626 | 0.1068 | 0.6620 | 0.7516 | 0.7040 | 0.9647 |
| 0.0089 | 3.0 | 5439 | 0.1265 | 0.7026 | 0.7300 | 0.7160 | 0.9657 |
| 0.004 | 4.0 | 7252 | 0.1369 | 0.6820 | 0.7601 | 0.7189 | 0.9638 |
| 0.0004 | 5.0 | 9065 | 0.1573 | 0.6937 | 0.7602 | 0.7254 | 0.9656 |
| 0.0184 | 6.0 | 10878 | 0.1707 | 0.7078 | 0.7475 | 0.7271 | 0.9662 |
| 0.0009 | 7.0 | 12691 | 0.1787 | 0.7116 | 0.7398 | 0.7254 | 0.9662 |
| 0.0006 | 8.0 | 14504 | 0.1874 | 0.6979 | 0.7576 | 0.7265 | 0.9655 |
| 0.0008 | 9.0 | 16317 | 0.1970 | 0.7083 | 0.7475 | 0.7273 | 0.9660 |
| 0.0003 | 10.0 | 18130 | 0.2082 | 0.7056 | 0.7474 | 0.7259 | 0.9657 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
silviacamplani/distilbert-base-uncased-finetuned-ner-conll2003_100train | silviacamplani | 2022-08-02T08:55:52Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-02T08:54:21Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-base-uncased-finetuned-ner-conll2003_100train
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-base-uncased-finetuned-ner-conll2003_100train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4072
- Validation Loss: 1.4582
- Train Precision: 0.0
- Train Recall: 0.0
- Train F1: 0.0
- Train Accuracy: 0.7920
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.0837 | 1.8526 | 0.0013 | 0.0015 | 0.0014 | 0.7006 | 0 |
| 1.6450 | 1.5672 | 0.0 | 0.0 | 0.0 | 0.7916 | 1 |
| 1.4072 | 1.4582 | 0.0 | 0.0 | 0.0 | 0.7920 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ajders/nl_electra | ajders | 2022-08-02T08:43:24Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-07T13:48:39Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nl_electra
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nl_electra
This model is an [ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra) model pretrained on the Dutch subset of the [CC100](https://huggingface.co/datasets/cc100) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4650
- Accuracy: 0.5392
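A minimal fill-mask sketch is shown below (illustrative only; the Dutch example sentence is made up and the mask token is taken from the model's tokenizer):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ajders/nl_electra")
masked = f"Amsterdam is de hoofdstad van {fill_mask.tokenizer.mask_token}."
print(fill_mask(masked))
```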
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 703
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 400.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:------:|:---------------:|:--------:|
| No log | 0.67 | 500 | 9.9977 | 0.0486 |
| No log | 1.35 | 1000 | 9.5620 | 0.0543 |
| No log | 2.02 | 1500 | 8.9306 | 0.0741 |
| No log | 2.69 | 2000 | 8.2617 | 0.0826 |
| No log | 3.36 | 2500 | 7.6880 | 0.0792 |
| No log | 4.04 | 3000 | 7.3316 | 0.0757 |
| No log | 4.71 | 3500 | 7.1944 | 0.0747 |
| No log | 5.38 | 4000 | 7.1349 | 0.0802 |
| No log | 6.06 | 4500 | 7.0752 | 0.0887 |
| 8.201 | 6.73 | 5000 | 7.0046 | 0.1021 |
| 8.201 | 7.4 | 5500 | 6.9295 | 0.1090 |
| 8.201 | 8.08 | 6000 | 6.8483 | 0.1132 |
| 8.201 | 8.75 | 6500 | 6.7750 | 0.1171 |
| 8.201 | 9.42 | 7000 | 6.7116 | 0.1187 |
| 8.201 | 10.09 | 7500 | 6.6560 | 0.1218 |
| 8.201 | 10.77 | 8000 | 6.6178 | 0.1239 |
| 8.201 | 11.44 | 8500 | 6.5824 | 0.1255 |
| 8.201 | 12.11 | 9000 | 6.5521 | 0.1273 |
| 8.201 | 12.79 | 9500 | 6.5203 | 0.1292 |
| 6.7257 | 13.46 | 10000 | 6.5027 | 0.1303 |
| 6.7257 | 14.13 | 10500 | 6.4809 | 0.1314 |
| 6.7257 | 14.8 | 11000 | 6.4631 | 0.1322 |
| 6.7257 | 15.48 | 11500 | 6.4483 | 0.1329 |
| 6.7257 | 16.15 | 12000 | 6.4320 | 0.1338 |
| 6.7257 | 16.82 | 12500 | 6.4169 | 0.1348 |
| 6.7257 | 17.5 | 13000 | 6.4067 | 0.1359 |
| 6.7257 | 18.17 | 13500 | 6.3994 | 0.1359 |
| 6.7257 | 18.84 | 14000 | 6.3823 | 0.1368 |
| 6.7257 | 19.52 | 14500 | 6.3759 | 0.1373 |
| 6.4502 | 20.19 | 15000 | 6.3629 | 0.1374 |
| 6.4502 | 20.86 | 15500 | 6.3638 | 0.1373 |
| 6.4502 | 21.53 | 16000 | 6.3505 | 0.1382 |
| 6.4502 | 22.21 | 16500 | 6.3416 | 0.1387 |
| 6.4502 | 22.88 | 17000 | 6.3420 | 0.1383 |
| 6.4502 | 23.55 | 17500 | 6.3330 | 0.1389 |
| 6.4502 | 24.23 | 18000 | 6.3289 | 0.1388 |
| 6.4502 | 24.9 | 18500 | 6.3184 | 0.1389 |
| 6.4502 | 25.57 | 19000 | 6.3099 | 0.1396 |
| 6.4502 | 26.24 | 19500 | 6.2789 | 0.1405 |
| 6.3474 | 26.92 | 20000 | 6.2398 | 0.1404 |
| 6.3474 | 27.59 | 20500 | 6.2012 | 0.1412 |
| 6.3474 | 28.26 | 21000 | 6.1803 | 0.1414 |
| 6.3474 | 28.94 | 21500 | 6.1579 | 0.1414 |
| 6.3474 | 29.61 | 22000 | 6.1403 | 0.1431 |
| 6.3474 | 30.28 | 22500 | 6.1316 | 0.1423 |
| 6.3474 | 30.96 | 23000 | 6.1102 | 0.1435 |
| 6.3474 | 31.63 | 23500 | 6.0998 | 0.1439 |
| 6.3474 | 32.3 | 24000 | 6.0867 | 0.1446 |
| 6.3474 | 32.97 | 24500 | 6.0700 | 0.1451 |
| 6.1758 | 33.65 | 25000 | 6.0554 | 0.1457 |
| 6.1758 | 34.32 | 25500 | 6.0487 | 0.1457 |
| 6.1758 | 34.99 | 26000 | 6.0328 | 0.1469 |
| 6.1758 | 35.67 | 26500 | 6.0265 | 0.1469 |
| 6.1758 | 36.34 | 27000 | 5.9992 | 0.1486 |
| 6.1758 | 37.01 | 27500 | 5.9934 | 0.1485 |
| 6.1758 | 37.68 | 28000 | 5.9702 | 0.1501 |
| 6.1758 | 38.36 | 28500 | 5.9661 | 0.1503 |
| 6.1758 | 39.03 | 29000 | 5.9558 | 0.1512 |
| 6.1758 | 39.7 | 29500 | 5.9321 | 0.1528 |
| 6.052 | 40.38 | 30000 | 5.9147 | 0.1532 |
| 6.052 | 41.05 | 30500 | 5.8975 | 0.1545 |
| 6.052 | 41.72 | 31000 | 5.8784 | 0.1566 |
| 6.052 | 42.4 | 31500 | 5.8584 | 0.1586 |
| 6.052 | 43.07 | 32000 | 5.8325 | 0.1603 |
| 6.052 | 43.74 | 32500 | 5.7583 | 0.1664 |
| 6.052 | 44.41 | 33000 | 5.6158 | 0.1787 |
| 6.052 | 45.09 | 33500 | 5.4580 | 0.1917 |
| 6.052 | 45.76 | 34000 | 5.3396 | 0.2010 |
| 6.052 | 46.43 | 34500 | 5.2568 | 0.2082 |
| 5.7995 | 47.11 | 35000 | 5.1775 | 0.2146 |
| 5.7995 | 47.78 | 35500 | 5.1076 | 0.2204 |
| 5.7995 | 48.45 | 36000 | 5.0457 | 0.2258 |
| 5.7995 | 49.13 | 36500 | 4.9932 | 0.2313 |
| 5.7995 | 49.8 | 37000 | 4.9164 | 0.2384 |
| 5.7995 | 50.47 | 37500 | 4.7844 | 0.2521 |
| 5.7995 | 51.14 | 38000 | 4.6598 | 0.2642 |
| 5.7995 | 51.82 | 38500 | 4.5472 | 0.2757 |
| 5.7995 | 52.49 | 39000 | 4.4374 | 0.2871 |
| 5.7995 | 53.16 | 39500 | 4.3399 | 0.2982 |
| 5.0341 | 53.84 | 40000 | 4.2549 | 0.3083 |
| 5.0341 | 54.51 | 40500 | 4.1795 | 0.3170 |
| 5.0341 | 55.18 | 41000 | 4.1017 | 0.3274 |
| 5.0341 | 55.85 | 41500 | 4.0308 | 0.3375 |
| 5.0341 | 56.53 | 42000 | 3.9673 | 0.3462 |
| 5.0341 | 57.2 | 42500 | 3.9130 | 0.3538 |
| 5.0341 | 57.87 | 43000 | 3.8672 | 0.3599 |
| 5.0341 | 58.55 | 43500 | 3.8249 | 0.3656 |
| 5.0341 | 59.22 | 44000 | 3.7748 | 0.3728 |
| 5.0341 | 59.89 | 44500 | 3.7459 | 0.3768 |
| 4.2119 | 60.57 | 45000 | 3.7089 | 0.3808 |
| 4.2119 | 61.24 | 45500 | 3.6732 | 0.3857 |
| 4.2119 | 61.91 | 46000 | 3.6546 | 0.3881 |
| 4.2119 | 62.58 | 46500 | 3.6205 | 0.3921 |
| 4.2119 | 63.26 | 47000 | 3.5908 | 0.3960 |
| 4.2119 | 63.93 | 47500 | 3.5627 | 0.3991 |
| 4.2119 | 64.6 | 48000 | 3.5466 | 0.4019 |
| 4.2119 | 65.28 | 48500 | 3.5262 | 0.4039 |
| 4.2119 | 65.95 | 49000 | 3.4987 | 0.4074 |
| 4.2119 | 66.62 | 49500 | 3.4817 | 0.4093 |
| 3.8182 | 67.29 | 50000 | 3.4608 | 0.4119 |
| 3.8182 | 67.97 | 50500 | 3.4467 | 0.4142 |
| 3.8182 | 68.64 | 51000 | 3.4280 | 0.4163 |
| 3.8182 | 69.31 | 51500 | 3.4165 | 0.4175 |
| 3.8182 | 69.99 | 52000 | 3.3970 | 0.4199 |
| 3.8182 | 70.66 | 52500 | 3.3738 | 0.4227 |
| 3.8182 | 71.33 | 53000 | 3.3640 | 0.4242 |
| 3.8182 | 72.01 | 53500 | 3.3583 | 0.4252 |
| 3.8182 | 72.68 | 54000 | 3.3319 | 0.4279 |
| 3.8182 | 73.35 | 54500 | 3.3153 | 0.4303 |
| 3.5946 | 74.02 | 55000 | 3.3098 | 0.4304 |
| 3.5946 | 74.7 | 55500 | 3.2949 | 0.4328 |
| 3.5946 | 75.37 | 56000 | 3.2820 | 0.4335 |
| 3.5946 | 76.04 | 56500 | 3.2686 | 0.4355 |
| 3.5946 | 76.72 | 57000 | 3.2663 | 0.4359 |
| 3.5946 | 77.39 | 57500 | 3.2482 | 0.4379 |
| 3.5946 | 78.06 | 58000 | 3.2344 | 0.4393 |
| 3.5946 | 78.73 | 58500 | 3.2281 | 0.4407 |
| 3.5946 | 79.41 | 59000 | 3.2172 | 0.4412 |
| 3.5946 | 80.08 | 59500 | 3.2110 | 0.4420 |
| 3.4435 | 80.75 | 60000 | 3.1940 | 0.4443 |
| 3.4435 | 81.43 | 60500 | 3.1837 | 0.4455 |
| 3.4435 | 82.1 | 61000 | 3.1744 | 0.4469 |
| 3.4435 | 82.77 | 61500 | 3.1611 | 0.4483 |
| 3.4435 | 83.45 | 62000 | 3.1531 | 0.4496 |
| 3.4435 | 84.12 | 62500 | 3.1524 | 0.4499 |
| 3.4435 | 84.79 | 63000 | 3.1431 | 0.4501 |
| 3.4435 | 85.46 | 63500 | 3.1287 | 0.4527 |
| 3.4435 | 86.14 | 64000 | 3.1192 | 0.4533 |
| 3.4435 | 86.81 | 64500 | 3.1107 | 0.4547 |
| 3.3301 | 87.48 | 65000 | 3.1041 | 0.4553 |
| 3.3301 | 88.16 | 65500 | 3.0999 | 0.4562 |
| 3.3301 | 88.83 | 66000 | 3.0882 | 0.4576 |
| 3.3301 | 89.5 | 66500 | 3.0777 | 0.4589 |
| 3.3301 | 90.17 | 67000 | 3.0726 | 0.4588 |
| 3.3301 | 90.85 | 67500 | 3.0676 | 0.4601 |
| 3.3301 | 91.52 | 68000 | 3.0616 | 0.4602 |
| 3.3301 | 92.19 | 68500 | 3.0523 | 0.4621 |
| 3.3301 | 92.87 | 69000 | 3.0464 | 0.4624 |
| 3.3301 | 93.54 | 69500 | 3.0405 | 0.4635 |
| 3.2418 | 94.21 | 70000 | 3.0312 | 0.4649 |
| 3.2418 | 94.89 | 70500 | 3.0209 | 0.4653 |
| 3.2418 | 95.56 | 71000 | 3.0202 | 0.4657 |
| 3.2418 | 96.23 | 71500 | 3.0101 | 0.4676 |
| 3.2418 | 96.9 | 72000 | 3.0105 | 0.4666 |
| 3.2418 | 97.58 | 72500 | 3.0023 | 0.4685 |
| 3.2418 | 98.25 | 73000 | 3.0008 | 0.4680 |
| 3.2418 | 98.92 | 73500 | 2.9882 | 0.4691 |
| 3.2418 | 99.6 | 74000 | 2.9855 | 0.4702 |
| 3.2418 | 100.27 | 74500 | 2.9790 | 0.4709 |
| 3.1698 | 100.94 | 75000 | 2.9680 | 0.4716 |
| 3.1698 | 101.61 | 75500 | 2.9667 | 0.4724 |
| 3.1698 | 102.29 | 76000 | 2.9657 | 0.4726 |
| 3.1698 | 102.96 | 76500 | 2.9623 | 0.4731 |
| 3.1698 | 103.63 | 77000 | 2.9515 | 0.4745 |
| 3.1698 | 104.31 | 77500 | 2.9471 | 0.4753 |
| 3.1698 | 104.98 | 78000 | 2.9407 | 0.4756 |
| 3.1698 | 105.65 | 78500 | 2.9388 | 0.4761 |
| 3.1698 | 106.33 | 79000 | 2.9369 | 0.4766 |
| 3.1698 | 107.0 | 79500 | 2.9297 | 0.4762 |
| 3.1101 | 107.67 | 80000 | 2.9291 | 0.4776 |
| 3.1101 | 108.34 | 80500 | 2.9139 | 0.4788 |
| 3.1101 | 109.02 | 81000 | 2.9113 | 0.4790 |
| 3.1101 | 109.69 | 81500 | 2.9057 | 0.4798 |
| 3.1101 | 110.36 | 82000 | 2.9058 | 0.4804 |
| 3.1101 | 111.04 | 82500 | 2.9019 | 0.4807 |
| 3.1101 | 111.71 | 83000 | 2.8934 | 0.4818 |
| 3.1101 | 112.38 | 83500 | 2.8864 | 0.4825 |
| 3.1101 | 113.06 | 84000 | 2.8926 | 0.4815 |
| 3.1101 | 113.73 | 84500 | 2.8812 | 0.4830 |
| 3.058 | 114.4 | 85000 | 2.8740 | 0.4840 |
| 3.058 | 115.07 | 85500 | 2.8730 | 0.4840 |
| 3.058 | 115.75 | 86000 | 2.8694 | 0.4847 |
| 3.058 | 116.42 | 86500 | 2.8694 | 0.4848 |
| 3.058 | 117.09 | 87000 | 2.8601 | 0.4862 |
| 3.058 | 117.77 | 87500 | 2.8547 | 0.4862 |
| 3.058 | 118.44 | 88000 | 2.8538 | 0.4861 |
| 3.058 | 119.11 | 88500 | 2.8494 | 0.4876 |
| 3.058 | 119.78 | 89000 | 2.8430 | 0.4882 |
| 3.058 | 120.46 | 89500 | 2.8436 | 0.4875 |
| 3.0129 | 121.13 | 90000 | 2.8402 | 0.4884 |
| 3.0129 | 121.8 | 90500 | 2.8353 | 0.4888 |
| 3.0129 | 122.48 | 91000 | 2.8271 | 0.4896 |
| 3.0129 | 123.15 | 91500 | 2.8236 | 0.4900 |
| 3.0129 | 123.82 | 92000 | 2.8199 | 0.4913 |
| 3.0129 | 124.5 | 92500 | 2.8119 | 0.4916 |
| 3.0129 | 125.17 | 93000 | 2.8138 | 0.4916 |
| 3.0129 | 125.84 | 93500 | 2.8089 | 0.4925 |
| 3.0129 | 126.51 | 94000 | 2.8067 | 0.4928 |
| 3.0129 | 127.19 | 94500 | 2.8010 | 0.4939 |
| 2.9701 | 127.86 | 95000 | 2.7992 | 0.4938 |
| 2.9701 | 128.53 | 95500 | 2.7953 | 0.4948 |
| 2.9701 | 129.21 | 96000 | 2.7964 | 0.4942 |
| 2.9701 | 129.88 | 96500 | 2.7838 | 0.4955 |
| 2.9701 | 130.55 | 97000 | 2.7770 | 0.4968 |
| 2.9701 | 131.22 | 97500 | 2.7800 | 0.4962 |
| 2.9701 | 131.9 | 98000 | 2.7743 | 0.4972 |
| 2.9701 | 132.57 | 98500 | 2.7696 | 0.4973 |
| 2.9701 | 133.24 | 99000 | 2.7691 | 0.4980 |
| 2.9701 | 133.92 | 99500 | 2.7612 | 0.4989 |
| 2.9289 | 134.59 | 100000 | 2.7606 | 0.4987 |
| 2.9289 | 135.26 | 100500 | 2.7545 | 0.4993 |
| 2.9289 | 135.94 | 101000 | 2.7544 | 0.4999 |
| 2.9289 | 136.61 | 101500 | 2.7550 | 0.4999 |
| 2.9289 | 137.28 | 102000 | 2.7510 | 0.5001 |
| 2.9289 | 137.95 | 102500 | 2.7480 | 0.5002 |
| 2.9289 | 138.63 | 103000 | 2.7422 | 0.5012 |
| 2.9289 | 139.3 | 103500 | 2.7419 | 0.5014 |
| 2.9289 | 139.97 | 104000 | 2.7416 | 0.5009 |
| 2.9289 | 140.65 | 104500 | 2.7412 | 0.5017 |
| 2.8968 | 141.32 | 105000 | 2.7356 | 0.5023 |
| 2.8968 | 141.99 | 105500 | 2.7303 | 0.5027 |
| 2.8968 | 142.66 | 106000 | 2.7359 | 0.5029 |
| 2.8968 | 143.34 | 106500 | 2.7283 | 0.5032 |
| 2.8968 | 144.01 | 107000 | 2.7226 | 0.5033 |
| 2.8968 | 144.68 | 107500 | 2.7247 | 0.5039 |
| 2.8968 | 145.36 | 108000 | 2.7209 | 0.5044 |
| 2.8968 | 146.03 | 108500 | 2.7210 | 0.5039 |
| 2.8968 | 146.7 | 109000 | 2.7135 | 0.5046 |
| 2.8968 | 147.38 | 109500 | 2.7139 | 0.5048 |
| 2.8697 | 148.05 | 110000 | 2.7167 | 0.5050 |
| 2.8697 | 148.72 | 110500 | 2.7125 | 0.5058 |
| 2.8697 | 149.39 | 111000 | 2.7064 | 0.5060 |
| 2.8697 | 150.07 | 111500 | 2.7024 | 0.5067 |
| 2.8697 | 150.74 | 112000 | 2.7035 | 0.5067 |
| 2.8697 | 151.41 | 112500 | 2.7034 | 0.5067 |
| 2.8697 | 152.09 | 113000 | 2.6967 | 0.5073 |
| 2.8697 | 152.76 | 113500 | 2.6982 | 0.5070 |
| 2.8697 | 153.43 | 114000 | 2.6948 | 0.5079 |
| 2.8697 | 154.1 | 114500 | 2.6946 | 0.5076 |
| 2.8457 | 154.78 | 115000 | 2.6918 | 0.5078 |
| 2.8457 | 155.45 | 115500 | 2.6917 | 0.5078 |
| 2.8457 | 156.12 | 116000 | 2.6868 | 0.5086 |
| 2.8457 | 156.8 | 116500 | 2.6870 | 0.5084 |
| 2.8457 | 157.47 | 117000 | 2.6830 | 0.5091 |
| 2.8457 | 158.14 | 117500 | 2.6824 | 0.5090 |
| 2.8457 | 158.82 | 118000 | 2.6812 | 0.5092 |
| 2.8457 | 159.49 | 118500 | 2.6747 | 0.5098 |
| 2.8457 | 160.16 | 119000 | 2.6747 | 0.5105 |
| 2.8457 | 160.83 | 119500 | 2.6750 | 0.5102 |
| 2.825 | 161.51 | 120000 | 2.6761 | 0.5102 |
| 2.825 | 162.18 | 120500 | 2.6670 | 0.5115 |
| 2.825 | 162.85 | 121000 | 2.6740 | 0.5104 |
| 2.825 | 163.53 | 121500 | 2.6700 | 0.5106 |
| 2.825 | 164.2 | 122000 | 2.6629 | 0.5119 |
| 2.825 | 164.87 | 122500 | 2.6642 | 0.5117 |
| 2.825 | 165.54 | 123000 | 2.6664 | 0.5109 |
| 2.825 | 166.22 | 123500 | 2.6626 | 0.5117 |
| 2.825 | 166.89 | 124000 | 2.6561 | 0.5130 |
| 2.825 | 167.56 | 124500 | 2.6612 | 0.5125 |
| 2.8059 | 168.24 | 125000 | 2.6594 | 0.5123 |
| 2.8059 | 168.91 | 125500 | 2.6508 | 0.5132 |
| 2.8059 | 169.58 | 126000 | 2.6477 | 0.5134 |
| 2.8059 | 170.26 | 126500 | 2.6527 | 0.5133 |
| 2.8059 | 170.93 | 127000 | 2.6519 | 0.5136 |
| 2.8059 | 171.6 | 127500 | 2.6456 | 0.5141 |
| 2.8059 | 172.27 | 128000 | 2.6473 | 0.5139 |
| 2.8059 | 172.95 | 128500 | 2.6426 | 0.5144 |
| 2.8059 | 173.62 | 129000 | 2.6454 | 0.5137 |
| 2.8059 | 174.29 | 129500 | 2.6427 | 0.5147 |
| 2.788 | 174.97 | 130000 | 2.6376 | 0.5150 |
| 2.788 | 175.64 | 130500 | 2.6366 | 0.5154 |
| 2.788 | 176.31 | 131000 | 2.6327 | 0.5156 |
| 2.788 | 176.98 | 131500 | 2.6328 | 0.5157 |
| 2.788 | 177.66 | 132000 | 2.6335 | 0.5156 |
| 2.788 | 178.33 | 132500 | 2.6302 | 0.5166 |
| 2.788 | 179.0 | 133000 | 2.6333 | 0.5160 |
| 2.788 | 179.68 | 133500 | 2.6253 | 0.5171 |
| 2.788 | 180.35 | 134000 | 2.6237 | 0.5167 |
| 2.788 | 181.02 | 134500 | 2.6269 | 0.5165 |
| 2.7723 | 181.7 | 135000 | 2.6283 | 0.5164 |
| 2.7723 | 182.37 | 135500 | 2.6255 | 0.5174 |
| 2.7723 | 183.04 | 136000 | 2.6254 | 0.5175 |
| 2.7723 | 183.71 | 136500 | 2.6231 | 0.5172 |
| 2.7723 | 184.39 | 137000 | 2.6181 | 0.5173 |
| 2.7723 | 185.06 | 137500 | 2.6260 | 0.5168 |
| 2.7723 | 185.73 | 138000 | 2.6183 | 0.5176 |
| 2.7723 | 186.41 | 138500 | 2.6174 | 0.5182 |
| 2.7723 | 187.08 | 139000 | 2.6144 | 0.5182 |
| 2.7723 | 187.75 | 139500 | 2.6152 | 0.5186 |
| 2.7575 | 188.43 | 140000 | 2.6150 | 0.5183 |
| 2.7575 | 189.1 | 140500 | 2.6110 | 0.5190 |
| 2.7575 | 189.77 | 141000 | 2.6044 | 0.5194 |
| 2.7575 | 190.44 | 141500 | 2.6083 | 0.5186 |
| 2.7575 | 191.12 | 142000 | 2.6102 | 0.5189 |
| 2.7575 | 191.79 | 142500 | 2.6063 | 0.5195 |
| 2.7575 | 192.46 | 143000 | 2.6071 | 0.5198 |
| 2.7575 | 193.14 | 143500 | 2.6090 | 0.5191 |
| 2.7575 | 193.81 | 144000 | 2.6068 | 0.5200 |
| 2.7575 | 194.48 | 144500 | 2.6032 | 0.5200 |
| 2.7445 | 195.15 | 145000 | 2.6031 | 0.5200 |
| 2.7445 | 195.83 | 145500 | 2.6007 | 0.5201 |
| 2.7445 | 196.5 | 146000 | 2.5998 | 0.5203 |
| 2.7445 | 197.17 | 146500 | 2.5980 | 0.5208 |
| 2.7445 | 197.85 | 147000 | 2.5952 | 0.5211 |
| 2.7445 | 198.52 | 147500 | 2.5977 | 0.5210 |
| 2.7445 | 199.19 | 148000 | 2.5922 | 0.5212 |
| 2.7445 | 199.87 | 148500 | 2.5936 | 0.5211 |
| 2.7445 | 200.54 | 149000 | 2.5933 | 0.5219 |
| 2.7445 | 201.21 | 149500 | 2.5887 | 0.5219 |
| 2.7324 | 201.88 | 150000 | 2.5911 | 0.5215 |
| 2.7324 | 202.56 | 150500 | 2.5892 | 0.5219 |
| 2.7324 | 203.23 | 151000 | 2.5875 | 0.5218 |
| 2.7324 | 203.9 | 151500 | 2.5898 | 0.5220 |
| 2.7324 | 204.58 | 152000 | 2.5872 | 0.5223 |
| 2.7324 | 205.25 | 152500 | 2.5805 | 0.5226 |
| 2.7324 | 205.92 | 153000 | 2.5861 | 0.5225 |
| 2.7324 | 206.59 | 153500 | 2.5839 | 0.5223 |
| 2.7324 | 207.27 | 154000 | 2.5804 | 0.5232 |
| 2.7324 | 207.94 | 154500 | 2.5766 | 0.5235 |
| 2.7212 | 208.61 | 155000 | 2.5764 | 0.5233 |
| 2.7212 | 209.29 | 155500 | 2.5801 | 0.5233 |
| 2.7212 | 209.96 | 156000 | 2.5737 | 0.5241 |
| 2.7212 | 210.63 | 156500 | 2.5769 | 0.5236 |
| 2.7212 | 211.31 | 157000 | 2.5769 | 0.5237 |
| 2.7212 | 211.98 | 157500 | 2.5748 | 0.5239 |
| 2.7212 | 212.65 | 158000 | 2.5745 | 0.5230 |
| 2.7212 | 213.32 | 158500 | 2.5725 | 0.5240 |
| 2.7212 | 214.0 | 159000 | 2.5736 | 0.5239 |
| 2.7212 | 214.67 | 159500 | 2.5675 | 0.5252 |
| 2.7103 | 215.34 | 160000 | 2.5678 | 0.5245 |
| 2.7103 | 216.02 | 160500 | 2.5691 | 0.5250 |
| 2.7103 | 216.69 | 161000 | 2.5688 | 0.5245 |
| 2.7103 | 217.36 | 161500 | 2.5681 | 0.5251 |
| 2.7103 | 218.03 | 162000 | 2.5582 | 0.5255 |
| 2.7103 | 218.71 | 162500 | 2.5675 | 0.5247 |
| 2.7103 | 219.38 | 163000 | 2.5609 | 0.5259 |
| 2.7103 | 220.05 | 163500 | 2.5625 | 0.5254 |
| 2.7103 | 220.73 | 164000 | 2.5630 | 0.5254 |
| 2.7103 | 221.4 | 164500 | 2.5607 | 0.5265 |
| 2.7003 | 222.07 | 165000 | 2.5615 | 0.5260 |
| 2.7003 | 222.75 | 165500 | 2.5660 | 0.5248 |
| 2.7003 | 223.42 | 166000 | 2.5569 | 0.5263 |
| 2.7003 | 224.09 | 166500 | 2.5610 | 0.5255 |
| 2.7003 | 224.76 | 167000 | 2.5569 | 0.5263 |
| 2.7003 | 225.44 | 167500 | 2.5534 | 0.5265 |
| 2.7003 | 226.11 | 168000 | 2.5573 | 0.5259 |
| 2.7003 | 226.78 | 168500 | 2.5559 | 0.5264 |
| 2.7003 | 227.46 | 169000 | 2.5508 | 0.5277 |
| 2.7003 | 228.13 | 169500 | 2.5500 | 0.5276 |
| 2.6915 | 228.8 | 170000 | 2.5501 | 0.5270 |
| 2.6915 | 229.47 | 170500 | 2.5508 | 0.5273 |
| 2.6915 | 230.15 | 171000 | 2.5523 | 0.5267 |
| 2.6915 | 230.82 | 171500 | 2.5464 | 0.5276 |
| 2.6915 | 231.49 | 172000 | 2.5482 | 0.5271 |
| 2.6915 | 232.17 | 172500 | 2.5486 | 0.5270 |
| 2.6915 | 232.84 | 173000 | 2.5474 | 0.5275 |
| 2.6915 | 233.51 | 173500 | 2.5483 | 0.5270 |
| 2.6915 | 234.19 | 174000 | 2.5480 | 0.5276 |
| 2.6915 | 234.86 | 174500 | 2.5486 | 0.5278 |
| 2.6833 | 235.53 | 175000 | 2.5484 | 0.5273 |
| 2.6833 | 236.2 | 175500 | 2.5436 | 0.5277 |
| 2.6833 | 236.88 | 176000 | 2.5448 | 0.5278 |
| 2.6833 | 237.55 | 176500 | 2.5430 | 0.5284 |
| 2.6833 | 238.22 | 177000 | 2.5433 | 0.5279 |
| 2.6833 | 238.9 | 177500 | 2.5398 | 0.5288 |
| 2.6833 | 239.57 | 178000 | 2.5424 | 0.5282 |
| 2.6833 | 240.24 | 178500 | 2.5371 | 0.5291 |
| 2.6833 | 240.91 | 179000 | 2.5372 | 0.5294 |
| 2.6833 | 241.59 | 179500 | 2.5368 | 0.5290 |
| 2.6753 | 242.26 | 180000 | 2.5383 | 0.5289 |
| 2.6753 | 242.93 | 180500 | 2.5387 | 0.5289 |
| 2.6753 | 243.61 | 181000 | 2.5351 | 0.5295 |
| 2.6753 | 244.28 | 181500 | 2.5340 | 0.5296 |
| 2.6753 | 244.95 | 182000 | 2.5349 | 0.5289 |
| 2.6753 | 245.63 | 182500 | 2.5358 | 0.5295 |
| 2.6753 | 246.3 | 183000 | 2.5333 | 0.5299 |
| 2.6753 | 246.97 | 183500 | 2.5363 | 0.5292 |
| 2.6753 | 247.64 | 184000 | 2.5323 | 0.5298 |
| 2.6753 | 248.32 | 184500 | 2.5286 | 0.5299 |
| 2.6679 | 248.99 | 185000 | 2.5276 | 0.5306 |
| 2.6679 | 249.66 | 185500 | 2.5249 | 0.5308 |
| 2.6679 | 250.34 | 186000 | 2.5308 | 0.5302 |
| 2.6679 | 251.01 | 186500 | 2.5307 | 0.5297 |
| 2.6679 | 251.68 | 187000 | 2.5293 | 0.5305 |
| 2.6679 | 252.36 | 187500 | 2.5255 | 0.5306 |
| 2.6679 | 253.03 | 188000 | 2.5244 | 0.5312 |
| 2.6679 | 253.7 | 188500 | 2.5278 | 0.5305 |
| 2.6679 | 254.37 | 189000 | 2.5212 | 0.5317 |
| 2.6679 | 255.05 | 189500 | 2.5256 | 0.5307 |
| 2.6611 | 255.72 | 190000 | 2.5273 | 0.5303 |
| 2.6611 | 256.39 | 190500 | 2.5222 | 0.5310 |
| 2.6611 | 257.07 | 191000 | 2.5237 | 0.5311 |
| 2.6611 | 257.74 | 191500 | 2.5258 | 0.5309 |
| 2.6611 | 258.41 | 192000 | 2.5219 | 0.5313 |
| 2.6611 | 259.08 | 192500 | 2.5243 | 0.5314 |
| 2.6611 | 259.76 | 193000 | 2.5203 | 0.5319 |
| 2.6611 | 260.43 | 193500 | 2.5205 | 0.5313 |
| 2.6611 | 261.1 | 194000 | 2.5205 | 0.5322 |
| 2.6611 | 261.78 | 194500 | 2.5196 | 0.5317 |
| 2.655 | 262.45 | 195000 | 2.5199 | 0.5315 |
| 2.655 | 263.12 | 195500 | 2.5226 | 0.5315 |
| 2.655 | 263.8 | 196000 | 2.5175 | 0.5316 |
| 2.655 | 264.47 | 196500 | 2.5160 | 0.5322 |
| 2.655 | 265.14 | 197000 | 2.5185 | 0.5316 |
| 2.655 | 265.81 | 197500 | 2.5133 | 0.5322 |
| 2.655 | 266.49 | 198000 | 2.5163 | 0.5318 |
| 2.655 | 267.16 | 198500 | 2.5135 | 0.5325 |
| 2.655 | 267.83 | 199000 | 2.5132 | 0.5326 |
| 2.655 | 268.51 | 199500 | 2.5148 | 0.5323 |
| 2.6486 | 269.18 | 200000 | 2.5194 | 0.5317 |
| 2.6486 | 269.85 | 200500 | 2.5162 | 0.5321 |
| 2.6486 | 270.52 | 201000 | 2.5090 | 0.5332 |
| 2.6486 | 271.2 | 201500 | 2.5126 | 0.5325 |
| 2.6486 | 271.87 | 202000 | 2.5155 | 0.5320 |
| 2.6486 | 272.54 | 202500 | 2.5099 | 0.5329 |
| 2.6486 | 273.22 | 203000 | 2.5130 | 0.5325 |
| 2.6486 | 273.89 | 203500 | 2.5064 | 0.5329 |
| 2.6486 | 274.56 | 204000 | 2.5154 | 0.5319 |
| 2.6486 | 275.24 | 204500 | 2.5097 | 0.5329 |
| 2.6433 | 275.91 | 205000 | 2.5075 | 0.5334 |
| 2.6433 | 276.58 | 205500 | 2.5120 | 0.5325 |
| 2.6433 | 277.25 | 206000 | 2.5100 | 0.5329 |
| 2.6433 | 277.93 | 206500 | 2.5115 | 0.5332 |
| 2.6433 | 278.6 | 207000 | 2.5071 | 0.5332 |
| 2.6433 | 279.27 | 207500 | 2.5075 | 0.5335 |
| 2.6433 | 279.95 | 208000 | 2.5020 | 0.5338 |
| 2.6433 | 280.62 | 208500 | 2.5025 | 0.5340 |
| 2.6433 | 281.29 | 209000 | 2.5064 | 0.5333 |
| 2.6433 | 281.96 | 209500 | 2.5038 | 0.5336 |
| 2.6383 | 282.64 | 210000 | 2.5041 | 0.5340 |
| 2.6383 | 283.31 | 210500 | 2.5075 | 0.5336 |
| 2.6383 | 283.98 | 211000 | 2.5028 | 0.5333 |
| 2.6383 | 284.66 | 211500 | 2.5008 | 0.5340 |
| 2.6383 | 285.33 | 212000 | 2.5005 | 0.5345 |
| 2.6383 | 286.0 | 212500 | 2.5020 | 0.5334 |
| 2.6383 | 286.68 | 213000 | 2.5011 | 0.5344 |
| 2.6383 | 287.35 | 213500 | 2.5028 | 0.5338 |
| 2.6383 | 288.02 | 214000 | 2.4970 | 0.5340 |
| 2.6383 | 288.69 | 214500 | 2.4995 | 0.5343 |
| 2.6336 | 289.37 | 215000 | 2.5010 | 0.5343 |
| 2.6336 | 290.04 | 215500 | 2.5060 | 0.5336 |
| 2.6336 | 290.71 | 216000 | 2.4955 | 0.5347 |
| 2.6336 | 291.39 | 216500 | 2.4972 | 0.5349 |
| 2.6336 | 292.06 | 217000 | 2.4977 | 0.5349 |
| 2.6336 | 292.73 | 217500 | 2.4973 | 0.5346 |
| 2.6336 | 293.4 | 218000 | 2.4981 | 0.5346 |
| 2.6336 | 294.08 | 218500 | 2.4941 | 0.5346 |
| 2.6336 | 294.75 | 219000 | 2.4978 | 0.5350 |
| 2.6336 | 295.42 | 219500 | 2.4990 | 0.5343 |
| 2.6288 | 296.1 | 220000 | 2.4929 | 0.5347 |
| 2.6288 | 296.77 | 220500 | 2.4937 | 0.5349 |
| 2.6288 | 297.44 | 221000 | 2.4938 | 0.5349 |
| 2.6288 | 298.12 | 221500 | 2.4938 | 0.5343 |
| 2.6288 | 298.79 | 222000 | 2.4924 | 0.5354 |
| 2.6288 | 299.46 | 222500 | 2.4953 | 0.5348 |
| 2.6288 | 300.13 | 223000 | 2.4930 | 0.5347 |
| 2.6288 | 300.81 | 223500 | 2.4934 | 0.5353 |
| 2.6288 | 301.48 | 224000 | 2.4942 | 0.5348 |
| 2.6288 | 302.15 | 224500 | 2.4960 | 0.5344 |
| 2.6246 | 302.83 | 225000 | 2.4875 | 0.5357 |
| 2.6246 | 303.5 | 225500 | 2.4898 | 0.5355 |
| 2.6246 | 304.17 | 226000 | 2.4847 | 0.5366 |
| 2.6246 | 304.84 | 226500 | 2.4970 | 0.5348 |
| 2.6246 | 305.52 | 227000 | 2.4905 | 0.5356 |
| 2.6246 | 306.19 | 227500 | 2.4873 | 0.5361 |
| 2.6246 | 306.86 | 228000 | 2.4939 | 0.5350 |
| 2.6246 | 307.54 | 228500 | 2.4910 | 0.5360 |
| 2.6246 | 308.21 | 229000 | 2.4886 | 0.5355 |
| 2.6246 | 308.88 | 229500 | 2.4890 | 0.5369 |
| 2.6207 | 309.56 | 230000 | 2.4900 | 0.5360 |
| 2.6207 | 310.23 | 230500 | 2.4885 | 0.5354 |
| 2.6207 | 310.9 | 231000 | 2.4895 | 0.5358 |
| 2.6207 | 311.57 | 231500 | 2.4871 | 0.5358 |
| 2.6207 | 312.25 | 232000 | 2.4914 | 0.5352 |
| 2.6207 | 312.92 | 232500 | 2.4843 | 0.5366 |
| 2.6207 | 313.59 | 233000 | 2.4837 | 0.5365 |
| 2.6207 | 314.27 | 233500 | 2.4883 | 0.5360 |
| 2.6207 | 314.94 | 234000 | 2.4839 | 0.5366 |
| 2.6207 | 315.61 | 234500 | 2.4854 | 0.5366 |
| 2.6171 | 316.29 | 235000 | 2.4833 | 0.5367 |
| 2.6171 | 316.96 | 235500 | 2.4783 | 0.5374 |
| 2.6171 | 317.63 | 236000 | 2.4807 | 0.5370 |
| 2.6171 | 318.3 | 236500 | 2.4824 | 0.5366 |
| 2.6171 | 318.98 | 237000 | 2.4857 | 0.5361 |
| 2.6171 | 319.65 | 237500 | 2.4817 | 0.5366 |
| 2.6171 | 320.32 | 238000 | 2.4855 | 0.5364 |
| 2.6171 | 321.0 | 238500 | 2.4834 | 0.5367 |
| 2.6171 | 321.67 | 239000 | 2.4831 | 0.5363 |
| 2.6171 | 322.34 | 239500 | 2.4806 | 0.5370 |
| 2.6134 | 323.01 | 240000 | 2.4842 | 0.5365 |
| 2.6134 | 323.69 | 240500 | 2.4830 | 0.5364 |
| 2.6134 | 324.36 | 241000 | 2.4822 | 0.5367 |
| 2.6134 | 325.03 | 241500 | 2.4805 | 0.5373 |
| 2.6134 | 325.71 | 242000 | 2.4838 | 0.5365 |
| 2.6134 | 326.38 | 242500 | 2.4776 | 0.5371 |
| 2.6134 | 327.05 | 243000 | 2.4786 | 0.5376 |
| 2.6134 | 327.73 | 243500 | 2.4824 | 0.5371 |
| 2.6134 | 328.4 | 244000 | 2.4842 | 0.5363 |
| 2.6134 | 329.07 | 244500 | 2.4790 | 0.5375 |
| 2.6107 | 329.74 | 245000 | 2.4770 | 0.5372 |
| 2.6107 | 330.42 | 245500 | 2.4805 | 0.5375 |
| 2.6107 | 331.09 | 246000 | 2.4839 | 0.5370 |
| 2.6107 | 331.76 | 246500 | 2.4802 | 0.5367 |
| 2.6107 | 332.44 | 247000 | 2.4796 | 0.5373 |
| 2.6107 | 333.11 | 247500 | 2.4736 | 0.5377 |
| 2.6107 | 333.78 | 248000 | 2.4789 | 0.5374 |
| 2.6107 | 334.45 | 248500 | 2.4761 | 0.5375 |
| 2.6107 | 335.13 | 249000 | 2.4728 | 0.5379 |
| 2.6107 | 335.8 | 249500 | 2.4702 | 0.5386 |
| 2.608 | 336.47 | 250000 | 2.4764 | 0.5377 |
| 2.608 | 337.15 | 250500 | 2.4738 | 0.5380 |
| 2.608 | 337.82 | 251000 | 2.4795 | 0.5371 |
| 2.608 | 338.49 | 251500 | 2.4702 | 0.5387 |
| 2.608 | 339.17 | 252000 | 2.4823 | 0.5369 |
| 2.608 | 339.84 | 252500 | 2.4741 | 0.5382 |
| 2.608 | 340.51 | 253000 | 2.4718 | 0.5382 |
| 2.608 | 341.18 | 253500 | 2.4731 | 0.5378 |
| 2.608 | 341.86 | 254000 | 2.4780 | 0.5373 |
| 2.608 | 342.53 | 254500 | 2.4706 | 0.5388 |
| 2.6058 | 343.2 | 255000 | 2.4707 | 0.5386 |
| 2.6058 | 343.88 | 255500 | 2.4725 | 0.5380 |
| 2.6058 | 344.55 | 256000 | 2.4744 | 0.5382 |
| 2.6058 | 345.22 | 256500 | 2.4766 | 0.5374 |
| 2.6058 | 345.89 | 257000 | 2.4736 | 0.5378 |
| 2.6058 | 346.57 | 257500 | 2.4731 | 0.5383 |
| 2.6058 | 347.24 | 258000 | 2.4754 | 0.5377 |
| 2.6058 | 347.91 | 258500 | 2.4749 | 0.5382 |
| 2.6058 | 348.59 | 259000 | 2.4735 | 0.5378 |
| 2.6058 | 349.26 | 259500 | 2.4716 | 0.5384 |
| 2.6027 | 349.93 | 260000 | 2.4726 | 0.5378 |
| 2.6027 | 350.61 | 260500 | 2.4733 | 0.5378 |
| 2.6027 | 351.28 | 261000 | 2.4698 | 0.5386 |
| 2.6027 | 351.95 | 261500 | 2.4702 | 0.5388 |
| 2.6027 | 352.62 | 262000 | 2.4673 | 0.5390 |
| 2.6027 | 353.3 | 262500 | 2.4683 | 0.5390 |
| 2.6027 | 353.97 | 263000 | 2.4739 | 0.5379 |
| 2.6027 | 354.64 | 263500 | 2.4743 | 0.5382 |
| 2.6027 | 355.32 | 264000 | 2.4694 | 0.5388 |
| 2.6027 | 355.99 | 264500 | 2.4671 | 0.5391 |
| 2.6009 | 356.66 | 265000 | 2.4747 | 0.5383 |
| 2.6009 | 357.34 | 265500 | 2.4703 | 0.5382 |
| 2.6009 | 358.01 | 266000 | 2.4695 | 0.5388 |
| 2.6009 | 358.68 | 266500 | 2.4725 | 0.5380 |
| 2.6009 | 359.35 | 267000 | 2.4639 | 0.5397 |
| 2.6009 | 360.03 | 267500 | 2.4686 | 0.5385 |
| 2.6009 | 360.7 | 268000 | 2.4698 | 0.5386 |
| 2.6009 | 361.37 | 268500 | 2.4694 | 0.5386 |
| 2.6009 | 362.05 | 269000 | 2.4680 | 0.5390 |
| 2.6009 | 362.72 | 269500 | 2.4728 | 0.5383 |
| 2.5989 | 363.39 | 270000 | 2.4697 | 0.5385 |
| 2.5989 | 364.06 | 270500 | 2.4701 | 0.5387 |
| 2.5989 | 364.74 | 271000 | 2.4702 | 0.5387 |
| 2.5989 | 365.41 | 271500 | 2.4687 | 0.5390 |
| 2.5989 | 366.08 | 272000 | 2.4725 | 0.5382 |
| 2.5989 | 366.76 | 272500 | 2.4673 | 0.5384 |
| 2.5989 | 367.43 | 273000 | 2.4659 | 0.5390 |
| 2.5989 | 368.1 | 273500 | 2.4686 | 0.5389 |
| 2.5989 | 368.78 | 274000 | 2.4677 | 0.5382 |
| 2.5989 | 369.45 | 274500 | 2.4632 | 0.5389 |
| 2.5977 | 370.12 | 275000 | 2.4676 | 0.5385 |
| 2.5977 | 370.79 | 275500 | 2.4699 | 0.5388 |
| 2.5977 | 371.47 | 276000 | 2.4629 | 0.5394 |
| 2.5977 | 372.14 | 276500 | 2.4720 | 0.5380 |
| 2.5977 | 372.81 | 277000 | 2.4678 | 0.5391 |
| 2.5977 | 373.49 | 277500 | 2.4643 | 0.5396 |
| 2.5977 | 374.16 | 278000 | 2.4654 | 0.5395 |
| 2.5977 | 374.83 | 278500 | 2.4645 | 0.5395 |
| 2.5977 | 375.5 | 279000 | 2.4649 | 0.5391 |
| 2.5977 | 376.18 | 279500 | 2.4639 | 0.5392 |
| 2.5961 | 376.85 | 280000 | 2.4659 | 0.5389 |
| 2.5961 | 377.52 | 280500 | 2.4681 | 0.5385 |
| 2.5961 | 378.2 | 281000 | 2.4641 | 0.5390 |
| 2.5961 | 378.87 | 281500 | 2.4658 | 0.5393 |
| 2.5961 | 379.54 | 282000 | 2.4687 | 0.5388 |
| 2.5961 | 380.22 | 282500 | 2.4690 | 0.5385 |
| 2.5961 | 380.89 | 283000 | 2.4679 | 0.5391 |
| 2.5961 | 381.56 | 283500 | 2.4612 | 0.5395 |
| 2.5961 | 382.23 | 284000 | 2.4624 | 0.5395 |
| 2.5961 | 382.91 | 284500 | 2.4668 | 0.5390 |
| 2.5947 | 383.58 | 285000 | 2.4663 | 0.5389 |
| 2.5947 | 384.25 | 285500 | 2.4654 | 0.5387 |
| 2.5947 | 384.93 | 286000 | 2.4708 | 0.5385 |
| 2.5947 | 385.6 | 286500 | 2.4669 | 0.5388 |
| 2.5947 | 386.27 | 287000 | 2.4612 | 0.5396 |
| 2.5947 | 386.94 | 287500 | 2.4666 | 0.5392 |
| 2.5947 | 387.62 | 288000 | 2.4653 | 0.5393 |
| 2.5947 | 388.29 | 288500 | 2.4666 | 0.5390 |
| 2.5947 | 388.96 | 289000 | 2.4684 | 0.5388 |
| 2.5947 | 389.64 | 289500 | 2.4660 | 0.5394 |
| 2.5936 | 390.31 | 290000 | 2.4642 | 0.5395 |
| 2.5936 | 390.98 | 290500 | 2.4627 | 0.5403 |
| 2.5936 | 391.66 | 291000 | 2.4683 | 0.5389 |
| 2.5936 | 392.33 | 291500 | 2.4667 | 0.5387 |
| 2.5936 | 393.0 | 292000 | 2.4660 | 0.5389 |
| 2.5936 | 393.67 | 292500 | 2.4673 | 0.5390 |
| 2.5936 | 394.35 | 293000 | 2.4645 | 0.5391 |
| 2.5936 | 395.02 | 293500 | 2.4693 | 0.5389 |
| 2.5936 | 395.69 | 294000 | 2.4692 | 0.5385 |
| 2.5936 | 396.37 | 294500 | 2.4653 | 0.5385 |
| 2.5934 | 397.04 | 295000 | 2.4661 | 0.5390 |
| 2.5934 | 397.71 | 295500 | 2.4630 | 0.5394 |
| 2.5934 | 398.38 | 296000 | 2.4641 | 0.5390 |
| 2.5934 | 399.06 | 296500 | 2.4636 | 0.5392 |
| 2.5934 | 399.73 | 297000 | 2.4650 | 0.5392 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
### Additional configurations
```
data:
dataset_name: cc100
lang: nl
overwrite_cache: False
validation_split_percentage: 5
max_seq_length: 512
preprocessing_num_workers: 8
mlm_probability: 0.15
line_by_line: False
pad_to_max_length: False
max_train_samples: -1
max_eval_samples: -1
training:
do_train: True
do_eval: True
do_predict: True
resume_from_checkpoint: False
evaluation_strategy: steps
eval_steps: 500
per_device_train_batch_size: 16
per_device_eval_batch_size: 16
gradient_accumulation_steps: 32
eval_accumulation_steps: 1
learning_rate: 5e-5
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-8
max_grad_norm: 1.0
num_train_epochs: 400.0
lr_scheduler_type: linear
fp16: False
warmup_steps: 8000
seed: 703
``` |
kyoumiaoi/wav2vec2-base-timit-demo-google-colab | kyoumiaoi | 2022-08-02T08:28:06Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-08-02T06:15:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5499
- Wer: 0.3435
## Model description
More information needed
## Intended uses & limitations
More information needed
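A hedged transcription sketch is shown below; the audio path is a placeholder, and the model expects 16 kHz mono audio like the base wav2vec2 checkpoints.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kyoumiaoi/wav2vec2-base-timit-demo-google-colab")
print(asr("sample.wav"))  # "sample.wav" is a placeholder path
```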
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.599 | 1.0 | 500 | 2.1267 | 0.9976 |
| 1.016 | 2.01 | 1000 | 0.6193 | 0.5443 |
| 0.5299 | 3.01 | 1500 | 0.5324 | 0.4889 |
| 0.3626 | 4.02 | 2000 | 0.4525 | 0.4402 |
| 0.2854 | 5.02 | 2500 | 0.4266 | 0.4233 |
| 0.2373 | 6.02 | 3000 | 0.4713 | 0.4082 |
| 0.1979 | 7.03 | 3500 | 0.4778 | 0.4018 |
| 0.1761 | 8.03 | 4000 | 0.4585 | 0.3947 |
| 0.1537 | 9.04 | 4500 | 0.5297 | 0.3946 |
| 0.1379 | 10.04 | 5000 | 0.4988 | 0.3856 |
| 0.124 | 11.04 | 5500 | 0.5262 | 0.3852 |
| 0.11 | 12.05 | 6000 | 0.5545 | 0.3854 |
| 0.106 | 13.05 | 6500 | 0.5196 | 0.3805 |
| 0.0918 | 14.06 | 7000 | 0.4515 | 0.3655 |
| 0.0829 | 15.06 | 7500 | 0.5087 | 0.3722 |
| 0.0775 | 16.06 | 8000 | 0.4980 | 0.3781 |
| 0.0685 | 17.07 | 8500 | 0.5564 | 0.3650 |
| 0.0655 | 18.07 | 9000 | 0.5323 | 0.3672 |
| 0.0578 | 19.08 | 9500 | 0.5675 | 0.3637 |
| 0.052 | 20.08 | 10000 | 0.5604 | 0.3664 |
| 0.0512 | 21.08 | 10500 | 0.5922 | 0.3804 |
| 0.0431 | 22.09 | 11000 | 0.6379 | 0.3754 |
| 0.0428 | 23.09 | 11500 | 0.5905 | 0.3764 |
| 0.0393 | 24.1 | 12000 | 0.5667 | 0.3542 |
| 0.0326 | 25.1 | 12500 | 0.5612 | 0.3537 |
| 0.0289 | 26.1 | 13000 | 0.5618 | 0.3475 |
| 0.0298 | 27.11 | 13500 | 0.5578 | 0.3439 |
| 0.0264 | 28.11 | 14000 | 0.5547 | 0.3433 |
| 0.026 | 29.12 | 14500 | 0.5499 | 0.3435 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
psroy/wav2vec2-base-timit-demo-google-colab | psroy | 2022-08-02T07:12:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-29T04:40:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5366
- Wer: 0.3452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5499 | 2.01 | 500 | 1.9780 | 0.9933 |
| 0.7517 | 4.02 | 1000 | 0.4654 | 0.4720 |
| 0.2953 | 6.02 | 1500 | 0.4202 | 0.4049 |
| 0.1809 | 8.03 | 2000 | 0.4276 | 0.3759 |
| 0.1335 | 10.04 | 2500 | 0.4458 | 0.3774 |
| 0.107 | 12.05 | 3000 | 0.4559 | 0.3707 |
| 0.0923 | 14.06 | 3500 | 0.4607 | 0.3659 |
| 0.0753 | 16.06 | 4000 | 0.4699 | 0.3531 |
| 0.0658 | 18.07 | 4500 | 0.4507 | 0.3588 |
| 0.0569 | 20.08 | 5000 | 0.5089 | 0.3532 |
| 0.0493 | 22.09 | 5500 | 0.5481 | 0.3515 |
| 0.043 | 24.1 | 6000 | 0.5066 | 0.3528 |
| 0.0388 | 26.1 | 6500 | 0.5418 | 0.3534 |
| 0.034 | 28.11 | 7000 | 0.5566 | 0.3524 |
| 0.03 | 30.12 | 7500 | 0.4994 | 0.3437 |
| 0.0274 | 32.13 | 8000 | 0.5588 | 0.3520 |
| 0.0239 | 34.14 | 8500 | 0.5328 | 0.3458 |
| 0.0212 | 36.14 | 9000 | 0.5221 | 0.3467 |
| 0.0186 | 38.15 | 9500 | 0.5366 | 0.3452 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
JAlexis/bert001 | JAlexis | 2022-08-02T02:59:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-08-02T01:35:56Z | ---
language: en
#epoch 7
#batch size 14
#lr 5e-5
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. "
- text: "How can I protect myself against covid-19?"
context: " "
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/PruebaBert"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'How can I protect myself against covid-19?',
'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19). ',
}
nlp(inputs)
```
|
JAlexis/PruebaBert | JAlexis | 2022-08-02T01:46:49Z | 27 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad2",
"dataset:cord19",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:04Z | ---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
- cord19
metrics:
- EM (exact match)
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19)."
- text: "How can I protect myself against covid-19?"
context: " "
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/PruebaBert"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'How can I protect myself against covid-19?',
'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19). ',
}
nlp(inputs)
```
## Overview
```
Language model: deepset/bert-base-cased-squad2
Language: English
Downstream-task: Q&A
Datasets: CORD-19 from 31st January 2022
Code: Haystack and FARM
Infrastructure: Tesla T4
```
## Hyperparameters
```
batch_size = 8
n_epochs = 9
max_seq_len = max_length
learning_rate = AdamW: 1e-5
```
|
rdruce/ddpm-cheese-32 | rdruce | 2022-08-02T00:34:19Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-08-02T00:05:54Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-cheese-32
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
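# A minimal sampling sketch, assuming a recent `diffusers` release;
# the output filename below is illustrative.
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("rdruce/ddpm-cheese-32")
image = pipe(batch_size=1).images[0]  # the pipeline returns PIL images
image.save("ddpm_cheese_sample.png")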
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rdruce/ddpm-cheese-32/tensorboard?#scalars)
|
muhtasham/bert-tiny-finetuned-xglue-ner | muhtasham | 2022-08-01T23:20:07Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:xglue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-01T23:13:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xglue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-tiny-finetuned-xglue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xglue
type: xglue
config: ner
split: train
args: ner
metrics:
- name: Precision
type: precision
value: 0.630759453447728
- name: Recall
type: recall
value: 0.6681252103668799
- name: F1
type: f1
value: 0.6489048708728343
- name: Accuracy
type: accuracy
value: 0.9274310133922189
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-xglue-ner
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the xglue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2489
- Precision: 0.6308
- Recall: 0.6681
- F1: 0.6489
- Accuracy: 0.9274
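As a quick illustration (not included in the original card), the model can be loaded with the `transformers` token-classification pipeline; the example sentence is invented:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="muhtasham/bert-tiny-finetuned-xglue-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Angela Merkel visited Paris in 2019."))
```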
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4082 | 1.0 | 1756 | 0.3326 | 0.5600 | 0.5798 | 0.5697 | 0.9118 |
| 0.2974 | 2.0 | 3512 | 0.2635 | 0.6143 | 0.6562 | 0.6346 | 0.9248 |
| 0.2741 | 3.0 | 5268 | 0.2489 | 0.6308 | 0.6681 | 0.6489 | 0.9274 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SharpAI/mal-tls-mobilebert | SharpAI | 2022-08-01T22:53:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"mobilebert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-01T22:45:11Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls-mobilebert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal_tls-mobilebert
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cansen88/turkishReviews_5_topic | cansen88 | 2022-08-01T22:13:12Z | 4 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-08-01T21:21:12Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: turkishReviews_5_topic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReviews_5_topic
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.8939
- Validation Loss: 6.8949
- Epoch: 2
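A minimal generation sketch (not in the original card): the repository ships TensorFlow weights, so the pipeline is loaded with `framework="tf"`; the Turkish prompt is illustrative:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="cansen88/turkishReviews_5_topic",
    framework="tf",
)
print(generator("Bu ürün", max_length=40)[0]["generated_text"])
```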
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 756, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.0049 | 6.8949 | 0 |
| 6.8943 | 6.8949 | 1 |
| 6.8939 | 6.8949 | 2 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Intel/bert-large-uncased-squadv1.1-sparse-80-1x4-block-pruneofa | Intel | 2022-08-01T21:04:22Z | 75 | 1 | transformers | [
"transformers",
"pytorch",
"onnx",
"bert",
"question-answering",
"en",
"arxiv:2111.05754",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-27T20:17:27Z | ---
language: en
license: apache-2.0
---
# 80% 1x4 Block Sparse BERT-Large (uncased) Fine Tuned on SQuADv1.1
This model is a result of fine-tuning a Prune OFA 80% 1x4 block sparse pre-trained BERT-Large combined with knowledge distillation.
This model yields the following results on the SQuADv1.1 development set:<br>
`{"exact_match": 84.673, "f1": 91.174}`
For further details see our paper, [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), and our open source implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
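A minimal usage sketch, assuming the `transformers` question-answering pipeline; the question and context below are illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Intel/bert-large-uncased-squadv1.1-sparse-80-1x4-block-pruneofa",
)
print(qa(
    question="What does pruning reduce?",
    context="Pruning removes redundant weights, reducing model size and inference cost.",
))
```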
|
mrm8488/pyramidsrnd | mrm8488 | 2022-08-01T20:36:43Z | 9 | 1 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2022-08-01T20:36:37Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: mrm8488/pyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
muhtasham/bert-tiny-finetuned-pile-of-law-tos | muhtasham | 2022-08-01T20:24:25Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-01T18:22:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-tiny-finetuned-pile-of-law-tos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-pile-of-law-tos
This model is an MLM fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the [pile-of-law/tos](https://huggingface.co/datasets/pile-of-law/pile-of-law) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3545
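A minimal fill-mask sketch (not part of the original card); the example sentence is invented:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="muhtasham/bert-tiny-finetuned-pile-of-law-tos")
# BERT-style models use the [MASK] token.
print(fill("The user agrees to the [MASK] of service."))
```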
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 264 | 3.5896 |
| 3.8119 | 2.0 | 528 | 3.5598 |
| 3.8119 | 3.0 | 792 | 3.5263 |
| 3.7028 | 4.0 | 1056 | 3.4982 |
| 3.7028 | 5.0 | 1320 | 3.5170 |
| 3.6286 | 6.0 | 1584 | 3.5143 |
| 3.6286 | 7.0 | 1848 | 3.4477 |
| 3.553 | 8.0 | 2112 | 3.4044 |
| 3.553 | 9.0 | 2376 | 3.4670 |
| 3.5179 | 10.0 | 2640 | 3.3991 |
| 3.5179 | 11.0 | 2904 | 3.4330 |
| 3.4784 | 12.0 | 3168 | 3.4671 |
| 3.4784 | 13.0 | 3432 | 3.3489 |
| 3.4535 | 14.0 | 3696 | 3.4354 |
| 3.4535 | 15.0 | 3960 | 3.4023 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SharpAI/mal-tls-bert-base-relu-w8a8 | SharpAI | 2022-08-01T20:23:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-01T20:22:51Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls-bert-base-relu-w8a8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal_tls-bert-base-relu-w8a8
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.10.3
|
arize-ai/resnet-50-fashion-mnist-quality-drift | arize-ai | 2022-08-01T19:55:57Z | 182 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:fashion_mnist_quality_drift",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-08-01T19:32:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fashion_mnist_quality_drift
metrics:
- accuracy
- f1
model-index:
- name: resnet-50-fashion-mnist-quality-drift
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: fashion_mnist_quality_drift
type: fashion_mnist_quality_drift
config: default
split: training
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.73
- name: F1
type: f1
value: 0.7289360255705818
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-fashion-mnist-quality-drift
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the fashion_mnist_quality_drift dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7473
- Accuracy: 0.73
- F1: 0.7289
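A minimal inference sketch, assuming the `transformers` image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="arize-ai/resnet-50-fashion-mnist-quality-drift",
)
print(classifier("shirt.png"))  # placeholder path to a Fashion-MNIST-style image
```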
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.5138 | 1.0 | 750 | 0.9237 | 0.684 | 0.6826 |
| 0.9377 | 2.0 | 1500 | 0.7861 | 0.722 | 0.7253 |
| 0.8276 | 3.0 | 2250 | 0.7473 | 0.73 | 0.7289 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SharpAI/mal-tls-bert-base-relu | SharpAI | 2022-08-01T19:54:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-01T19:53:07Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: mal_tls-bert-base-relu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal_tls-bert-base-relu
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es | mrm8488 | 2022-08-01T19:41:40Z | 1,199 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"es",
"dataset:stsb_multi_mt",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
language: es
thumbnail: https://imgur.com/a/G77ZqQN
pipeline_tag: sentence-similarity
datasets:
- stsb_multi_mt
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Distiluse-m-v2 fine-tuned on stsb_multi_mt for Spanish Semantic Textual Similarity
This is a [sentence-transformers](https://www.SBERT.net) model (based on distiluse-base-multilingual-cased-v2): it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Nerea va a comprar un cuadro usando bitcoins", "Se puede comprar arte con bitcoins"]
model = SentenceTransformer('mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Nerea va a comprar un cuadro usando bitcoins", "Se puede comprar arte con bitcoins"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es')
model = AutoModel.from_pretrained('mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## How to evaluate
```py
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
test_data = load_dataset('stsb_multi_mt', 'es', split='test')
test_data = test_data.rename_columns({'similarity_score': 'label'})
test_data = test_data.map(lambda x: {'label': x['label'] / 5.0})
samples = []
for sample in test_data:
samples.append(InputExample(
texts=[sample['sentence1'], sample['sentence2']],
label=sample['label']
))
evaluator = EmbeddingSimilarityEvaluator.from_input_examples(
samples, write_csv=False
)
model = SentenceTransformer('mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es')
evaluator(model)
# It outputs: 0.7604056195656299
```
## Evaluation Results
**Spearman’s rank correlation: 0.7604056195656299**
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mrm8488/distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 906 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 271,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
vidyavenkappa/pegasus-samsum | vidyavenkappa | 2022-08-01T18:30:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-07-30T12:10:24Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3086
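A minimal summarization sketch (not included in the original card); the dialogue is invented in the SAMSum style:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="vidyavenkappa/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=32, min_length=5)[0]["summary_text"])
```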
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6151 | 0.54 | 500 | 1.4238 |
| 1.3357 | 1.09 | 1000 | 1.3629 |
| 1.4423 | 1.63 | 1500 | 1.3380 |
| 1.3747 | 2.17 | 2000 | 1.3218 |
| 1.3397 | 2.72 | 2500 | 1.3124 |
| 1.2706 | 3.26 | 3000 | 1.3149 |
| 1.1849 | 3.8 | 3500 | 1.3120 |
| 1.2222 | 4.35 | 4000 | 1.3120 |
| 1.2339 | 4.89 | 4500 | 1.3086 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
turhancan97/Reinforce-2 | turhancan97 | 2022-08-01T16:45:20Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-01T16:44:23Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2
results:
- metrics:
- type: mean_reward
value: 9.40 +/- 13.66
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
silviacamplani/twitter-roberta-base-finetuned-ner-wnut | silviacamplani | 2022-08-01T16:26:39Z | 5 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"roberta",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-01T15:50:19Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/twitter-roberta-base-finetuned-ner-wnut
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/twitter-roberta-base-finetuned-ner-wnut
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0812
- Validation Loss: 0.2553
- Train Precision: 0.6263
- Train Recall: 0.5191
- Train F1: 0.5677
- Train Accuracy: 0.9398
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'Adam', 'config': {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.0813 | 0.2553 | 0.6263 | 0.5191 | 0.5677 | 0.9398 | 0 |
| 0.0815 | 0.2553 | 0.6263 | 0.5191 | 0.5677 | 0.9398 | 1 |
| 0.0812 | 0.2553 | 0.6263 | 0.5191 | 0.5677 | 0.9398 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
thocheat/v2-fine-tune-wav2vec2-Vietnamese-ARS-demo | thocheat | 2022-08-01T16:01:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-08-01T14:23:01Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: v2-fine-tune-wav2vec2-Vietnamese-ARS-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v2-fine-tune-wav2vec2-Vietnamese-ARS-demo
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2515
- Wer: 0.2235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.8651 | 0.34 | 500 | 3.6919 | 0.9999 |
| 3.54 | 0.69 | 1000 | 3.3584 | 1.0 |
| 2.9478 | 1.03 | 1500 | 2.2535 | 0.9885 |
| 1.9147 | 1.37 | 2000 | 0.9977 | 0.7260 |
| 1.1667 | 1.71 | 2500 | 0.5577 | 0.4746 |
| 0.844 | 2.06 | 3000 | 0.4129 | 0.3581 |
| 0.6968 | 2.4 | 3500 | 0.3566 | 0.3090 |
| 0.6273 | 2.74 | 4000 | 0.3243 | 0.2813 |
| 0.5434 | 3.09 | 4500 | 0.3076 | 0.2631 |
| 0.5069 | 3.43 | 5000 | 0.2902 | 0.2539 |
| 0.4842 | 3.77 | 5500 | 0.2752 | 0.2432 |
| 0.4318 | 4.12 | 6000 | 0.2854 | 0.2384 |
| 0.3951 | 4.46 | 6500 | 0.2674 | 0.2350 |
| 0.3954 | 4.8 | 7000 | 0.2628 | 0.2322 |
| 0.3763 | 5.14 | 7500 | 0.2609 | 0.2284 |
| 0.3652 | 5.49 | 8000 | 0.2508 | 0.2249 |
| 0.3703 | 5.83 | 8500 | 0.2515 | 0.2235 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
silviacamplani/distilbert-base-uncased-finetuned-ner-wnut | silviacamplani | 2022-08-01T14:53:43Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-01T10:37:08Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-base-uncased-finetuned-ner-wnut
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-base-uncased-finetuned-ner-wnut
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1241
- Validation Loss: 0.3433
- Train Precision: 0.5677
- Train Recall: 0.3660
- Train F1: 0.4451
- Train Accuracy: 0.9215
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'Adam', 'config': {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3454 | 0.4475 | 0.0 | 0.0 | 0.0 | 0.8961 | 0 |
| 0.1637 | 0.3637 | 0.6297 | 0.2990 | 0.4055 | 0.9154 | 1 |
| 0.1241 | 0.3433 | 0.5677 | 0.3660 | 0.4451 | 0.9215 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dminiotas05/distilbert-base-uncased-finetuned-ft750_reg5 | dminiotas05 | 2022-08-01T14:18:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-01T13:57:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ft750_reg5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft750_reg5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6298
- Mse: 0.6298
- Mae: 0.6087
- R2: 0.4072
- Accuracy: 0.4973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 1.8617 | 1.0 | 188 | 0.7482 | 0.7482 | 0.6639 | 0.2957 | 0.4707 |
| 0.5667 | 2.0 | 376 | 0.6017 | 0.6017 | 0.5978 | 0.4336 | 0.5127 |
| 0.5038 | 3.0 | 564 | 0.6298 | 0.6298 | 0.6087 | 0.4072 | 0.4973 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-tiny-finetuned-finer-tf | muhtasham | 2022-08-01T13:41:59Z | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"dataset:nlpaueb/finer-139",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-29T22:13:44Z | ---
license: apache-2.0
datasets:
- nlpaueb/finer-139
tags:
- generated_from_keras_callback
model-index:
- name: muhtasham/bert-tiny-finetuned-finer-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# muhtasham/bert-tiny-finetuned-finer-tf
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0372
- Validation Loss: 0.0296
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 168822, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1188 | 0.0420 | 0 |
| 0.0438 | 0.0313 | 1 |
| 0.0372 | 0.0296 | 2 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rdruce/ddpm-butterflies-128 | rdruce | 2022-08-01T12:46:38Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-08-01T11:33:05Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
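# A minimal sampling sketch, assuming a recent `diffusers` release;
# the output filename below is illustrative.
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("rdruce/ddpm-butterflies-128")
image = pipe(batch_size=1).images[0]  # the pipeline returns PIL images
image.save("ddpm_butterflies_sample.png")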
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rdruce/ddpm-butterflies-128/tensorboard?#scalars)
|
sumba/covid-twitter-bert-v2-no_description-stance-loss-hyp | sumba | 2022-08-01T12:16:28Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-31T12:21:31Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: covid-twitter-bert-v2-no_description-stance-loss-hyp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-twitter-bert-v2-no_description-stance-loss-hyp
This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6202
- Accuracy: 0.0829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4275469935864394e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8211 | 1.0 | 632 | 0.6258 | 0.1153 |
| 0.5742 | 2.0 | 1264 | 0.6202 | 0.0829 |
| 0.4456 | 3.0 | 1896 | 0.6340 | 0.0627 |
| 0.2163 | 4.0 | 2528 | 0.7645 | 0.0470 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
BekirTaha/q-FrozenLake-v1-4x4-noSlippery | BekirTaha | 2022-08-01T12:12:18Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-01T12:01:10Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
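# Note: load_from_hub, gym and evaluate_agent are assumed to be defined/imported
# in the surrounding environment (e.g. the Deep RL course notebook); they are not
# provided by this snippet.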
model = load_from_hub(repo_id="Beyko7/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
turhancan97/testpyramidsrnd | turhancan97 | 2022-08-01T12:08:01Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2022-08-01T12:07:56Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: turhancan97/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dminiotas05/distilbert-base-uncased-finetuned-ft750_reg3 | dminiotas05 | 2022-08-01T11:51:26Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-01T11:22:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ft750_reg3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft750_reg3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6143
- Mse: 0.6143
- Mae: 0.6022
- R2: 0.4218
- Accuracy: 0.52
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.5241 | 1.0 | 188 | 0.6143 | 0.6143 | 0.6022 | 0.4218 | 0.52 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
lewiswu1209/Winnie | lewiswu1209 | 2022-08-01T10:52:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-24T03:09:47Z | ---
license: mit
---
# Winnie
Winnie is trained on top of [cambridgeltl/simctg_lccc_dialogue](https://huggingface.co/cambridgeltl/simctg_lccc_dialogue).
I modified vocab.txt to add the special tokens `[NAME][NICK][GENDER][YEAROFBIRTH][MONTHOFBIRTH][DAYOFBIRTH][ZODIAC][AGE]`, and then put together some training samples along these lines:
```
你是谁?
我是[NAME]。
你叫什么?
我叫[NAME]。
你多大啦?
我[AGE]岁了。
```
The first training run was named Vicky, but Vicky's brain got fried during training, so I had to look for a new approach.
Later I trained on the [500k chit-chat corpus](https://github.com/yangjianxin1/GPT2-chitchat#%E9%97%B2%E8%81%8A%E8%AF%AD%E6%96%99%E5%88%86%E4%BA%AB) together with the newly added samples, at a ratio of roughly 19:1, and the results feel decent. |
dminiotas05/camembert-base-finetuned-ft750_reg2 | dminiotas05 | 2022-08-01T10:10:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-28T11:03:55Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: camembert-base-finetuned-ft750_reg2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-ft750_reg2
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6449
- Mse: 0.6449
- Mae: 0.6171
- R2: 0.3929
- Accuracy: 0.504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.6283 | 1.0 | 750 | 0.6074 | 0.6074 | 0.6086 | 0.4282 | 0.4887 |
| 0.5007 | 2.0 | 1500 | 0.6449 | 0.6449 | 0.6171 | 0.3929 | 0.504 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
lakshaywadhwa1993/ner_hindi_bert | lakshaywadhwa1993 | 2022-08-01T09:14:58Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-08-01T09:05:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: ner_hindi_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_hindi_bert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3713
- Overall Precision: 0.8942
- Overall Recall: 0.8972
- Overall F1: 0.8957
- Overall Accuracy: 0.9367
- Loc F1: 0.8766
- Org F1: 0.8489
- Per F1: 0.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Loc F1 | Org F1 | Per F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:------:|:------:|
| 0.2993 | 3.19 | 1000 | 0.3230 | 0.8779 | 0.8786 | 0.8782 | 0.9244 | 0.8535 | 0.8270 | 0.9358 |
| 0.0641 | 6.39 | 2000 | 0.3713 | 0.8942 | 0.8972 | 0.8957 | 0.9367 | 0.8766 | 0.8489 | 0.9454 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
psroy/wav2vec2-base-timit-demo-colab | psroy | 2022-08-01T08:59:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-29T10:16:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4772
- Wer: 0.2821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6949 | 0.87 | 500 | 2.4599 | 0.9999 |
| 0.9858 | 1.73 | 1000 | 0.5249 | 0.4674 |
| 0.4645 | 2.6 | 1500 | 0.4604 | 0.3900 |
| 0.3273 | 3.46 | 2000 | 0.3939 | 0.3612 |
| 0.2474 | 4.33 | 2500 | 0.4150 | 0.3560 |
| 0.2191 | 5.19 | 3000 | 0.3855 | 0.3344 |
| 0.1662 | 6.06 | 3500 | 0.3779 | 0.3258 |
| 0.1669 | 6.92 | 4000 | 0.4841 | 0.3286 |
| 0.151 | 7.79 | 4500 | 0.4182 | 0.3219 |
| 0.1175 | 8.65 | 5000 | 0.4194 | 0.3107 |
| 0.1103 | 9.52 | 5500 | 0.4256 | 0.3129 |
| 0.1 | 10.38 | 6000 | 0.4352 | 0.3089 |
| 0.0949 | 11.25 | 6500 | 0.4649 | 0.3160 |
| 0.0899 | 12.11 | 7000 | 0.4472 | 0.3065 |
| 0.0787 | 12.98 | 7500 | 0.4763 | 0.3128 |
| 0.0742 | 13.84 | 8000 | 0.4321 | 0.3034 |
| 0.067 | 14.71 | 8500 | 0.4562 | 0.3076 |
| 0.063 | 15.57 | 9000 | 0.4541 | 0.3102 |
| 0.0624 | 16.44 | 9500 | 0.5113 | 0.3040 |
| 0.0519 | 17.3 | 10000 | 0.4925 | 0.3008 |
| 0.0525 | 18.17 | 10500 | 0.4710 | 0.2987 |
| 0.046 | 19.03 | 11000 | 0.4781 | 0.2977 |
| 0.0455 | 19.9 | 11500 | 0.4572 | 0.2969 |
| 0.0394 | 20.76 | 12000 | 0.5256 | 0.2966 |
| 0.0373 | 21.63 | 12500 | 0.4723 | 0.2921 |
| 0.0375 | 22.49 | 13000 | 0.4640 | 0.2847 |
| 0.0334 | 23.36 | 13500 | 0.4740 | 0.2917 |
| 0.0304 | 24.22 | 14000 | 0.4817 | 0.2874 |
| 0.0291 | 25.09 | 14500 | 0.4722 | 0.2896 |
| 0.0247 | 25.95 | 15000 | 0.4765 | 0.2870 |
| 0.0223 | 26.82 | 15500 | 0.4728 | 0.2821 |
| 0.0223 | 27.68 | 16000 | 0.4690 | 0.2834 |
| 0.0207 | 28.55 | 16500 | 0.4706 | 0.2825 |
| 0.0186 | 29.41 | 17000 | 0.4772 | 0.2821 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
lakshaywadhwa1993/ner_marathi_bert | lakshaywadhwa1993 | 2022-08-01T08:39:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-06-09T21:00:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: ner_marathi_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_marathi_bert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3606
- Overall Precision: 0.8939
- Overall Recall: 0.9030
- Overall F1: 0.8984
- Overall Accuracy: 0.9347
- Loc F1: 0.8823
- Org F1: 0.8555
- Per F1: 0.9435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Loc F1 | Org F1 | Per F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:------:|:------:|
| 0.2961 | 3.19 | 1000 | 0.3496 | 0.8720 | 0.8841 | 0.8780 | 0.9229 | 0.8599 | 0.8210 | 0.9343 |
| 0.0613 | 6.39 | 2000 | 0.3606 | 0.8939 | 0.9030 | 0.8984 | 0.9347 | 0.8823 | 0.8555 | 0.9435 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
meln1k/a2c-AntBulletEnv-v0 | meln1k | 2022-08-01T08:04:22Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-01T08:03:39Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 2061.72 +/- 70.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
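Until the author adds their own snippet, here is a minimal loading-and-evaluation sketch. The checkpoint `filename` is an assumption (check the repository files for the actual name), and `pybullet_envs` is assumed to be installed so that `AntBulletEnv-v0` is registered with gym.
```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0 with gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the zip filename below is an assumption)
checkpoint = load_from_hub(repo_id="meln1k/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# Evaluate the agent over 10 episodes
eval_env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```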
|
BekirTaha/ppo-LunarLander-v2 | BekirTaha | 2022-08-01T07:53:28Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
]
| reinforcement-learning | 2022-08-01T06:40:27Z | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
---
# "Beyko7/ppo-LunarLander-v2"
This is a pre-trained model of a PPO agent playing LunarLander-v2 using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library.
### Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="Beyko7/ppo-LunarLander-v2", filename="LunarLander-v2.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('LunarLander-v2')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
env = gym.make('LunarLander-v2')
obs = env.reset()
for i in range(1000):
    action, _state = model.predict(obs)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()
env.close()
```
### Evaluation Results
Mean_reward: 248.30 +/- 23.32882124373712
---
|
huggingtweets/kantegory | huggingtweets | 2022-08-01T07:26:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-08-01T07:26:04Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kantegory/1659338795219/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1122432883036172288/mYZ4acNy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">David Dobryakov</div>
<div style="text-align: center; font-size: 14px;">@kantegory</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from David Dobryakov.
| Data | David Dobryakov |
| --- | --- |
| Tweets downloaded | 3017 |
| Retweets | 90 |
| Short tweets | 256 |
| Tweets kept | 2671 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1g9yc7mp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kantegory's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2aeg6rk1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2aeg6rk1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kantegory')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
KDHyun08/TAACO_STS | KDHyun08 | 2022-08-01T05:00:14Z | 2,406 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"TAACO",
"ko",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-07-25T08:19:31Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
- TAACO
language: ko
---
# TAACO_Similarity
This model is based on [Sentence-transformers](https://www.SBERT.net) and was trained on the KLUE STS (Sentence Textual Similarity) dataset.
It was built to measure semantic cohesion between sentences, one of the indicators of K-TAACO (working title), a tool for measuring cohesion between Korean sentences that the author is developing.
Further training is planned with additional data, such as the inter-sentence similarity data from the Modu Corpus (모두의 말뭉치).
## Train Data
KLUE-sts-v1.1._train.json
NLI-sts-train.tsv
## Usage (Sentence-Transformers)
To use this model, [Sentence-transformers](https://www.SBERT.net) must be installed.
```
pip install -U sentence-transformers
```
Refer to the code below to use the model.
```python
from sentence_transformers import SentenceTransformer, models
sentences = ["This is an example sentence", "Each sentence is converted"]
embedding_model = models.Transformer(
model_name_or_path="KDHyun08/TAACO_STS",
max_seq_length=256,
do_lower_case=True
)
pooling_model = models.Pooling(
embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
pooling_mode_cls_token=False,
pooling_mode_max_tokens=False,
)
model = SentenceTransformer(modules=[embedding_model, pooling_model])
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (comparing similarity between actual sentences)
After installing [Sentence-transformers](https://www.SBERT.net), you can compare sentence similarity as shown below.
The `query` variable holds the source sentence used as the basis for comparison, and the sentences to compare against should be placed in `docs` as a list.
```python
import torch
from sentence_transformers import SentenceTransformer, models, util
embedding_model = models.Transformer(
model_name_or_path="KDHyun08/TAACO_STS",
max_seq_length=256,
do_lower_case=True
)
pooling_model = models.Pooling(
embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
pooling_mode_cls_token=False,
pooling_mode_max_tokens=False,
)
model = SentenceTransformer(modules=[embedding_model, pooling_model])
docs = ['어제는 아내의 생일이었다', '생일을 맞이하여 아침을 준비하겠다고 오전 8시 30분부터 음식을 준비하였다. 주된 메뉴는 스테이크와 낙지볶음, 미역국, 잡채, 소야 등이었다', '스테이크는 자주 하는 음식이어서 자신이 준비하려고 했다', '앞뒤도 1분씩 3번 뒤집고 래스팅을 잘 하면 육즙이 가득한 스테이크가 준비되다', '아내도 그런 스테이크를 좋아한다. 그런데 상상도 못한 일이 벌이지고 말았다', '보통 시즈닝이 되지 않은 원육을 사서 스테이크를 했는데, 이번에는 시즈닝이 된 부챗살을 구입해서 했다', '그런데 케이스 안에 방부제가 들어있는 것을 인지하지 못하고 방부제와 동시에 프라이팬에 올려놓을 것이다', '그것도 인지 못한 체... 앞면을 센 불에 1분을 굽고 뒤집는 순간 방부제가 함께 구어진 것을 알았다', '아내의 생일이라 맛있게 구워보고 싶었는데 어처구니없는 상황이 발생한 것이다', '방부제가 센 불에 녹아서 그런지 물처럼 흘러내렸다', ' 고민을 했다. 방부제가 묻은 부문만 제거하고 다시 구울까 했는데 방부제에 절대 먹지 말라는 문구가 있어서 아깝지만 버리는 방향을 했다', '너무나 안타까웠다', '아침 일찍 아내가 좋아하는 스테이크를 준비하고 그것을 맛있게 먹는 아내의 모습을 보고 싶었는데 전혀 생각지도 못한 상황이 발생해서... 하지만 정신을 추스르고 바로 다른 메뉴로 변경했다', '소야, 소시지 야채볶음..', '아내가 좋아하는지 모르겠지만 냉장고 안에 있는 후랑크소세지를 보니 바로 소야를 해야겠다는 생각이 들었다. 음식은 성공적으로 완성이 되었다', '40번째를 맞이하는 아내의 생일은 성공적으로 준비가 되었다', '맛있게 먹어 준 아내에게도 감사했다', '매년 아내의 생일에 맞이하면 아침마다 생일을 차려야겠다. 오늘도 즐거운 하루가 되었으면 좋겠다', '생일이니까~']
# Encode each sentence into a vector
document_embeddings = model.encode(docs)
query = '생일을 맞이하여 아침을 준비하겠다고 오전 8시 30분부터 음식을 준비하였다'
query_embedding = model.encode(query)
top_k = min(10, len(docs))
# Compute cosine similarity scores,
cos_scores = util.pytorch_cos_sim(query_embedding, document_embeddings)[0]
# then extract the sentences ranked by cosine similarity
top_results = torch.topk(cos_scores, k=top_k)
print(f"입력 문장: {query}")
print(f"\n<입력 문장과 유사한 {top_k} 개의 문장>\n")
for i, (score, idx) in enumerate(zip(top_results[0], top_results[1])):
print(f"{i+1}: {docs[idx]} {'(유사도: {:.4f})'.format(score)}\n")
```
## Evaluation Results
Running the Usage example above produces the results below. The closer the score is to 1, the more similar the sentence.
```
입력 문장: 생일을 맞이하여 아침을 준비하겠다고 오전 8시 30분부터 음식을 준비하였다
<입력 문장과 유사한 10 개의 문장>
1: 생일을 맞이하여 아침을 준비하겠다고 오전 8시 30분부터 음식을 준비하였다. 주된 메뉴는 스테이크와 낙지볶음, 미역국, 잡채, 소야 등이었다 (유사도: 0.6687)
2: 매년 아내의 생일에 맞이하면 아침마다 생일을 차려야겠다. 오늘도 즐거운 하루가 되었으면 좋겠다 (유사도: 0.6468)
3: 40번째를 맞이하는 아내의 생일은 성공적으로 준비가 되었다 (유사도: 0.4647)
4: 아내의 생일이라 맛있게 구워보고 싶었는데 어처구니없는 상황이 발생한 것이다 (유사도: 0.4469)
5: 생일이니까~ (유사도: 0.4218)
6: 어제는 아내의 생일이었다 (유사도: 0.4192)
7: 아침 일찍 아내가 좋아하는 스테이크를 준비하고 그것을 맛있게 먹는 아내의 모습을 보고 싶었는데 전혀 생각지도 못한 상황이 발생해서... 하지만 정신을 추스르고 바로 다른 메뉴로 변경했다 (유사도: 0.4156)
8: 맛있게 먹어 준 아내에게도 감사했다 (유사도: 0.3093)
9: 아내가 좋아하는지 모르겠지만 냉장고 안에 있는 후랑크소세지를 보니 바로 소야를 해야겠다는 생각이 들었다. 음식은 성공적으로 완성이 되었다 (유사도: 0.2259)
10: 아내도 그런 스테이크를 좋아한다. 그런데 상상도 못한 일이 벌이지고 말았다 (유사도: 0.1967)
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 142 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
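For reference, a hedged sketch of a training call matching the fit() parameters above is shown below. The base checkpoint (`klue/bert-base`), the toy training pairs, and the omission of the `evaluator` are assumptions made for illustration only; the actual training used the KLUE-STS / NLI-STS data listed under Train Data.
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Assumed base checkpoint; the card does not state which Korean BERT was the starting point.
model = SentenceTransformer("klue/bert-base")

# Tiny illustrative pairs; the real data is KLUE-sts-v1.1._train.json and NLI-sts-train.tsv.
train_examples = [
    InputExample(texts=["첫 번째 문장", "비슷한 의미의 문장"], label=0.8),
    InputExample(texts=["다른 문장", "관련 없는 문장"], label=0.2),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model=model)

# The EmbeddingSimilarityEvaluator used with evaluation_steps is omitted from this sketch.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    evaluation_steps=1000,
    warmup_steps=10000,
    weight_decay=0.01,
    max_grad_norm=1,
    optimizer_params={"lr": 2e-05},
    scheduler="WarmupLinear",
)
```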
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
wenkai-li/distilroberta-base-finetuned-marktextepoch_n200 | wenkai-li | 2022-08-01T04:07:13Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-31T18:33:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-marktextepoch_n200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-marktextepoch_n200
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.2313 | 1.0 | 1500 | 2.1592 |
| 2.1731 | 2.0 | 3000 | 2.1277 |
| 2.153 | 3.0 | 4500 | 2.1144 |
| 2.1469 | 4.0 | 6000 | 2.1141 |
| 2.1281 | 5.0 | 7500 | 2.1374 |
| 2.1043 | 6.0 | 9000 | 2.1069 |
| 2.0834 | 7.0 | 10500 | 2.0993 |
| 2.0602 | 8.0 | 12000 | 2.0817 |
| 2.024 | 9.0 | 13500 | 2.0918 |
| 2.0261 | 10.0 | 15000 | 2.0793 |
| 1.9889 | 11.0 | 16500 | 2.0567 |
| 1.9915 | 12.0 | 18000 | 2.0700 |
| 1.9532 | 13.0 | 19500 | 2.0436 |
| 1.9362 | 14.0 | 21000 | 2.0596 |
| 1.9024 | 15.0 | 22500 | 2.0189 |
| 1.9262 | 16.0 | 24000 | 2.0435 |
| 1.8883 | 17.0 | 25500 | 2.0430 |
| 1.8867 | 18.0 | 27000 | 2.0416 |
| 1.8807 | 19.0 | 28500 | 2.0051 |
| 1.8517 | 20.0 | 30000 | 2.0338 |
| 1.8357 | 21.0 | 31500 | 2.0166 |
| 1.8241 | 22.0 | 33000 | 2.0355 |
| 1.7985 | 23.0 | 34500 | 2.0073 |
| 1.8061 | 24.0 | 36000 | 2.0473 |
| 1.7996 | 25.0 | 37500 | 2.0446 |
| 1.7786 | 26.0 | 39000 | 2.0086 |
| 1.771 | 27.0 | 40500 | 2.0294 |
| 1.7549 | 28.0 | 42000 | 2.0127 |
| 1.7726 | 29.0 | 43500 | 2.0191 |
| 1.7275 | 30.0 | 45000 | 2.0182 |
| 1.708 | 31.0 | 46500 | 2.0130 |
| 1.7345 | 32.0 | 48000 | 2.0155 |
| 1.7044 | 33.0 | 49500 | 1.9898 |
| 1.7126 | 34.0 | 51000 | 2.0166 |
| 1.698 | 35.0 | 52500 | 1.9879 |
| 1.6637 | 36.0 | 54000 | 2.0311 |
| 1.6854 | 37.0 | 55500 | 2.0355 |
| 1.6585 | 38.0 | 57000 | 2.0094 |
| 1.6418 | 39.0 | 58500 | 2.0042 |
| 1.667 | 40.0 | 60000 | 2.0116 |
| 1.6507 | 41.0 | 61500 | 2.0095 |
| 1.622 | 42.0 | 63000 | 2.0158 |
| 1.6381 | 43.0 | 64500 | 2.0339 |
| 1.6099 | 44.0 | 66000 | 2.0082 |
| 1.6076 | 45.0 | 67500 | 2.0207 |
| 1.5805 | 46.0 | 69000 | 2.0172 |
| 1.5862 | 47.0 | 70500 | 2.0132 |
| 1.5806 | 48.0 | 72000 | 2.0198 |
| 1.574 | 49.0 | 73500 | 2.0181 |
| 1.5718 | 50.0 | 75000 | 2.0086 |
| 1.5591 | 51.0 | 76500 | 1.9832 |
| 1.5468 | 52.0 | 78000 | 2.0167 |
| 1.5637 | 53.0 | 79500 | 2.0118 |
| 1.5117 | 54.0 | 81000 | 2.0290 |
| 1.5363 | 55.0 | 82500 | 2.0011 |
| 1.4976 | 56.0 | 84000 | 2.0160 |
| 1.5129 | 57.0 | 85500 | 2.0224 |
| 1.4964 | 58.0 | 87000 | 2.0219 |
| 1.4906 | 59.0 | 88500 | 2.0212 |
| 1.4941 | 60.0 | 90000 | 2.0255 |
| 1.4876 | 61.0 | 91500 | 2.0116 |
| 1.4837 | 62.0 | 93000 | 2.0176 |
| 1.4661 | 63.0 | 94500 | 2.0388 |
| 1.4634 | 64.0 | 96000 | 2.0165 |
| 1.4449 | 65.0 | 97500 | 2.0185 |
| 1.468 | 66.0 | 99000 | 2.0246 |
| 1.4567 | 67.0 | 100500 | 2.0244 |
| 1.4367 | 68.0 | 102000 | 2.0093 |
| 1.4471 | 69.0 | 103500 | 2.0101 |
| 1.4255 | 70.0 | 105000 | 2.0248 |
| 1.4203 | 71.0 | 106500 | 2.0224 |
| 1.42 | 72.0 | 108000 | 2.0279 |
| 1.4239 | 73.0 | 109500 | 2.0295 |
| 1.4126 | 74.0 | 111000 | 2.0196 |
| 1.4038 | 75.0 | 112500 | 2.0225 |
| 1.3874 | 76.0 | 114000 | 2.0456 |
| 1.3758 | 77.0 | 115500 | 2.0423 |
| 1.3924 | 78.0 | 117000 | 2.0184 |
| 1.3744 | 79.0 | 118500 | 2.0555 |
| 1.3622 | 80.0 | 120000 | 2.0387 |
| 1.3653 | 81.0 | 121500 | 2.0344 |
| 1.3724 | 82.0 | 123000 | 2.0184 |
| 1.3684 | 83.0 | 124500 | 2.0285 |
| 1.3576 | 84.0 | 126000 | 2.0544 |
| 1.348 | 85.0 | 127500 | 2.0412 |
| 1.3387 | 86.0 | 129000 | 2.0459 |
| 1.3416 | 87.0 | 130500 | 2.0329 |
| 1.3421 | 88.0 | 132000 | 2.0274 |
| 1.3266 | 89.0 | 133500 | 2.0233 |
| 1.3183 | 90.0 | 135000 | 2.0319 |
| 1.322 | 91.0 | 136500 | 2.0080 |
| 1.32 | 92.0 | 138000 | 2.0472 |
| 1.304 | 93.0 | 139500 | 2.0538 |
| 1.3061 | 94.0 | 141000 | 2.0340 |
| 1.3199 | 95.0 | 142500 | 2.0456 |
| 1.2985 | 96.0 | 144000 | 2.0167 |
| 1.3021 | 97.0 | 145500 | 2.0204 |
| 1.2787 | 98.0 | 147000 | 2.0645 |
| 1.2879 | 99.0 | 148500 | 2.0345 |
| 1.2695 | 100.0 | 150000 | 2.0340 |
| 1.2884 | 101.0 | 151500 | 2.0602 |
| 1.2747 | 102.0 | 153000 | 2.0667 |
| 1.2607 | 103.0 | 154500 | 2.0551 |
| 1.2551 | 104.0 | 156000 | 2.0544 |
| 1.2557 | 105.0 | 157500 | 2.0553 |
| 1.2495 | 106.0 | 159000 | 2.0370 |
| 1.26 | 107.0 | 160500 | 2.0568 |
| 1.2499 | 108.0 | 162000 | 2.0427 |
| 1.2438 | 109.0 | 163500 | 2.0184 |
| 1.2496 | 110.0 | 165000 | 2.0227 |
| 1.2332 | 111.0 | 166500 | 2.0621 |
| 1.2231 | 112.0 | 168000 | 2.0661 |
| 1.211 | 113.0 | 169500 | 2.0673 |
| 1.217 | 114.0 | 171000 | 2.0544 |
| 1.2206 | 115.0 | 172500 | 2.0542 |
| 1.2083 | 116.0 | 174000 | 2.0592 |
| 1.2205 | 117.0 | 175500 | 2.0451 |
| 1.2065 | 118.0 | 177000 | 2.0402 |
| 1.1988 | 119.0 | 178500 | 2.0615 |
| 1.218 | 120.0 | 180000 | 2.0374 |
| 1.1917 | 121.0 | 181500 | 2.0349 |
| 1.1854 | 122.0 | 183000 | 2.0790 |
| 1.1819 | 123.0 | 184500 | 2.0766 |
| 1.2029 | 124.0 | 186000 | 2.0364 |
| 1.1851 | 125.0 | 187500 | 2.0568 |
| 1.1734 | 126.0 | 189000 | 2.0445 |
| 1.1701 | 127.0 | 190500 | 2.0770 |
| 1.1824 | 128.0 | 192000 | 2.0566 |
| 1.1604 | 129.0 | 193500 | 2.0542 |
| 1.1733 | 130.0 | 195000 | 2.0525 |
| 1.1743 | 131.0 | 196500 | 2.0577 |
| 1.1692 | 132.0 | 198000 | 2.0723 |
| 1.1519 | 133.0 | 199500 | 2.0567 |
| 1.1401 | 134.0 | 201000 | 2.0795 |
| 1.1692 | 135.0 | 202500 | 2.0625 |
| 1.157 | 136.0 | 204000 | 2.0793 |
| 1.1495 | 137.0 | 205500 | 2.0782 |
| 1.1479 | 138.0 | 207000 | 2.0392 |
| 1.1247 | 139.0 | 208500 | 2.0796 |
| 1.143 | 140.0 | 210000 | 2.0369 |
| 1.1324 | 141.0 | 211500 | 2.0699 |
| 1.1341 | 142.0 | 213000 | 2.0694 |
| 1.1317 | 143.0 | 214500 | 2.0569 |
| 1.1254 | 144.0 | 216000 | 2.0545 |
| 1.1156 | 145.0 | 217500 | 2.0708 |
| 1.1353 | 146.0 | 219000 | 2.0767 |
| 1.1312 | 147.0 | 220500 | 2.0523 |
| 1.1224 | 148.0 | 222000 | 2.0565 |
| 1.106 | 149.0 | 223500 | 2.0696 |
| 1.1069 | 150.0 | 225000 | 2.0478 |
| 1.1011 | 151.0 | 226500 | 2.0475 |
| 1.0985 | 152.0 | 228000 | 2.0888 |
| 1.1107 | 153.0 | 229500 | 2.0756 |
| 1.1058 | 154.0 | 231000 | 2.0812 |
| 1.1027 | 155.0 | 232500 | 2.0597 |
| 1.0996 | 156.0 | 234000 | 2.0684 |
| 1.0987 | 157.0 | 235500 | 2.0629 |
| 1.0881 | 158.0 | 237000 | 2.0701 |
| 1.1143 | 159.0 | 238500 | 2.0740 |
| 1.0823 | 160.0 | 240000 | 2.0869 |
| 1.0925 | 161.0 | 241500 | 2.0567 |
| 1.1034 | 162.0 | 243000 | 2.0833 |
| 1.0759 | 163.0 | 244500 | 2.0585 |
| 1.0998 | 164.0 | 246000 | 2.0293 |
| 1.0891 | 165.0 | 247500 | 2.0608 |
| 1.1036 | 166.0 | 249000 | 2.0831 |
| 1.076 | 167.0 | 250500 | 2.0979 |
| 1.0895 | 168.0 | 252000 | 2.0882 |
| 1.0825 | 169.0 | 253500 | 2.0742 |
| 1.0793 | 170.0 | 255000 | 2.0841 |
| 1.079 | 171.0 | 256500 | 2.0829 |
| 1.0653 | 172.0 | 258000 | 2.0888 |
| 1.0834 | 173.0 | 259500 | 2.0784 |
| 1.0721 | 174.0 | 261000 | 2.0859 |
| 1.0712 | 175.0 | 262500 | 2.0810 |
| 1.0494 | 176.0 | 264000 | 2.0605 |
| 1.0654 | 177.0 | 265500 | 2.0623 |
| 1.077 | 178.0 | 267000 | 2.0756 |
| 1.056 | 179.0 | 268500 | 2.0782 |
| 1.0523 | 180.0 | 270000 | 2.0966 |
| 1.0656 | 181.0 | 271500 | 2.0750 |
| 1.0636 | 182.0 | 273000 | 2.0769 |
| 1.0851 | 183.0 | 274500 | 2.0872 |
| 1.0562 | 184.0 | 276000 | 2.0893 |
| 1.0534 | 185.0 | 277500 | 2.0661 |
| 1.0514 | 186.0 | 279000 | 2.0712 |
| 1.062 | 187.0 | 280500 | 2.0769 |
| 1.0683 | 188.0 | 282000 | 2.0765 |
| 1.0606 | 189.0 | 283500 | 2.0735 |
| 1.0555 | 190.0 | 285000 | 2.0710 |
| 1.0568 | 191.0 | 286500 | 2.0860 |
| 1.0502 | 192.0 | 288000 | 2.0587 |
| 1.0437 | 193.0 | 289500 | 2.0998 |
| 1.0534 | 194.0 | 291000 | 2.0418 |
| 1.062 | 195.0 | 292500 | 2.0724 |
| 1.0457 | 196.0 | 294000 | 2.0612 |
| 1.0501 | 197.0 | 295500 | 2.1012 |
| 1.0728 | 198.0 | 297000 | 2.0721 |
| 1.0413 | 199.0 | 298500 | 2.0535 |
| 1.0461 | 200.0 | 300000 | 2.0531 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Izarel/distilbert-base-uncased_fine_tuned_body_text | Izarel | 2022-08-01T03:52:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-31T19:03:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: distilbert-base-uncased_fine_tuned_body_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fine_tuned_body_text
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2153
- Accuracy: {'accuracy': 0.8827265261428963}
- Recall: {'recall': 0.8641975308641975}
- Precision: {'precision': 0.8900034993584509}
- F1: {'f1': 0.8769106999195494}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------------------:|:------------------------------:|:---------------------------------:|:--------------------------:|
| 0.3056 | 1.0 | 2284 | 0.3040 | {'accuracy': 0.8874897344648235} | {'recall': 0.8466417487824216} | {'precision': 0.914261252446184} | {'f1': 0.8791531902381653} |
| 0.2279 | 2.0 | 4568 | 0.2891 | {'accuracy': 0.8908294552422666} | {'recall': 0.8606863744478424} | {'precision': 0.9086452230060983} | {'f1': 0.8840158213122382} |
| 0.1467 | 3.0 | 6852 | 0.3580 | {'accuracy': 0.8882562277580072} | {'recall': 0.8452825914599615} | {'precision': 0.9170557876628164} | {'f1': 0.8797076678257796} |
| 0.0921 | 4.0 | 9136 | 0.4560 | {'accuracy': 0.8754448398576512} | {'recall': 0.8948918337297542} | {'precision': 0.8543468858131488} | {'f1': 0.8741494717043756} |
| 0.0587 | 5.0 | 11420 | 0.5701 | {'accuracy': 0.8768135778811935} | {'recall': 0.8139087099331748} | {'precision': 0.9221095855254716} | {'f1': 0.8646372277704246} |
| 0.0448 | 6.0 | 13704 | 0.6738 | {'accuracy': 0.8767040788393101} | {'recall': 0.8794880507418734} | {'precision': 0.8673070479168994} | {'f1': 0.873355078168935} |
| 0.0289 | 7.0 | 15988 | 0.7965 | {'accuracy': 0.8798248015329866} | {'recall': 0.8491335372069317} | {'precision': 0.8967703349282297} | {'f1': 0.8723020536389552} |
| 0.0214 | 8.0 | 18272 | 0.8244 | {'accuracy': 0.8811387900355871} | {'recall': 0.8576282704723072} | {'precision': 0.8922931887815225} | {'f1': 0.8746173837712965} |
| 0.0147 | 9.0 | 20556 | 0.8740 | {'accuracy': 0.8806460443471119} | {'recall': 0.8669158455091177} | {'precision': 0.8839357893521191} | {'f1': 0.8753430924062213} |
| 0.0099 | 10.0 | 22840 | 0.9716 | {'accuracy': 0.8788940596769779} | {'recall': 0.8694076339336279} | {'precision': 0.8787635947338294} | {'f1': 0.8740605784559327} |
| 0.0092 | 11.0 | 25124 | 1.0296 | {'accuracy': 0.8822885299753627} | {'recall': 0.8669158455091177} | {'precision': 0.8870089233978444} | {'f1': 0.876847290640394} |
| 0.0039 | 12.0 | 27408 | 1.0974 | {'accuracy': 0.8787845606350945} | {'recall': 0.8628383735417374} | {'precision': 0.8836561883772184} | {'f1': 0.8731232091690544} |
| 0.0053 | 13.0 | 29692 | 1.0833 | {'accuracy': 0.8799890500958116} | {'recall': 0.8503794314191868} | {'precision': 0.8960496479293472} | {'f1': 0.8726173872617387} |
| 0.0032 | 14.0 | 31976 | 1.1731 | {'accuracy': 0.8813030385984123} | {'recall': 0.8705402650356778} | {'precision': 0.8823326828148318} | {'f1': 0.8763968072976055} |
| 0.0017 | 15.0 | 34260 | 1.2153 | {'accuracy': 0.8827265261428963} | {'recall': 0.8641975308641975} | {'precision': 0.8900034993584509} | {'f1': 0.8769106999195494} |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
reachrkr/Cartpole-v1 | reachrkr | 2022-08-01T02:16:58Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-01T02:16:50Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Cartpole-v1
results:
- metrics:
- type: mean_reward
value: 40.00 +/- 18.57
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
notmaineyy/bert-base-multilingual-cased-finetuned-ner | notmaineyy | 2022-08-01T01:37:57Z | 5 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-21T01:33:49Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: notmaineyy/bert-base-multilingual-cased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# notmaineyy/bert-base-multilingual-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0248
- Validation Loss: 0.0568
- Train Precision: 0.9424
- Train Recall: 0.9471
- Train F1: 0.9448
- Train Accuracy: 0.9863
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10530, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1335 | 0.0705 | 0.9152 | 0.9204 | 0.9178 | 0.9806 | 0 |
| 0.0497 | 0.0562 | 0.9335 | 0.9472 | 0.9403 | 0.9851 | 1 |
| 0.0248 | 0.0568 | 0.9424 | 0.9471 | 0.9448 | 0.9863 | 2 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
RedPandaAINLP/opus-mt-en-ro-finetuned-en-to-ro | RedPandaAINLP | 2022-08-01T00:11:22Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-07-31T22:39:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: ro-en
split: train
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.1505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1505
- Gen Len: 34.1036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1505 | 34.1036 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_4_ternary | elopezlopez | 2022-08-01T00:10:08Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-31T23:52:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_4_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_4_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2981
- F1: 0.7565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5588 | 0.6984 |
| 0.5547 | 2.0 | 578 | 0.5283 | 0.7336 |
| 0.5547 | 3.0 | 867 | 0.7038 | 0.7202 |
| 0.2479 | 4.0 | 1156 | 0.8949 | 0.7284 |
| 0.2479 | 5.0 | 1445 | 0.9959 | 0.7286 |
| 0.1181 | 6.0 | 1734 | 1.0663 | 0.7311 |
| 0.0508 | 7.0 | 2023 | 1.2377 | 0.7054 |
| 0.0508 | 8.0 | 2312 | 1.2981 | 0.7565 |
| 0.0185 | 9.0 | 2601 | 1.3532 | 0.7407 |
| 0.0185 | 10.0 | 2890 | 1.5365 | 0.7333 |
| 0.0103 | 11.0 | 3179 | 1.5184 | 0.7423 |
| 0.0103 | 12.0 | 3468 | 1.6009 | 0.7420 |
| 0.0123 | 13.0 | 3757 | 1.6395 | 0.7402 |
| 0.008 | 14.0 | 4046 | 1.6838 | 0.7429 |
| 0.008 | 15.0 | 4335 | 1.6176 | 0.7490 |
| 0.0012 | 16.0 | 4624 | 1.7873 | 0.7345 |
| 0.0012 | 17.0 | 4913 | 1.6761 | 0.7412 |
| 0.0044 | 18.0 | 5202 | 1.7356 | 0.7417 |
| 0.0044 | 19.0 | 5491 | 1.7686 | 0.7502 |
| 0.0045 | 20.0 | 5780 | 1.7668 | 0.7406 |
| 0.0017 | 21.0 | 6069 | 1.8411 | 0.7381 |
| 0.0017 | 22.0 | 6358 | 1.8147 | 0.7469 |
| 0.0012 | 23.0 | 6647 | 1.8028 | 0.7489 |
| 0.0012 | 24.0 | 6936 | 1.8147 | 0.7453 |
| 0.0026 | 25.0 | 7225 | 1.8257 | 0.7475 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
keithanpai/tiny-random-vit-finetuned-eurosat | keithanpai | 2022-08-01T00:08:25Z | 73 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-08-01T00:06:15Z | ---
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: tiny-random-vit-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6646706586826348
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-random-vit-finetuned-eurosat
This model is a fine-tuned version of [hf-internal-testing/tiny-random-vit](https://huggingface.co/hf-internal-testing/tiny-random-vit) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0488
- Accuracy: 0.6647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1192 | 0.99 | 70 | 1.0867 | 0.6627 |
| 1.067 | 1.99 | 140 | 1.0563 | 0.6657 |
| 0.9719 | 2.99 | 210 | 1.0488 | 0.6647 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/xlnet-base-cased_fold_3_binary | elopezlopez | 2022-07-31T23:37:52Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-31T23:14:01Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_3_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_3_binary
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3616
- F1: 0.7758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.4668 | 0.7666 |
| 0.4142 | 2.0 | 578 | 0.4259 | 0.7631 |
| 0.4142 | 3.0 | 867 | 0.6744 | 0.7492 |
| 0.235 | 4.0 | 1156 | 0.8879 | 0.7678 |
| 0.235 | 5.0 | 1445 | 1.0036 | 0.7639 |
| 0.1297 | 6.0 | 1734 | 1.1427 | 0.7616 |
| 0.0894 | 7.0 | 2023 | 1.2126 | 0.7626 |
| 0.0894 | 8.0 | 2312 | 1.5098 | 0.7433 |
| 0.0473 | 9.0 | 2601 | 1.3616 | 0.7758 |
| 0.0473 | 10.0 | 2890 | 1.5966 | 0.7579 |
| 0.0325 | 11.0 | 3179 | 1.6669 | 0.7508 |
| 0.0325 | 12.0 | 3468 | 1.7401 | 0.7437 |
| 0.0227 | 13.0 | 3757 | 1.7797 | 0.7515 |
| 0.0224 | 14.0 | 4046 | 1.7349 | 0.7418 |
| 0.0224 | 15.0 | 4335 | 1.7527 | 0.7595 |
| 0.0152 | 16.0 | 4624 | 1.7492 | 0.7634 |
| 0.0152 | 17.0 | 4913 | 1.8178 | 0.7628 |
| 0.0117 | 18.0 | 5202 | 1.7736 | 0.7688 |
| 0.0117 | 19.0 | 5491 | 1.8449 | 0.7704 |
| 0.0055 | 20.0 | 5780 | 1.8687 | 0.7652 |
| 0.0065 | 21.0 | 6069 | 1.8083 | 0.7669 |
| 0.0065 | 22.0 | 6358 | 1.8568 | 0.7559 |
| 0.0054 | 23.0 | 6647 | 1.8760 | 0.7678 |
| 0.0054 | 24.0 | 6936 | 1.8948 | 0.7697 |
| 0.0048 | 25.0 | 7225 | 1.9109 | 0.7680 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_2_ternary | elopezlopez | 2022-07-31T23:35:04Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-31T23:17:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_2_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_2_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5810
- F1: 0.7620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 294 | 0.5886 | 0.7239 |
| 0.557 | 2.0 | 588 | 0.5085 | 0.7524 |
| 0.557 | 3.0 | 882 | 0.6332 | 0.7530 |
| 0.2456 | 4.0 | 1176 | 0.8749 | 0.7161 |
| 0.2456 | 5.0 | 1470 | 1.0601 | 0.7371 |
| 0.1112 | 6.0 | 1764 | 1.1885 | 0.7451 |
| 0.0484 | 7.0 | 2058 | 1.3027 | 0.7240 |
| 0.0484 | 8.0 | 2352 | 1.4647 | 0.7259 |
| 0.0259 | 9.0 | 2646 | 1.4476 | 0.7322 |
| 0.0259 | 10.0 | 2940 | 1.4826 | 0.7388 |
| 0.0164 | 11.0 | 3234 | 1.5869 | 0.7333 |
| 0.0109 | 12.0 | 3528 | 1.5954 | 0.7539 |
| 0.0109 | 13.0 | 3822 | 1.5810 | 0.7620 |
| 0.0082 | 14.0 | 4116 | 1.7165 | 0.7335 |
| 0.0082 | 15.0 | 4410 | 1.8152 | 0.7414 |
| 0.004 | 16.0 | 4704 | 1.7411 | 0.7474 |
| 0.004 | 17.0 | 4998 | 1.8692 | 0.7355 |
| 0.0034 | 18.0 | 5292 | 1.8727 | 0.7303 |
| 0.0009 | 19.0 | 5586 | 1.9813 | 0.7305 |
| 0.0009 | 20.0 | 5880 | 1.9764 | 0.7391 |
| 0.0012 | 21.0 | 6174 | 2.0170 | 0.7291 |
| 0.0012 | 22.0 | 6468 | 2.0240 | 0.7391 |
| 0.0004 | 23.0 | 6762 | 2.0311 | 0.7352 |
| 0.0014 | 24.0 | 7056 | 2.0174 | 0.7334 |
| 0.0014 | 25.0 | 7350 | 2.0282 | 0.7381 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
keithanpai/vit-base-patch32-384-finetuned-eurosat | keithanpai | 2022-07-31T22:51:54Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-07-31T19:46:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch32-384-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8423153692614771
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch32-384-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4381
- Accuracy: 0.8423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.607 | 0.99 | 70 | 0.5609 | 0.8014 |
| 0.5047 | 1.99 | 140 | 0.4634 | 0.8373 |
| 0.4089 | 2.99 | 210 | 0.4381 | 0.8423 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
wenkai-li/distilroberta-base-finetuned-wikitextepoch_150 | wenkai-li | 2022-07-31T22:09:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-31T18:31:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitextepoch_150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitextepoch_150
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.2428 | 1.0 | 1121 | 2.0500 |
| 2.1209 | 2.0 | 2242 | 1.9996 |
| 2.0665 | 3.0 | 3363 | 1.9501 |
| 2.0179 | 4.0 | 4484 | 1.9311 |
| 1.9759 | 5.0 | 5605 | 1.9255 |
| 1.9089 | 6.0 | 6726 | 1.8805 |
| 1.9143 | 7.0 | 7847 | 1.8715 |
| 1.8744 | 8.0 | 8968 | 1.8671 |
| 1.858 | 9.0 | 10089 | 1.8592 |
| 1.8141 | 10.0 | 11210 | 1.8578 |
| 1.7917 | 11.0 | 12331 | 1.8574 |
| 1.7752 | 12.0 | 13452 | 1.8423 |
| 1.7722 | 13.0 | 14573 | 1.8287 |
| 1.7354 | 14.0 | 15694 | 1.8396 |
| 1.7217 | 15.0 | 16815 | 1.8244 |
| 1.6968 | 16.0 | 17936 | 1.8278 |
| 1.659 | 17.0 | 19057 | 1.8412 |
| 1.6442 | 18.0 | 20178 | 1.8328 |
| 1.6441 | 19.0 | 21299 | 1.8460 |
| 1.6267 | 20.0 | 22420 | 1.8343 |
| 1.612 | 21.0 | 23541 | 1.8249 |
| 1.5963 | 22.0 | 24662 | 1.8253 |
| 1.6101 | 23.0 | 25783 | 1.7843 |
| 1.5747 | 24.0 | 26904 | 1.8047 |
| 1.5559 | 25.0 | 28025 | 1.8618 |
| 1.5484 | 26.0 | 29146 | 1.8660 |
| 1.5411 | 27.0 | 30267 | 1.8318 |
| 1.5247 | 28.0 | 31388 | 1.8216 |
| 1.5278 | 29.0 | 32509 | 1.8075 |
| 1.4954 | 30.0 | 33630 | 1.8073 |
| 1.4863 | 31.0 | 34751 | 1.7958 |
| 1.4821 | 32.0 | 35872 | 1.8080 |
| 1.4357 | 33.0 | 36993 | 1.8373 |
| 1.4602 | 34.0 | 38114 | 1.8199 |
| 1.447 | 35.0 | 39235 | 1.8325 |
| 1.4292 | 36.0 | 40356 | 1.8075 |
| 1.4174 | 37.0 | 41477 | 1.8168 |
| 1.4103 | 38.0 | 42598 | 1.8095 |
| 1.4168 | 39.0 | 43719 | 1.8233 |
| 1.4005 | 40.0 | 44840 | 1.8388 |
| 1.3799 | 41.0 | 45961 | 1.8235 |
| 1.3657 | 42.0 | 47082 | 1.8298 |
| 1.3559 | 43.0 | 48203 | 1.8165 |
| 1.3723 | 44.0 | 49324 | 1.8059 |
| 1.3535 | 45.0 | 50445 | 1.8451 |
| 1.3533 | 46.0 | 51566 | 1.8458 |
| 1.3469 | 47.0 | 52687 | 1.8237 |
| 1.3247 | 48.0 | 53808 | 1.8264 |
| 1.3142 | 49.0 | 54929 | 1.8209 |
| 1.2958 | 50.0 | 56050 | 1.8244 |
| 1.293 | 51.0 | 57171 | 1.8311 |
| 1.2784 | 52.0 | 58292 | 1.8287 |
| 1.2731 | 53.0 | 59413 | 1.8600 |
| 1.2961 | 54.0 | 60534 | 1.8086 |
| 1.2739 | 55.0 | 61655 | 1.8303 |
| 1.2716 | 56.0 | 62776 | 1.8214 |
| 1.2459 | 57.0 | 63897 | 1.8440 |
| 1.2492 | 58.0 | 65018 | 1.8503 |
| 1.2393 | 59.0 | 66139 | 1.8316 |
| 1.2077 | 60.0 | 67260 | 1.8283 |
| 1.2426 | 61.0 | 68381 | 1.8413 |
| 1.2032 | 62.0 | 69502 | 1.8461 |
| 1.2123 | 63.0 | 70623 | 1.8469 |
| 1.2069 | 64.0 | 71744 | 1.8478 |
| 1.198 | 65.0 | 72865 | 1.8479 |
| 1.1972 | 66.0 | 73986 | 1.8516 |
| 1.1885 | 67.0 | 75107 | 1.8341 |
| 1.1784 | 68.0 | 76228 | 1.8322 |
| 1.1866 | 69.0 | 77349 | 1.8559 |
| 1.1648 | 70.0 | 78470 | 1.8758 |
| 1.1595 | 71.0 | 79591 | 1.8684 |
| 1.1661 | 72.0 | 80712 | 1.8553 |
| 1.1478 | 73.0 | 81833 | 1.8658 |
| 1.1488 | 74.0 | 82954 | 1.8452 |
| 1.1538 | 75.0 | 84075 | 1.8505 |
| 1.1267 | 76.0 | 85196 | 1.8430 |
| 1.1339 | 77.0 | 86317 | 1.8333 |
| 1.118 | 78.0 | 87438 | 1.8419 |
| 1.12 | 79.0 | 88559 | 1.8669 |
| 1.1144 | 80.0 | 89680 | 1.8647 |
| 1.104 | 81.0 | 90801 | 1.8643 |
| 1.0864 | 82.0 | 91922 | 1.8528 |
| 1.0863 | 83.0 | 93043 | 1.8456 |
| 1.0912 | 84.0 | 94164 | 1.8509 |
| 1.0873 | 85.0 | 95285 | 1.8690 |
| 1.0862 | 86.0 | 96406 | 1.8577 |
| 1.0879 | 87.0 | 97527 | 1.8612 |
| 1.0783 | 88.0 | 98648 | 1.8410 |
| 1.0618 | 89.0 | 99769 | 1.8517 |
| 1.0552 | 90.0 | 100890 | 1.8459 |
| 1.0516 | 91.0 | 102011 | 1.8723 |
| 1.0424 | 92.0 | 103132 | 1.8832 |
| 1.0478 | 93.0 | 104253 | 1.8922 |
| 1.0523 | 94.0 | 105374 | 1.8753 |
| 1.027 | 95.0 | 106495 | 1.8625 |
| 1.0364 | 96.0 | 107616 | 1.8673 |
| 1.0203 | 97.0 | 108737 | 1.8806 |
| 1.0309 | 98.0 | 109858 | 1.8644 |
| 1.0174 | 99.0 | 110979 | 1.8659 |
| 1.0184 | 100.0 | 112100 | 1.8590 |
| 1.0234 | 101.0 | 113221 | 1.8614 |
| 1.013 | 102.0 | 114342 | 1.8866 |
| 1.0092 | 103.0 | 115463 | 1.8770 |
| 1.0051 | 104.0 | 116584 | 1.8445 |
| 1.0105 | 105.0 | 117705 | 1.8512 |
| 1.0233 | 106.0 | 118826 | 1.8896 |
| 0.9967 | 107.0 | 119947 | 1.8687 |
| 0.9795 | 108.0 | 121068 | 1.8618 |
| 0.9846 | 109.0 | 122189 | 1.8877 |
| 0.9958 | 110.0 | 123310 | 1.8522 |
| 0.9689 | 111.0 | 124431 | 1.8765 |
| 0.9879 | 112.0 | 125552 | 1.8692 |
| 0.99 | 113.0 | 126673 | 1.8689 |
| 0.9798 | 114.0 | 127794 | 1.8898 |
| 0.9676 | 115.0 | 128915 | 1.8782 |
| 0.9759 | 116.0 | 130036 | 1.8840 |
| 0.9576 | 117.0 | 131157 | 1.8662 |
| 0.9637 | 118.0 | 132278 | 1.8984 |
| 0.9645 | 119.0 | 133399 | 1.8872 |
| 0.9793 | 120.0 | 134520 | 1.8705 |
| 0.9643 | 121.0 | 135641 | 1.9036 |
| 0.961 | 122.0 | 136762 | 1.8683 |
| 0.9496 | 123.0 | 137883 | 1.8785 |
| 0.946 | 124.0 | 139004 | 1.8912 |
| 0.9681 | 125.0 | 140125 | 1.8837 |
| 0.9403 | 126.0 | 141246 | 1.8824 |
| 0.9452 | 127.0 | 142367 | 1.8824 |
| 0.9437 | 128.0 | 143488 | 1.8665 |
| 0.945 | 129.0 | 144609 | 1.8655 |
| 0.9453 | 130.0 | 145730 | 1.8695 |
| 0.9238 | 131.0 | 146851 | 1.8697 |
| 0.9176 | 132.0 | 147972 | 1.8618 |
| 0.9405 | 133.0 | 149093 | 1.8679 |
| 0.9184 | 134.0 | 150214 | 1.9025 |
| 0.9298 | 135.0 | 151335 | 1.9045 |
| 0.9215 | 136.0 | 152456 | 1.9014 |
| 0.9249 | 137.0 | 153577 | 1.8505 |
| 0.9246 | 138.0 | 154698 | 1.8542 |
| 0.9205 | 139.0 | 155819 | 1.8731 |
| 0.9368 | 140.0 | 156940 | 1.8673 |
| 0.9251 | 141.0 | 158061 | 1.8835 |
| 0.9224 | 142.0 | 159182 | 1.8727 |
| 0.9326 | 143.0 | 160303 | 1.8380 |
| 0.916 | 144.0 | 161424 | 1.8857 |
| 0.9361 | 145.0 | 162545 | 1.8547 |
| 0.9121 | 146.0 | 163666 | 1.8587 |
| 0.9156 | 147.0 | 164787 | 1.8863 |
| 0.9131 | 148.0 | 165908 | 1.8809 |
| 0.9185 | 149.0 | 167029 | 1.8734 |
| 0.9183 | 150.0 | 168150 | 1.8929 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.5.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
elopezlopez/distilbert-base-uncased_fold_4_binary | elopezlopez | 2022-07-31T22:04:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-31T21:54:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_4_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_4_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2977
- F1: 0.8083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.3701 | 0.7903 |
| 0.4005 | 2.0 | 578 | 0.3669 | 0.7994 |
| 0.4005 | 3.0 | 867 | 0.5038 | 0.7955 |
| 0.1945 | 4.0 | 1156 | 0.6353 | 0.8006 |
| 0.1945 | 5.0 | 1445 | 0.8974 | 0.7826 |
| 0.0909 | 6.0 | 1734 | 0.8533 | 0.7764 |
| 0.0389 | 7.0 | 2023 | 0.9969 | 0.7957 |
| 0.0389 | 8.0 | 2312 | 1.0356 | 0.7952 |
| 0.0231 | 9.0 | 2601 | 1.1538 | 0.7963 |
| 0.0231 | 10.0 | 2890 | 1.2011 | 0.7968 |
| 0.0051 | 11.0 | 3179 | 1.2329 | 0.7935 |
| 0.0051 | 12.0 | 3468 | 1.2829 | 0.8056 |
| 0.0066 | 13.0 | 3757 | 1.2946 | 0.7956 |
| 0.004 | 14.0 | 4046 | 1.2977 | 0.8083 |
| 0.004 | 15.0 | 4335 | 1.3970 | 0.7957 |
| 0.0007 | 16.0 | 4624 | 1.3361 | 0.7917 |
| 0.0007 | 17.0 | 4913 | 1.5782 | 0.7954 |
| 0.0107 | 18.0 | 5202 | 1.4641 | 0.7900 |
| 0.0107 | 19.0 | 5491 | 1.4490 | 0.7957 |
| 0.0058 | 20.0 | 5780 | 1.4607 | 0.7932 |
| 0.0016 | 21.0 | 6069 | 1.5048 | 0.7939 |
| 0.0016 | 22.0 | 6358 | 1.5219 | 0.7945 |
| 0.0027 | 23.0 | 6647 | 1.4783 | 0.7937 |
| 0.0027 | 24.0 | 6936 | 1.4715 | 0.7981 |
| 0.0004 | 25.0 | 7225 | 1.4989 | 0.7900 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
oMateos2020/pegasus-newsroom-cnn_full-adam8bit | oMateos2020 | 2022-07-31T21:02:17Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-07-29T07:55:23Z | ---
tags:
- generated_from_trainer
model-index:
- name: pegasus-newsroom-cnn_full-adam8bit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-newsroom-cnn_full-adam8bit
This model is a fine-tuned version of [google/pegasus-newsroom](https://huggingface.co/google/pegasus-newsroom) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.9826
- eval_rouge1: 38.2456
- eval_rouge2: 17.3966
- eval_rougeL: 26.9273
- eval_rougeLsum: 35.3265
- eval_gen_len: 69.658
- eval_runtime: 13626.7467
- eval_samples_per_second: 0.183
- eval_steps_per_second: 0.012
- epoch: 0.22
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0016
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
DS-20202/DoubleHardDebias | DS-20202 | 2022-07-31T20:32:45Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-07-31T12:08:09Z | ---
title: Double Hard Debiasing
emoji: 👁
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 3.1.1
app_file: app.py
pinned: false
license: mit
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
neuralmagic/oBERT-teacher-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 396 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T13:47:26Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# SQuADv1 teacher
This model is used as a teacher for all runs on the SQuADv1 downstream task in the paper [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
SQuADv1 dev-set:
```
EM = 81.41
F1 = 88.54
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
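A minimal inference sketch (not from the original authors), assuming the checkpoint works with the standard `transformers` question-answering pipeline; the question and context are invented for illustration:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="neuralmagic/oBERT-teacher-squadv1")

result = qa(
    question="What is oBERT?",
    context=(
        "The Optimal BERT Surgeon (oBERT) is a scalable second-order pruning "
        "method for large language models."
    ),
)
print(result["answer"], round(result["score"], 3))
```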
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-teacher-qqp | neuralmagic | 2022-07-31T19:52:34Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T13:55:22Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# QQP teacher
This model is used as a teacher for all runs on the QQP downstream task in the paper [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
QQP dev-set:
```
accuracy = 91.06
F1 = 88.00
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
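A minimal sentence-pair inference sketch (not from the original authors); the example questions are invented, and the mapping of output indices to duplicate/not-duplicate labels follows the model's config rather than anything stated in this card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "neuralmagic/oBERT-teacher-qqp"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt")  # QQP is a question-pair task
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; see model.config.id2label for the label order
```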
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T14:00:31Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-block4-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 90% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 77.65
F1 = 85.34
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
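A rough, hypothetical way to sanity-check the advertised sparsity is to count exactly-zero entries in the encoder's linear weight matrices; the measured fraction may deviate slightly from the nominal 90%, since pruning targets only specific weight groups:
```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained(
    "neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1"
)

total, zeros = 0, 0
for name, param in model.named_parameters():
    # Restrict to 2-D weight matrices inside the transformer encoder.
    if "encoder" in name and name.endswith("weight") and param.dim() == 2:
        total += param.numel()
        zeros += (param == 0).sum().item()
print(f"Zero fraction in encoder linear weights: {zeros / total:.2%}")
```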
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T14:00:18Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-block4-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 80% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 79.55
F1 = 87.00
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-6-downstream-pruned-block4-80-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T19:20:49Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-block4-80-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 80% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 78.28
F1 = 86.10
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T14:01:27Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 80% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 74.07
F1 = 82.79
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-6-downstream-dense-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T19:20:36Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-dense-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - 0% Sparsity - QAT`, and it represents an upper bound for performance of the corresponding pruned and quantized models:
- 80% unstructured QAT: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-QAT-squadv1`
- 80% block-4 QAT: `neuralmagic/oBERT-6-downstream-pruned-block4-80-QAT-squadv1`
- 90% unstructured QAT: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-QAT-squadv1`
- 90% block-4 QAT: `neuralmagic/oBERT-6-downstream-pruned-block4-90-QAT-squadv1`
SQuADv1 dev-set:
```
EM = 80.85
F1 = 87.94
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-3-downstream-pruned-block4-90-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T19:21:41Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-90-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 90% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 70.00
F1 = 79.66
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T14:01:41Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 90% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 71.36
F1 = 80.69
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-3-downstream-pruned-block4-80-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T19:21:28Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-80-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 80% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 72.70
F1 = 82.04
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |
neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
]
| null | 2022-05-25T14:01:00Z | ---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-unstructured-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 80% - unstructured`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 75.62
F1 = 84.08
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
``` |