| modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
masakhane/byt5_en_zul_news | masakhane | 2022-09-24T15:05:20Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:02:52Z | ---
language:
- en
- zul
license: afl-3.0
---
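The card above records only the language pair and license. As a minimal usage sketch (the repo id comes from this row; the example sentence and generation settings are illustrative assumptions), the checkpoint can be loaded with the standard seq2seq classes:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# English -> isiZulu news translation; only the repo id is taken from this row.
tokenizer = AutoTokenizer.from_pretrained("masakhane/byt5_en_zul_news")
model = AutoModelForSeq2SeqLM.from_pretrained("masakhane/byt5_en_zul_news")

inputs = tokenizer("The president will address the nation tonight.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```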
|
masakhane/byt5_zul_en_news | masakhane | 2022-09-24T15:05:19Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:03:09Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/mbart50_zul_en_news | masakhane | 2022-09-24T15:05:19Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:04:09Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/mt5_zul_en_news | masakhane | 2022-09-24T15:05:18Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:06:24Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/mbart50_en_zul_news | masakhane | 2022-09-24T15:05:18Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:04:24Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_news | masakhane | 2022-09-24T15:05:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:09:23Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_news | masakhane | 2022-09-24T15:05:16Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:07:50Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_rel_news_ft | masakhane | 2022-09-24T15:05:15Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:13:41Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel_ft | masakhane | 2022-09-24T15:05:13Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:15:36Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_zul_en_rel | masakhane | 2022-09-24T15:05:12Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"zul",
"en",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:18:45Z | ---
language:
- zul
- en
license: afl-3.0
---
|
masakhane/m2m100_418M_en_zul_rel | masakhane | 2022-09-24T15:05:12Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"zul",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T09:18:27Z | ---
language:
- en
- zul
license: afl-3.0
---
|
masakhane/m2m100_418M_en_kin_rel | masakhane | 2022-09-24T15:05:09Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"kin",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:07:12Z | ---
language:
- en
- kin
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_kin_en_rel | masakhane | 2022-09-24T15:05:09Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"kin",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:06:42Z | ---
language:
- kin
- en
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_en_nya_rel | masakhane | 2022-09-24T15:05:08Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"nya",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:07:46Z | ---
language:
- en
- nya
license: cc-by-nc-4.0
---
|
masakhane/m2m100_418M_en_sna_rel | masakhane | 2022-09-24T15:05:07Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"sna",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-08-25T22:08:45Z | ---
language:
- en
- sna
license: cc-by-nc-4.0
---
|
gokuls/BERT-tiny-Massive-intent | gokuls | 2022-09-24T14:26:13Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T14:15:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: BERT-tiny-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8475159862272503
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-tiny-Massive-intent
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6740
- Accuracy: 0.8475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
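For reference, these settings map roughly onto the standard 🤗 `TrainingArguments` as sketched below (the output directory and any argument not listed above are assumptions):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="BERT-tiny-Massive-intent",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=33,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed precision
)
```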
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.6104 | 1.0 | 720 | 3.0911 | 0.3601 |
| 2.8025 | 2.0 | 1440 | 2.3800 | 0.5165 |
| 2.2292 | 3.0 | 2160 | 1.9134 | 0.5991 |
| 1.818 | 4.0 | 2880 | 1.5810 | 0.6744 |
| 1.5171 | 5.0 | 3600 | 1.3522 | 0.7108 |
| 1.2876 | 6.0 | 4320 | 1.1686 | 0.7442 |
| 1.1049 | 7.0 | 5040 | 1.0355 | 0.7683 |
| 0.9623 | 8.0 | 5760 | 0.9466 | 0.7885 |
| 0.8424 | 9.0 | 6480 | 0.8718 | 0.7875 |
| 0.7473 | 10.0 | 7200 | 0.8107 | 0.8028 |
| 0.6735 | 11.0 | 7920 | 0.7710 | 0.8180 |
| 0.6085 | 12.0 | 8640 | 0.7404 | 0.8210 |
| 0.5536 | 13.0 | 9360 | 0.7180 | 0.8229 |
| 0.5026 | 14.0 | 10080 | 0.6980 | 0.8318 |
| 0.4652 | 15.0 | 10800 | 0.6970 | 0.8337 |
| 0.4234 | 16.0 | 11520 | 0.6822 | 0.8372 |
| 0.3987 | 17.0 | 12240 | 0.6691 | 0.8436 |
| 0.3707 | 18.0 | 12960 | 0.6679 | 0.8455 |
| 0.3433 | 19.0 | 13680 | 0.6740 | 0.8475 |
| 0.3206 | 20.0 | 14400 | 0.6760 | 0.8451 |
| 0.308 | 21.0 | 15120 | 0.6704 | 0.8436 |
| 0.2813 | 22.0 | 15840 | 0.6701 | 0.8416 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
gokuls/distilroberta-emotion-intent | gokuls | 2022-09-24T13:36:17Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T13:26:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: distilroberta-emotion-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-emotion-intent
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1496
- Accuracy: 0.9435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4501 | 1.0 | 1000 | 0.2432 | 0.924 |
| 0.1947 | 2.0 | 2000 | 0.1646 | 0.934 |
| 0.1497 | 3.0 | 3000 | 0.1382 | 0.9405 |
| 0.1316 | 4.0 | 4000 | 0.1496 | 0.9435 |
| 0.1145 | 5.0 | 5000 | 0.1684 | 0.9385 |
| 0.1 | 6.0 | 6000 | 0.2342 | 0.943 |
| 0.0828 | 7.0 | 7000 | 0.2807 | 0.939 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
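As an inference sketch (the repo id comes from this row; the example sentence is an illustrative assumption), the classifier can be called through the text-classification pipeline:
```python
from transformers import pipeline

# Emotion classifier fine-tuned from distilroberta-base (see the card above).
classifier = pipeline("text-classification", model="gokuls/distilroberta-emotion-intent")
print(classifier("I can't believe how well this turned out!"))
```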
|
RebekkaB/rlt_2409_1450 | RebekkaB | 2022-09-24T13:22:34Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T12:52:36Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: rlt_2409_1450
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlt_2409_1450
This model is a fine-tuned version of [svalabs/gbert-large-zeroshot-nli](https://huggingface.co/svalabs/gbert-large-zeroshot-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0518
- F1: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 36 | 0.5165 | 0.8542 |
| No log | 1.99 | 72 | 0.1459 | 0.9599 |
| No log | 2.99 | 108 | 0.0733 | 0.9882 |
| No log | 3.99 | 144 | 0.1385 | 0.9502 |
| No log | 4.99 | 180 | 0.0948 | 0.9806 |
| No log | 5.99 | 216 | 0.0699 | 0.9822 |
| No log | 6.99 | 252 | 0.0582 | 0.9859 |
| No log | 7.99 | 288 | 0.0340 | 0.9933 |
| No log | 8.99 | 324 | 0.0475 | 0.9826 |
| No log | 9.99 | 360 | 0.0518 | 0.9826 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
SaurabhKaushik/distilbert-base-uncased-finetuned-ner | SaurabhKaushik | 2022-09-24T12:38:00Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-24T11:26:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9250386398763524
- name: Recall
type: recall
value: 0.9373531714956931
- name: F1
type: f1
value: 0.9311551925320887
- name: Accuracy
type: accuracy
value: 0.9839388692074285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0589
- Precision: 0.9250
- Recall: 0.9374
- F1: 0.9312
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2343 | 1.0 | 878 | 0.0674 | 0.9177 | 0.9233 | 0.9205 | 0.9818 |
| 0.0525 | 2.0 | 1756 | 0.0582 | 0.9245 | 0.9362 | 0.9304 | 0.9837 |
| 0.0288 | 3.0 | 2634 | 0.0589 | 0.9250 | 0.9374 | 0.9312 | 0.9839 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.12.1
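A hedged inference sketch for the CoNLL-2003 NER head (the repo id comes from this row; the example sentence and aggregation strategy are assumptions):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SaurabhKaushik/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```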
|
sd-concepts-library/dr-strange | sd-concepts-library | 2022-09-24T12:11:20Z | 0 | 28 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T12:11:16Z | ---
license: mit
---
### <dr-strange> on Stable Diffusion
This is the `<dr-strange>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
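Outside those notebooks, the embedding can also be pulled in directly with 🧨 Diffusers, as in the sketch below (the base Stable Diffusion checkpoint and prompt are assumptions; only the concept repo id and the `<dr-strange>` token come from this card):
```python
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion v1-style base checkpoint should work; this one is an assumption.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/dr-strange")
image = pipe("a city skyline in the style of <dr-strange>").images[0]
image.save("dr-strange-style.png")
```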
Here is the new concept you will be able to use as a `style`:




|
RebekkaB/san_nli_2409_1325 | RebekkaB | 2022-09-24T11:50:33Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-24T11:27:27Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: san_nli_2409_1325
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# san_nli_2409_1325
This model is a fine-tuned version of [svalabs/gbert-large-zeroshot-nli](https://huggingface.co/svalabs/gbert-large-zeroshot-nli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3856
- F1: 0.9219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.93 | 10 | 0.2410 | 0.9219 |
| No log | 1.93 | 20 | 0.5240 | 0.9149 |
| No log | 2.93 | 30 | 0.4756 | 0.9219 |
| No log | 3.93 | 40 | 0.3856 | 0.9219 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
huggingtweets/cz_binance | huggingtweets | 2022-09-24T09:16:00Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-06-05T21:10:34Z | ---
language: en
thumbnail: http://www.huggingtweets.com/cz_binance/1664010956441/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572269909513478146/dfyw817W_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">CZ 🔶 Binance</div>
<div style="text-align: center; font-size: 14px;">@cz_binance</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from CZ 🔶 Binance.
| Data | CZ 🔶 Binance |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 149 |
| Short tweets | 473 |
| Tweets kept | 2624 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/19171g9o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cz_binance's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ngvvhd8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ngvvhd8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/cz_binance')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd-concepts-library/coop-himmelblau | sd-concepts-library | 2022-09-24T09:06:36Z | 0 | 6 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T09:06:32Z | ---
license: mit
---
### coop himmelblau on Stable Diffusion
This is the `<coop himmelblau>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
aniketface/DialoGPT-product | aniketface | 2022-09-24T09:05:12Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"convAI",
"conversational",
"facebook",
"en",
"dataset:blended_skill_talk",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T08:41:37Z | ---
language:
- en
thumbnail:
tags:
- convAI
- conversational
- facebook
license: apache-2.0
datasets:
- blended_skill_talk
metrics:
- perplexity
--- |
huggingtweets/pentosh1 | huggingtweets | 2022-09-24T08:03:41Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T08:02:41Z | ---
language: en
thumbnail: http://www.huggingtweets.com/pentosh1/1664006616559/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1553520707472072708/5eseDj4F_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pentoshi 🐧</div>
<div style="text-align: center; font-size: 14px;">@pentosh1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pentoshi 🐧.
| Data | Pentoshi 🐧 |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 24 |
| Short tweets | 573 |
| Tweets kept | 2645 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3kzanxqd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pentosh1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3e7vuikz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3e7vuikz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/pentosh1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/beranewsnetwork | huggingtweets | 2022-09-24T07:04:15Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T07:01:56Z | ---
language: en
thumbnail: http://www.huggingtweets.com/beranewsnetwork/1664003049616/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1445950504102735872/bCnvrgeb_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bera News Network</div>
<div style="text-align: center; font-size: 14px;">@beranewsnetwork</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bera News Network.
| Data | Bera News Network |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 1 |
| Short tweets | 579 |
| Tweets kept | 2670 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/254oa32x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beranewsnetwork's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jqeuf1y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jqeuf1y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/beranewsnetwork')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/it_airmass | huggingtweets | 2022-09-24T06:49:38Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T06:49:12Z | ---
language: en
thumbnail: http://www.huggingtweets.com/it_airmass/1664002173554/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529248676647944193/-N1UKgKg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Airmass</div>
<div style="text-align: center; font-size: 14px;">@it_airmass</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Airmass.
| Data | Airmass |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 126 |
| Short tweets | 370 |
| Tweets kept | 2753 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2f99nys0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @it_airmass's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nvbqf9p2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nvbqf9p2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/it_airmass')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/marketsmeowmeow | huggingtweets | 2022-09-24T06:43:25Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-24T06:42:56Z | ---
language: en
thumbnail: http://www.huggingtweets.com/marketsmeowmeow/1664001800470/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1570418907575377921/1mTVqZQZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">RB</div>
<div style="text-align: center; font-size: 14px;">@marketsmeowmeow</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from RB.
| Data | RB |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 14 |
| Short tweets | 700 |
| Tweets kept | 2530 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/a7yqyg23/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @marketsmeowmeow's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ou0r1v87) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ou0r1v87/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/marketsmeowmeow')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd-concepts-library/museum-by-coop-himmelblau | sd-concepts-library | 2022-09-24T06:39:31Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T06:39:25Z | ---
license: mit
---
### museum by coop himmelblau on Stable Diffusion
This is the `<coop himmelblau museum>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/ransom | sd-concepts-library | 2022-09-24T05:44:13Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-24T05:44:07Z | ---
license: mit
---
### ransom on Stable Diffusion
This is the `<ransom>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:








|
ckiplab/bert-base-chinese-qa | ckiplab | 2022-09-24T05:25:07Z | 162 | 7 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"zh",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-09-24T05:17:36Z | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- question-answering
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Base Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as the tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```python
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-base-chinese-qa')
```
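A question-answering call could then look like the following sketch (the example question and context are illustrative; `AutoModelForQuestionAnswering` is assumed to resolve the QA head this repository is tagged with):
```python
from transformers import BertTokenizerFast, AutoModelForQuestionAnswering, pipeline

# As noted above, use BertTokenizerFast rather than AutoTokenizer.
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForQuestionAnswering.from_pretrained('ckiplab/bert-base-chinese-qa')
qa = pipeline('question-answering', model=model, tokenizer=tokenizer)

print(qa(question='中央研究院在哪裡?', context='中央研究院位於臺北市南港區。'))
```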
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
huggingtweets/tim_cook | huggingtweets | 2022-09-24T01:11:00Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/tim_cook/1663981855625/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1535420431766671360/Pwq-1eJc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tim Cook</div>
<div style="text-align: center; font-size: 14px;">@tim_cook</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tim Cook.
| Data | Tim Cook |
| --- | --- |
| Tweets downloaded | 1385 |
| Retweets | 20 |
| Short tweets | 13 |
| Tweets kept | 1352 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2d94dtsh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tim_cook's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19bm0x3l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19bm0x3l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/tim_cook')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
farleyknight/arxiv-summarization-t5-base-2022-09-21 | farleyknight | 2022-09-24T00:31:57Z | 180 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:ccdv/arxiv-summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-21T20:31:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ccdv/arxiv-summarization
metrics:
- rouge
model-index:
- name: arxiv-summarization-t5-base-2022-09-21
results:
- task:
name: Summarization
type: summarization
dataset:
name: ccdv/arxiv-summarization
type: ccdv/arxiv-summarization
config: section
split: train
args: section
metrics:
- name: Rouge1
type: rouge
value: 40.6781
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arxiv-summarization-t5-base-2022-09-21
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the ccdv/arxiv-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8650
- Rouge1: 40.6781
- Rouge2: 14.7167
- Rougel: 26.6375
- Rougelsum: 35.5959
- Gen Len: 117.1969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.3291 | 0.05 | 10000 | 2.1906 | 18.6571 | 7.1341 | 14.8347 | 16.9545 | 19.0 |
| 2.2454 | 0.1 | 20000 | 2.1549 | 18.5037 | 7.1908 | 14.7141 | 16.8233 | 18.9997 |
| 2.2107 | 0.15 | 30000 | 2.1013 | 18.7638 | 7.326 | 14.9437 | 17.072 | 19.0 |
| 2.1486 | 0.2 | 40000 | 2.0845 | 18.6879 | 7.2441 | 14.8835 | 16.983 | 19.0 |
| 2.158 | 0.25 | 50000 | 2.0699 | 18.8314 | 7.3712 | 15.0166 | 17.1215 | 19.0 |
| 2.1476 | 0.3 | 60000 | 2.0424 | 18.9783 | 7.4138 | 15.1121 | 17.2778 | 18.9981 |
| 2.1164 | 0.34 | 70000 | 2.0349 | 18.9257 | 7.4649 | 15.0335 | 17.1819 | 19.0 |
| 2.079 | 0.39 | 80000 | 2.0208 | 18.643 | 7.4096 | 14.8927 | 16.9786 | 18.9994 |
| 2.101 | 0.44 | 90000 | 2.0113 | 19.3881 | 7.7012 | 15.3981 | 17.6516 | 19.0 |
| 2.0576 | 0.49 | 100000 | 2.0022 | 18.9985 | 7.542 | 15.1157 | 17.2972 | 18.9992 |
| 2.0983 | 0.54 | 110000 | 1.9941 | 18.7691 | 7.4625 | 15.0256 | 17.1146 | 19.0 |
| 2.053 | 0.59 | 120000 | 1.9855 | 19.002 | 7.5602 | 15.1497 | 17.2963 | 19.0 |
| 2.0434 | 0.64 | 130000 | 1.9786 | 19.2385 | 7.6533 | 15.3094 | 17.5439 | 18.9994 |
| 2.0354 | 0.69 | 140000 | 1.9746 | 19.184 | 7.7307 | 15.2897 | 17.491 | 18.9992 |
| 2.0347 | 0.74 | 150000 | 1.9639 | 19.2408 | 7.693 | 15.3357 | 17.5297 | 19.0 |
| 2.0236 | 0.79 | 160000 | 1.9590 | 19.0781 | 7.6256 | 15.1932 | 17.3486 | 18.9998 |
| 2.0187 | 0.84 | 170000 | 1.9532 | 19.0343 | 7.6792 | 15.1884 | 17.3519 | 19.0 |
| 1.9939 | 0.89 | 180000 | 1.9485 | 18.8247 | 7.5005 | 15.0246 | 17.1485 | 18.9998 |
| 1.9961 | 0.94 | 190000 | 1.9504 | 19.0695 | 7.6559 | 15.2139 | 17.3814 | 19.0 |
| 2.0197 | 0.99 | 200000 | 1.9399 | 19.2821 | 7.6685 | 15.3029 | 17.5374 | 18.9988 |
| 1.9457 | 1.03 | 210000 | 1.9350 | 19.053 | 7.6502 | 15.2123 | 17.3793 | 19.0 |
| 1.9552 | 1.08 | 220000 | 1.9317 | 19.1878 | 7.7235 | 15.3272 | 17.5252 | 18.9998 |
| 1.9772 | 1.13 | 230000 | 1.9305 | 19.0855 | 7.6303 | 15.1943 | 17.3942 | 18.9997 |
| 1.9171 | 1.18 | 240000 | 1.9291 | 19.0711 | 7.6437 | 15.2175 | 17.3893 | 18.9995 |
| 1.9393 | 1.23 | 250000 | 1.9230 | 19.276 | 7.725 | 15.3826 | 17.586 | 18.9995 |
| 1.9295 | 1.28 | 260000 | 1.9197 | 19.2999 | 7.7958 | 15.3961 | 17.6056 | 18.9975 |
| 1.9725 | 1.33 | 270000 | 1.9173 | 19.2958 | 7.7121 | 15.3659 | 17.584 | 19.0 |
| 1.9668 | 1.38 | 280000 | 1.9129 | 19.089 | 7.6846 | 15.2395 | 17.3879 | 18.9998 |
| 1.941 | 1.43 | 290000 | 1.9132 | 19.2127 | 7.7336 | 15.311 | 17.4742 | 18.9995 |
| 1.9427 | 1.48 | 300000 | 1.9108 | 19.217 | 7.7591 | 15.334 | 17.53 | 18.9998 |
| 1.9521 | 1.53 | 310000 | 1.9041 | 19.1285 | 7.6736 | 15.2625 | 17.458 | 19.0 |
| 1.9352 | 1.58 | 320000 | 1.9041 | 19.1656 | 7.723 | 15.3035 | 17.4818 | 18.9991 |
| 1.9342 | 1.63 | 330000 | 1.9004 | 19.2573 | 7.7766 | 15.3558 | 17.5382 | 19.0 |
| 1.9631 | 1.68 | 340000 | 1.8978 | 19.236 | 7.7584 | 15.3408 | 17.4993 | 18.9998 |
| 1.8987 | 1.72 | 350000 | 1.8968 | 19.1716 | 7.7231 | 15.2836 | 17.4655 | 18.9997 |
| 1.9433 | 1.77 | 360000 | 1.8924 | 19.2644 | 7.8294 | 15.4018 | 17.5808 | 18.9998 |
| 1.9159 | 1.82 | 370000 | 1.8912 | 19.1833 | 7.8267 | 15.3175 | 17.4918 | 18.9995 |
| 1.9516 | 1.87 | 380000 | 1.8856 | 19.3077 | 7.7432 | 15.3723 | 17.6115 | 19.0 |
| 1.9218 | 1.92 | 390000 | 1.8880 | 19.2668 | 7.8231 | 15.3834 | 17.5701 | 18.9994 |
| 1.9159 | 1.97 | 400000 | 1.8860 | 19.2224 | 7.7903 | 15.3488 | 17.4992 | 18.9997 |
| 1.8741 | 2.02 | 410000 | 1.8854 | 19.2572 | 7.741 | 15.3405 | 17.5351 | 19.0 |
| 1.8668 | 2.07 | 420000 | 1.8854 | 19.3658 | 7.8593 | 15.4418 | 17.656 | 18.9995 |
| 1.8638 | 2.12 | 430000 | 1.8831 | 19.305 | 7.8218 | 15.3843 | 17.5861 | 18.9997 |
| 1.8334 | 2.17 | 440000 | 1.8817 | 19.3269 | 7.8249 | 15.4231 | 17.5958 | 18.9994 |
| 1.8893 | 2.22 | 450000 | 1.8803 | 19.2949 | 7.7885 | 15.3947 | 17.585 | 18.9997 |
| 1.8929 | 2.27 | 460000 | 1.8783 | 19.291 | 7.8346 | 15.428 | 17.5797 | 18.9997 |
| 1.861 | 2.32 | 470000 | 1.8766 | 19.4284 | 7.8832 | 15.4746 | 17.6946 | 18.9997 |
| 1.8719 | 2.37 | 480000 | 1.8751 | 19.1525 | 7.7641 | 15.3348 | 17.47 | 18.9998 |
| 1.8889 | 2.41 | 490000 | 1.8742 | 19.1743 | 7.768 | 15.3292 | 17.4665 | 18.9998 |
| 1.8834 | 2.46 | 500000 | 1.8723 | 19.3069 | 7.7935 | 15.3987 | 17.5913 | 18.9998 |
| 1.8564 | 2.51 | 510000 | 1.8695 | 19.3217 | 7.8292 | 15.4063 | 17.6081 | 19.0 |
| 1.8706 | 2.56 | 520000 | 1.8697 | 19.294 | 7.8217 | 15.3964 | 17.581 | 18.9998 |
| 1.883 | 2.61 | 530000 | 1.8703 | 19.2784 | 7.8634 | 15.404 | 17.5942 | 18.9995 |
| 1.8622 | 2.66 | 540000 | 1.8677 | 19.3165 | 7.8378 | 15.4259 | 17.6064 | 18.9988 |
| 1.8781 | 2.71 | 550000 | 1.8676 | 19.3237 | 7.7954 | 15.3995 | 17.6008 | 19.0 |
| 1.8793 | 2.76 | 560000 | 1.8685 | 19.2141 | 7.7605 | 15.3345 | 17.5268 | 18.9997 |
| 1.8795 | 2.81 | 570000 | 1.8675 | 19.2694 | 7.8082 | 15.3996 | 17.5831 | 19.0 |
| 1.8425 | 2.86 | 580000 | 1.8659 | 19.2886 | 7.7987 | 15.4005 | 17.5859 | 18.9997 |
| 1.8605 | 2.91 | 590000 | 1.8650 | 19.2778 | 7.7934 | 15.3931 | 17.5809 | 18.9997 |
| 1.8448 | 2.96 | 600000 | 1.8655 | 19.2884 | 7.8087 | 15.4025 | 17.5856 | 19.0 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.0
- Datasets 2.5.1
- Tokenizers 0.13.0
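An inference sketch for this summarization checkpoint (the repo id comes from this row; the input text and generation lengths are assumptions):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="farleyknight/arxiv-summarization-t5-base-2022-09-21",
)
paper_text = "We study abstractive summarization of long scientific documents ..."  # placeholder input
print(summarizer(paper_text, max_length=128, min_length=32, truncation=True))
```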
|
ericntay/stbl_clinical_bert_ft_rs5 | ericntay | 2022-09-23T20:39:56Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-23T20:21:55Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft_rs5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs5
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0936
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2723 | 1.0 | 101 | 0.0875 | 0.8479 |
| 0.066 | 2.0 | 202 | 0.0688 | 0.9002 |
| 0.0328 | 3.0 | 303 | 0.0668 | 0.9070 |
| 0.0179 | 4.0 | 404 | 0.0689 | 0.9129 |
| 0.0098 | 5.0 | 505 | 0.0790 | 0.9147 |
| 0.0069 | 6.0 | 606 | 0.0805 | 0.9205 |
| 0.0033 | 7.0 | 707 | 0.0835 | 0.9268 |
| 0.0022 | 8.0 | 808 | 0.0904 | 0.9262 |
| 0.0021 | 9.0 | 909 | 0.0882 | 0.9263 |
| 0.0015 | 10.0 | 1010 | 0.0933 | 0.9289 |
| 0.0009 | 11.0 | 1111 | 0.0921 | 0.9311 |
| 0.0009 | 12.0 | 1212 | 0.0936 | 0.9268 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
subtlegradient/distilbert-base-uncased-finetuned-cola | subtlegradient | 2022-09-23T19:19:56Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T19:08:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5180
- eval_matthews_correlation: 0.4063
- eval_runtime: 0.8532
- eval_samples_per_second: 1222.419
- eval_steps_per_second: 77.353
- epoch: 1.0
- step: 535
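The matthews_correlation figure can be recomputed with the 🤗 `evaluate` library; the sketch below uses toy labels purely to show the metric call (reproducing 0.4063 would require this model's predictions on the GLUE CoLA validation split):
```python
import evaluate

matthews = evaluate.load("matthews_correlation")
# Toy predictions/references for illustration only.
print(matthews.compute(predictions=[1, 0, 1, 1, 0], references=[1, 0, 0, 1, 0]))
```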
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu116
- Datasets 2.5.1
- Tokenizers 0.12.1
|
g30rv17ys/ddpm-geeve-drusen-1000-200ep | g30rv17ys | 2022-09-23T19:12:36Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-23T15:39:11Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-drusen-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
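Until that snippet is filled in, here is a minimal sampling sketch with `DDPMPipeline`, the pipeline class this repository is tagged with (the step count and output file name are assumptions):
```python
from diffusers import DDPMPipeline

# Only the repo id is taken from this row; sampling settings are illustrative.
pipe = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-drusen-1000-200ep")
image = pipe(num_inference_steps=1000).images[0]
image.save("sample.png")
```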
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-drusen-1000-200ep/tensorboard?#scalars)
|
g30rv17ys/ddpm-geeve-cnv-1000-200ep | g30rv17ys | 2022-09-23T19:10:42Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-23T15:29:54Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-cnv-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-cnv-1000-200ep/tensorboard?#scalars)
|
g30rv17ys/ddpm-geeve-dme-1000-200ep | g30rv17ys | 2022-09-23T19:09:23Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-23T15:34:37Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-dme-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# See the sketch just below this block for a minimal sampling example.
```
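A minimal sampling sketch, not part of the original card: it assumes this checkpoint loads with the standard `DDPMPipeline` from 🤗 Diffusers, under the repository id used in the TensorBoard link below.
```python
from diffusers import DDPMPipeline

# Repository id taken from the TensorBoard link in this card; adjust if the checkpoint lives elsewhere.
pipeline = DDPMPipeline.from_pretrained("geevegeorge/ddpm-geeve-dme-1000-200ep")

# Draw a single unconditional sample and save it.
image = pipeline(num_inference_steps=1000).images[0]
image.save("ddpm_dme_sample.png")
```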
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-dme-1000-200ep/tensorboard?#scalars)
|
tszocinski/bart-base-squad-question-generation | tszocinski | 2022-09-23T18:43:43Z | 75 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-22T19:36:46Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tszocinski/bart-base-squad-question-generation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tszocinski/bart-base-squad-question-generation
This model is a fine-tuned version of [tszocinski/bart-base-squad-question-generation](https://huggingface.co/tszocinski/bart-base-squad-question-generation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.5656
- Validation Loss: 11.1958
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'RMSprop', 'config': {'name': 'RMSprop', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.5656 | 11.1958 | 0 |
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
tkuye/t5-ost | tkuye | 2022-09-23T18:40:53Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-23T17:10:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-ost
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-ost
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5695 | 0.39 | 500 | 0.0591 |
| 0.0606 | 0.77 | 1000 | 0.0588 |
| 0.0575 | 1.16 | 1500 | 0.0588 |
| 0.0551 | 1.55 | 2000 | 0.0586 |
| 0.0549 | 1.93 | 2500 | 0.0581 |
| 0.0487 | 2.32 | 3000 | 0.0597 |
| 0.0478 | 2.71 | 3500 | 0.0594 |
| 0.0463 | 3.1 | 4000 | 0.0624 |
| 0.0404 | 3.48 | 4500 | 0.0625 |
| 0.041 | 3.87 | 5000 | 0.0617 |
| 0.0366 | 4.26 | 5500 | 0.0656 |
| 0.0347 | 4.64 | 6000 | 0.0658 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
mfreihaut/iab_classification-finetuned-mnli-finetuned-mnli | mfreihaut | 2022-09-23T18:20:23Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"bart",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-21T18:05:28Z | ---
tags:
- generated_from_trainer
model-index:
- name: iab_classification-finetuned-mnli-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iab_classification-finetuned-mnli-finetuned-mnli
This model is a fine-tuned version of [mfreihaut/iab_classification-finetuned-mnli-finetuned-mnli](https://huggingface.co/mfreihaut/iab_classification-finetuned-mnli-finetuned-mnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 250 | 1.5956 |
| 0.9361 | 2.0 | 500 | 0.0409 |
| 0.9361 | 3.0 | 750 | 2.9853 |
| 0.7634 | 4.0 | 1000 | 0.1317 |
| 0.7634 | 5.0 | 1250 | 0.4056 |
| 0.611 | 6.0 | 1500 | 1.8038 |
| 0.611 | 7.0 | 1750 | 0.6305 |
| 0.5627 | 8.0 | 2000 | 0.6923 |
| 0.5627 | 9.0 | 2250 | 3.7410 |
| 0.9863 | 10.0 | 2500 | 2.1912 |
| 0.9863 | 11.0 | 2750 | 1.5405 |
| 1.0197 | 12.0 | 3000 | 1.9271 |
| 1.0197 | 13.0 | 3250 | 1.1741 |
| 0.5186 | 14.0 | 3500 | 1.1864 |
| 0.5186 | 15.0 | 3750 | 0.7945 |
| 0.4042 | 16.0 | 4000 | 1.0645 |
| 0.4042 | 17.0 | 4250 | 1.8826 |
| 0.3637 | 18.0 | 4500 | 0.3234 |
| 0.3637 | 19.0 | 4750 | 0.2641 |
| 0.3464 | 20.0 | 5000 | 0.8596 |
| 0.3464 | 21.0 | 5250 | 0.5601 |
| 0.2449 | 22.0 | 5500 | 0.4543 |
| 0.2449 | 23.0 | 5750 | 1.1986 |
| 0.2595 | 24.0 | 6000 | 0.3642 |
| 0.2595 | 25.0 | 6250 | 1.3606 |
| 0.298 | 26.0 | 6500 | 0.8154 |
| 0.298 | 27.0 | 6750 | 1.1105 |
| 0.1815 | 28.0 | 7000 | 0.7443 |
| 0.1815 | 29.0 | 7250 | 0.2616 |
| 0.165 | 30.0 | 7500 | 0.5318 |
| 0.165 | 31.0 | 7750 | 0.7608 |
| 0.1435 | 32.0 | 8000 | 0.9647 |
| 0.1435 | 33.0 | 8250 | 1.3749 |
| 0.1516 | 34.0 | 8500 | 0.7167 |
| 0.1516 | 35.0 | 8750 | 0.5426 |
| 0.1359 | 36.0 | 9000 | 0.7225 |
| 0.1359 | 37.0 | 9250 | 0.5453 |
| 0.1266 | 38.0 | 9500 | 0.4825 |
| 0.1266 | 39.0 | 9750 | 0.7271 |
| 0.1153 | 40.0 | 10000 | 0.9044 |
| 0.1153 | 41.0 | 10250 | 1.0363 |
| 0.1175 | 42.0 | 10500 | 0.7987 |
| 0.1175 | 43.0 | 10750 | 0.7596 |
| 0.1089 | 44.0 | 11000 | 0.8637 |
| 0.1089 | 45.0 | 11250 | 0.8327 |
| 0.1092 | 46.0 | 11500 | 0.7161 |
| 0.1092 | 47.0 | 11750 | 0.7768 |
| 0.1068 | 48.0 | 12000 | 0.9059 |
| 0.1068 | 49.0 | 12250 | 0.8829 |
| 0.1045 | 50.0 | 12500 | 0.8711 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
carbon225/transforchess-bart-base | carbon225 | 2022-09-23T18:13:23Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-07-31T18:02:04Z | ---
license: cc0-1.0
widget:
- text: " White pawn to d4. Black knight to f6."
---
|
sd-concepts-library/sintez-ico | sd-concepts-library | 2022-09-23T18:13:17Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-23T18:13:03Z | ---
license: mit
---
### sintez-ico on Stable Diffusion
This is the `<sintez-ico>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
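A minimal loading sketch, not part of the original card: it assumes a recent 🤗 Diffusers release where `StableDiffusionPipeline.load_textual_inversion` can pull this repository's learned embedding directly, and the base model id below is an assumption (textual-inversion concepts of this era were typically trained against Stable Diffusion v1-4).
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; swap in whichever Stable Diffusion checkpoint you normally use.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the learned <sintez-ico> token from this repository.
pipe.load_textual_inversion("sd-concepts-library/sintez-ico")

image = pipe("a poster in the style of <sintez-ico>").images[0]
image.save("sintez-ico-style.png")
```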
Here is the new concept you will be able to use as a `style`:








|
Eulering/moonlight-night | Eulering | 2022-09-23T14:47:20Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2022-09-23T14:47:20Z | ---
license: bigscience-openrail-m
---
|
gokuls/bert-base-Massive-intent | gokuls | 2022-09-23T14:26:09Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T13:38:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: bert-base-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8858829316281358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-Massive-intent
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6707
- Accuracy: 0.8859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6844 | 1.0 | 720 | 0.7190 | 0.8387 |
| 0.4713 | 2.0 | 1440 | 0.5449 | 0.8726 |
| 0.2459 | 3.0 | 2160 | 0.5893 | 0.8790 |
| 0.1469 | 4.0 | 2880 | 0.6631 | 0.8795 |
| 0.0874 | 5.0 | 3600 | 0.6707 | 0.8859 |
| 0.0507 | 6.0 | 4320 | 0.7189 | 0.8844 |
| 0.0344 | 7.0 | 5040 | 0.7480 | 0.8854 |
| 0.0225 | 8.0 | 5760 | 0.7956 | 0.8844 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
bhumikak/resultsb | bhumikak | 2022-09-23T14:21:23Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-23T13:46:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: resultsb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resultsb
This model is a fine-tuned version of [bhumikak/resultsa](https://huggingface.co/bhumikak/resultsa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8957
- Rouge2 Precision: 0.2127
- Rouge2 Recall: 0.2605
- Rouge2 Fmeasure: 0.2167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 50
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Yousef-Cot/distilbert-base-uncased-finetuned-emotion | Yousef-Cot | 2022-09-23T13:21:28Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T07:18:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9218038766645168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9215
- F1: 0.9218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8242 | 1.0 | 250 | 0.3311 | 0.8965 | 0.8931 |
| 0.254 | 2.0 | 500 | 0.2201 | 0.9215 | 0.9218 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.4.0
- Tokenizers 0.11.6
|
combi2k2/MRC001 | combi2k2 | 2022-09-23T13:01:39Z | 0 | 0 | null | [
"vi",
"xlm-roberta",
"dataset:UIT-ViQuAD2.0",
"license:mit",
"region:us"
]
| null | 2022-09-18T03:23:55Z | ---
language: vi
tags:
- vi
- xlm-roberta
widget:
- text: Toà nhà nào cao nhất Việt Nam?
context: Landmark 81 là một toà nhà chọc trời trong tổ hợp dự án Vinhomes Tân Cảng,
một dự án có tổng mức đầu tư 40.000 tỷ đồng, do Công ty Cổ phần Đầu tư xây dựng
Tân Liên Phát thuộc Vingroup làm chủ đầu tư. Toà tháp cao 81 tầng, hiện tại là
toà nhà cao nhất Việt Nam và là toà nhà cao nhất Đông Nam Á từ tháng 3 năm 2018.
datasets:
- UIT-ViQuAD2.0
license: mit
metrics:
- f1
- em
---
# Machine Reading Comprehension Vietnamese
**[Colab Notebook](https://colab.research.google.com/drive/1JeyjSluVLIoZGzC_kOq6HXGUX-JMN3VP?usp=sharing)**
## Overview
- Language model: xlm-roberta-base
- Language: Vietnamese
- Downstream-task: Extractive QA
- Dataset: [UIT-ViQuAD2.0](https://paperswithcode.com/dataset/uit-viquad)
- Dataset Format: SQuAD 2.0
- Infrastructure: cuda Tesla P100-PCIE-16GB (Google Colab)
## Requirements
The following modules are essential for running the trainer:
- **transformers**
- **datasets**
- **evaluate**
- **numpy**
Run the following commands to install the required libraries:
```
>>> pip install datasets evaluate numpy
>>> pip install git+https://github.com/huggingface/transformers
```
## Hyperparameter
```
batch_size = 16
n_epochs = 10
base_LM_model = "xlm-roberta-base"
max_seq_len = 256
learning_rate = 2e-5
weight_decay = 0.01
```
## Performance
Evaluated on the UIT-ViQuAD2.0 dev set with the official eval script.
```
'exact': 29.947276,
'f1': 43.627568,
'total': 2845,
'HasAns_exact': 43.827160,
'HasAns_f1': 63.847958,
'HasAns_total': 1944,
'NoAns_exact': 0.0,
'NoAns_f1': 0.0,
'NoAns_total': 901
```
## Usage
```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    pipeline,
)
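# Local checkpoint path from the author's training run; replace with your own checkpoint or a Hub model id.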
model_checkpoint = "results/checkpoint-16000"
question_answerer = pipeline("question-answering", model = model_checkpoint)
# a) get predictions
QA_input = {
'question': 'Hiến pháp Mali quy định thế nào đối với tôn giáo?',
'context': 'Ước tính có khoảng 90% dân số Mali theo đạo Hồi (phần lớn là hệ phái Sunni), khoảng 5% là theo Kitô giáo (khoảng hai phần ba theo Giáo hội Công giáo Rôma và một phần ba là theo Tin Lành) và 5% còn lại theo các tín ngưỡng vật linh truyền thống bản địa. Một số ít người Mali theo thuyết vô thần và thuyết bất khả tri, phần lớn họ thực hiện những nghi lễ tôn giáo cơ bản hằng ngày. Các phong tục Hồi giáo ở Mali có mức độ vừa phải, khoan dung, và đã thay đổi theo các điều kiện của địa phương; các mối quan hệ giữa người Hồi giáo và các cộng đồng tôn giáo nhỏ khác nói chung là thân thiện. Hiến pháp của Mali đã quy định một thể chế nhà nước thế tục và ủng hộ quyền tự do tôn giáo, và chính phủ Mali phải đảm bảo quyền này.'
}
res = question_answerer(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```
## Author
Duc Nguyen
## Citation
```
Kiet Van Nguyen, Son Quoc Tran, Luan Thanh Nguyen, Tin Van Huynh, Son T. Luu, Ngan Luu-Thuy Nguyen. "VLSP 2021 Shared Task: Vietnamese Machine Reading Comprehension." The 8th International Workshop on Vietnamese Language and Speech Processing (VLSP 2021) .
```
|
huggingtweets/cushbomb | huggingtweets | 2022-09-23T12:40:19Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/cushbomb/1663936814713/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1560517790900969473/MPbfc6w2_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">matt christman</div>
<div style="text-align: center; font-size: 14px;">@cushbomb</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from matt christman.
| Data | matt christman |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 241 |
| Short tweets | 685 |
| Tweets kept | 2304 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39bxpmve/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cushbomb's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gd8zqob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gd8zqob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cushbomb')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rossedwa/bert-take-uncased-f1-epoch-8 | rossedwa | 2022-09-23T12:27:37Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T12:16:01Z | F1 score: 85
Kaggle score: 0.71711 |
pulkitkumar13/dark-bert-finetuned-ner1 | pulkitkumar13 | 2022-09-23T11:02:45Z | 110 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-23T10:40:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: dark-bert-finetuned-ner1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9337419247970846
- name: Recall
type: recall
value: 0.9486704813194211
- name: F1
type: f1
value: 0.9411470072627097
- name: Accuracy
type: accuracy
value: 0.9861364572908695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dark-bert-finetuned-ner1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0833
- Precision: 0.9337
- Recall: 0.9487
- F1: 0.9411
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0358 | 1.0 | 1756 | 0.0780 | 0.9283 | 0.9409 | 0.9346 | 0.9844 |
| 0.0172 | 2.0 | 3512 | 0.0708 | 0.9375 | 0.9488 | 0.9431 | 0.9860 |
| 0.0056 | 3.0 | 5268 | 0.0833 | 0.9337 | 0.9487 | 0.9411 | 0.9861 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
rinascimento/distilbert-base-uncased-finetuned-emotion | rinascimento | 2022-09-23T09:52:40Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T06:15:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241401774459951
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2167
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.815 | 1.0 | 250 | 0.3051 | 0.9045 | 0.9022 |
| 0.2496 | 2.0 | 500 | 0.2167 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
bryanleeharyanto/vtt-indonesia | bryanleeharyanto | 2022-09-23T06:39:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-20T07:59:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: vtt-indonesia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vtt-indonesia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3472
- Wer: 0.3582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7612 | 3.23 | 400 | 0.6405 | 0.6714 |
| 0.4143 | 6.45 | 800 | 0.3772 | 0.4974 |
| 0.2068 | 9.68 | 1200 | 0.3877 | 0.4442 |
| 0.1436 | 12.9 | 1600 | 0.3785 | 0.4212 |
| 0.1133 | 16.13 | 2000 | 0.3944 | 0.4144 |
| 0.09 | 19.35 | 2400 | 0.3695 | 0.3925 |
| 0.0705 | 22.58 | 2800 | 0.3706 | 0.3846 |
| 0.057 | 25.81 | 3200 | 0.3720 | 0.3725 |
| 0.048 | 29.03 | 3600 | 0.3472 | 0.3582 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ryuno25/t5-base-finetuned-eli-5 | ryuno25 | 2022-09-23T06:29:14Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-23T04:40:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-base-finetuned-eli-5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 13.4
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-eli-5
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4557
- Rouge1: 13.4
- Rouge2: 1.9415
- Rougel: 10.4671
- Rougelsum: 12.0693
- Gen Len: 18.9529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 3.6754 | 1.0 | 8520 | 3.4557 | 13.4 | 1.9415 | 10.4671 | 12.0693 | 18.9529 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
SmilestheSad/hf_trainer | SmilestheSad | 2022-09-23T04:39:53Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T03:36:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: hf_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf_trainer
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0708
- F1: 0.9066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0344 | 1.0 | 565 | 0.0661 | 0.8811 |
| 0.0354 | 2.0 | 1130 | 0.0641 | 0.8963 |
| 0.0222 | 3.0 | 1695 | 0.0690 | 0.8994 |
| 0.0145 | 4.0 | 2260 | 0.0714 | 0.9036 |
| 0.011 | 5.0 | 2825 | 0.0708 | 0.9066 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
nvidia/tts_en_fastpitch | nvidia | 2022-09-23T04:28:43Z | 804 | 39 | nemo | [
"nemo",
"text-to-speech",
"speech",
"audio",
"Transformer",
"pytorch",
"NeMo",
"Riva",
"en",
"dataset:ljspeech",
"arxiv:2006.06873",
"arxiv:2108.10447",
"license:cc-by-4.0",
"region:us"
]
| text-to-speech | 2022-06-28T17:55:51Z | ---
language:
- en
library_name: nemo
datasets:
- ljspeech
thumbnail: null
tags:
- text-to-speech
- speech
- audio
- Transformer
- pytorch
- NeMo
- Riva
license: cc-by-4.0
---
# NVIDIA FastPitch (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
FastPitch [1] is a fully-parallel transformer architecture with prosody control over pitch and individual phoneme duration. Additionally, it uses an unsupervised speech-text aligner [2]. See the [model architecture](#model-architecture) section for complete architecture details.
It is also compatible with NVIDIA Riva for [production-grade server deployments](#deployment-with-nvidia-riva).
## Usage
The model is available for use in the NeMo toolkit [3] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
### Automatically instantiate the model
Note: This model generates only spectrograms and a vocoder is needed to convert the spectrograms to waveforms.
In this example HiFiGAN is used.
```python
# Load FastPitch
from nemo.collections.tts.models import FastPitchModel
spec_generator = FastPitchModel.from_pretrained("nvidia/tts_en_fastpitch")
# Load vocoder
from nemo.collections.tts.models import HifiGanModel
model = HifiGanModel.from_pretrained(model_name="nvidia/tts_hifigan")
```
### Generate audio
```python
import soundfile as sf
parsed = spec_generator.parse("You can type your sentence here to get nemo to produce speech.")
spectrogram = spec_generator.generate_spectrogram(tokens=parsed)
audio = model.convert_spectrogram_to_audio(spec=spectrogram)
```
### Save the generated audio file
```python
# Save the audio to disk in a file called speech.wav
sf.write("speech.wav", audio.to('cpu').detach().numpy()[0], 22050)
```
### Input
This model accepts batches of text.
### Output
This model generates mel spectrograms.
## Model Architecture
FastPitch is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The model predicts pitch contours during inference. By altering these predictions, the generated speech can be more expressive, better match the semantics of the utterance, and, in the end, be more engaging to the listener. FastPitch is based on a fully-parallel Transformer architecture, with a much higher real-time factor than Tacotron2 for the mel-spectrogram synthesis of a typical utterance. It uses an unsupervised speech-text aligner.
## Training
The NeMo toolkit [3] was used for training the models for 1000 epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/fastpitch.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/fastpitch_align_v1.05.yaml).
### Datasets
This model is trained on LJSpeech sampled at 22050Hz, and has been tested on generating female English voices with an American accent.
## Performance
No performance information is available at this time.
## Limitations
This checkpoint only works well with vocoders that were trained on 22050Hz data. Otherwise, the generated audio may be scratchy or choppy-sounding.
## Deployment with NVIDIA Riva
For the best real-time accuracy, latency, and throughput, deploy the model with [NVIDIA Riva](https://developer.nvidia.com/riva), an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and Enterprise-grade support
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
- [1] [FastPitch: Parallel Text-to-speech with Pitch Prediction](https://arxiv.org/abs/2006.06873)
- [2] [One TTS Alignment To Rule Them All](https://arxiv.org/abs/2108.10447)
- [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) |
apapiu/diffusion_model_aesthetic_keras | apapiu | 2022-09-23T03:56:11Z | 0 | 1 | null | [
"license:openrail",
"region:us"
]
| null | 2022-09-21T19:14:31Z | ---
license: openrail
---
A sample from the [Laion 6.5+ ](https://laion.ai/blog/laion-aesthetics/) image + text dataset. You can see
some samples [here](http://captions.christoph-schuhmann.de/2B-en-6.5.html).
The samples are resized + center-cropped to 64x64x3 and the .npz file also contains CLIP embeddings.
TODO: add img2dataset script.
The data can be used to train a basic text-to-image model.
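A minimal loading sketch, not from the original card: the archive filename below is a placeholder and the stored array names are not confirmed here, so inspect the file first.
```python
import numpy as np

# Placeholder filename; use the actual .npz file shipped in this repo.
archive = np.load("samples.npz")
print(archive.files)  # list the stored arrays (images, CLIP embeddings, ...)

images = archive[archive.files[0]]  # expected to include (N, 64, 64, 3) center-cropped images
print(images.shape)
```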
|
wenkai-li/new_classifer_epoch7 | wenkai-li | 2022-09-23T03:35:38Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T02:09:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: new_classifer_epoch7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_classifer_epoch7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1305
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0526 | 1.0 | 4248 | 0.0587 | 0.9797 |
| 0.0259 | 2.0 | 8496 | 0.0502 | 0.9855 |
| 0.0121 | 3.0 | 12744 | 0.1170 | 0.9773 |
| 0.0051 | 4.0 | 16992 | 0.1379 | 0.9811 |
| 0.0026 | 5.0 | 21240 | 0.1014 | 0.9869 |
| 0.0013 | 6.0 | 25488 | 0.1312 | 0.9859 |
| 0.0002 | 7.0 | 29736 | 0.1305 | 0.9861 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
spacemanidol/t5-base-all-rewrite-correct-unchaged-no-prefix | spacemanidol | 2022-09-23T02:59:17Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-19T19:44:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-all-rewrite-correct-unchaged-no-prefix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-all-rewrite-correct-unchaged-no-prefix
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.7.0+cu110
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jamiehuang/t5-base-finetuned-xsum | jamiehuang | 2022-09-23T02:11:02Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-22T15:06:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
model-index:
- name: t5-base-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-xsum
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
neelmehta00/t5-base-finetuned-eli5 | neelmehta00 | 2022-09-22T23:16:27Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-22T15:04:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-base-finetuned-eli5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 14.5658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-eli5
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1765
- Rouge1: 14.5658
- Rouge2: 2.2777
- Rougel: 11.2826
- Rougelsum: 13.1136
- Gen Len: 18.9938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.3398 | 1.0 | 17040 | 3.1765 | 14.5658 | 2.2777 | 11.2826 | 13.1136 | 18.9938 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Kevin123/distilbert-base-uncased-finetuned-cola | Kevin123 | 2022-09-22T22:39:03Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T21:03:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5474713423103301
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8663
- Matthews Correlation: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5248 | 1.0 | 535 | 0.5171 | 0.4210 |
| 0.3418 | 2.0 | 1070 | 0.4971 | 0.5236 |
| 0.2289 | 3.0 | 1605 | 0.6874 | 0.5023 |
| 0.1722 | 4.0 | 2140 | 0.7680 | 0.5392 |
| 0.118 | 5.0 | 2675 | 0.8663 | 0.5475 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
JJRohan/ppo-LunarLander-v2 | JJRohan | 2022-09-22T21:12:36Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-22T21:12:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 169.43 +/- 77.42
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the original card left this as a TODO; the checkpoint filename below is an assumption based on the usual `<algo>-<env>.zip` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed archive name inside the repo; adjust if the uploaded file is named differently.
checkpoint = load_from_hub("JJRohan/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nlp-guild/bert-base-chinese-finetuned-intent_recognition-biomedical | nlp-guild | 2022-09-22T20:06:57Z | 136 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T19:42:37Z | Fine-tuned bert-base-chinese for the intent recognition task on the [intent-recognition-biomedical dataset](https://huggingface.co/datasets/nlp-guild/intent-recognition-biomedical).
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
tokenizer = AutoTokenizer.from_pretrained("nlp-guild/bert-base-chinese-finetuned-intent_recognition-biomedical")
model = AutoModelForSequenceClassification.from_pretrained("nlp-guild/bert-base-chinese-finetuned-intent_recognition-biomedical")
nlp = TextClassificationPipeline(model = model, tokenizer = tokenizer)
label_set = [
'定义',
'病因',
'预防',
'临床表现(病症表现)',
'相关病症',
'治疗方法',
'所属科室',
'传染性',
'治愈率',
'禁忌',
'化验/体检方案',
'治疗时间',
'其他'
]
def readable_results(top_k:int, usr_query:str):
raw = nlp(usr_query, top_k = top_k)
def f(x):
index = int(x['label'][6:])
x['label'] = label_set[index]
for i in raw:
f(i)
return raw
readable_results(3,'得了心脏病怎么办')
'''
[{'label': '治疗方法', 'score': 0.9994503855705261},
{'label': '其他', 'score': 0.00018375989748165011},
{'label': '临床表现(病症表现)', 'score': 0.00010841596667887643}]
'''
``` |
SharpAI/benign-net-traffic-v2-t5-l12 | SharpAI | 2022-09-22T20:06:09Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-21T00:02:21Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: benign-net-traffic-v2-t5-l12
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# benign-net-traffic-v2-t5-l12
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
TingChenChang/hpvqa-lcqmc-ocnli-cnsd-multi-MiniLM-v2 | TingChenChang | 2022-09-22T19:23:21Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-22T19:23:08Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 12 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-06
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 12,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jayanta/twitter-roberta-base-sentiment-sentiment-memes-30epcohs | jayanta | 2022-09-22T19:04:33Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T14:38:21Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-roberta-base-sentiment-sentiment-memes-30epcohs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-sentiment-memes-30epcohs
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3027
- Accuracy: 0.8517
- Precision: 0.8536
- Recall: 0.8517
- F1: 0.8523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2504 | 1.0 | 2147 | 0.7129 | 0.8087 | 0.8112 | 0.8087 | 0.8036 |
| 0.2449 | 2.0 | 4294 | 0.7500 | 0.8229 | 0.8279 | 0.8229 | 0.8240 |
| 0.2652 | 3.0 | 6441 | 0.7460 | 0.8181 | 0.8185 | 0.8181 | 0.8149 |
| 0.2585 | 4.0 | 8588 | 0.7906 | 0.8155 | 0.8152 | 0.8155 | 0.8153 |
| 0.2534 | 5.0 | 10735 | 0.8178 | 0.8061 | 0.8180 | 0.8061 | 0.8080 |
| 0.2498 | 6.0 | 12882 | 0.8139 | 0.8166 | 0.8163 | 0.8166 | 0.8164 |
| 0.2825 | 7.0 | 15029 | 0.7494 | 0.8155 | 0.8210 | 0.8155 | 0.8168 |
| 0.2459 | 8.0 | 17176 | 0.8870 | 0.8061 | 0.8122 | 0.8061 | 0.8075 |
| 0.2303 | 9.0 | 19323 | 0.8699 | 0.7987 | 0.8060 | 0.7987 | 0.8003 |
| 0.2425 | 10.0 | 21470 | 0.8043 | 0.8244 | 0.8275 | 0.8244 | 0.8253 |
| 0.2143 | 11.0 | 23617 | 0.9163 | 0.8208 | 0.8251 | 0.8208 | 0.8219 |
| 0.2054 | 12.0 | 25764 | 0.8330 | 0.8239 | 0.8258 | 0.8239 | 0.8245 |
| 0.208 | 13.0 | 27911 | 1.0673 | 0.8134 | 0.8216 | 0.8134 | 0.8150 |
| 0.1668 | 14.0 | 30058 | 0.9071 | 0.8270 | 0.8276 | 0.8270 | 0.8273 |
| 0.1571 | 15.0 | 32205 | 0.9294 | 0.8339 | 0.8352 | 0.8339 | 0.8344 |
| 0.1857 | 16.0 | 34352 | 0.9909 | 0.8354 | 0.8350 | 0.8354 | 0.8352 |
| 0.1476 | 17.0 | 36499 | 0.9747 | 0.8433 | 0.8436 | 0.8433 | 0.8434 |
| 0.1341 | 18.0 | 38646 | 0.9372 | 0.8422 | 0.8415 | 0.8422 | 0.8415 |
| 0.1181 | 19.0 | 40793 | 1.0301 | 0.8433 | 0.8443 | 0.8433 | 0.8437 |
| 0.1192 | 20.0 | 42940 | 1.1332 | 0.8407 | 0.8415 | 0.8407 | 0.8410 |
| 0.0983 | 21.0 | 45087 | 1.2002 | 0.8428 | 0.8498 | 0.8428 | 0.8440 |
| 0.0951 | 22.0 | 47234 | 1.2141 | 0.8475 | 0.8504 | 0.8475 | 0.8483 |
| 0.0784 | 23.0 | 49381 | 1.1652 | 0.8407 | 0.8453 | 0.8407 | 0.8417 |
| 0.0623 | 24.0 | 51528 | 1.1730 | 0.8417 | 0.8443 | 0.8417 | 0.8425 |
| 0.054 | 25.0 | 53675 | 1.2900 | 0.8454 | 0.8496 | 0.8454 | 0.8464 |
| 0.0584 | 26.0 | 55822 | 1.2831 | 0.8480 | 0.8497 | 0.8480 | 0.8486 |
| 0.0531 | 27.0 | 57969 | 1.3043 | 0.8506 | 0.8524 | 0.8506 | 0.8512 |
| 0.0522 | 28.0 | 60116 | 1.2891 | 0.8527 | 0.8554 | 0.8527 | 0.8534 |
| 0.037 | 29.0 | 62263 | 1.3077 | 0.8538 | 0.8559 | 0.8538 | 0.8544 |
| 0.038 | 30.0 | 64410 | 1.3027 | 0.8517 | 0.8536 | 0.8517 | 0.8523 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.1
|
cjj8168/stress_dreaddit | cjj8168 | 2022-09-22T16:46:33Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T16:44:29Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: stress_dreaddit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stress_dreaddit
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
facebook/spar-wiki-bm25-lexmodel-query-encoder | facebook | 2022-09-22T16:44:45Z | 111 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2110.06918",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T21:44:05Z | ---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
This model is the query encoder of the Wiki BM25 Lexical Model (Λ) from the SPAR paper:
[Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?](https://arxiv.org/abs/2110.06918)
<br>
Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta and Wen-tau Yih
<br>
**Meta AI**
The associated github repo is available here: https://github.com/facebookresearch/dpr-scale/tree/main/spar
This model is a BERT-base sized dense retriever trained on Wikipedia articles to imitate the behavior of BM25.
The following models are also available:
Pretrained Model | Corpus | Teacher | Architecture | Query Encoder Path | Context Encoder Path
|---|---|---|---|---|---
Wiki BM25 Λ | Wikipedia | BM25 | BERT-base | facebook/spar-wiki-bm25-lexmodel-query-encoder | facebook/spar-wiki-bm25-lexmodel-context-encoder
PAQ BM25 Λ | PAQ | BM25 | BERT-base | facebook/spar-paq-bm25-lexmodel-query-encoder | facebook/spar-paq-bm25-lexmodel-context-encoder
MARCO BM25 Λ | MS MARCO | BM25 | BERT-base | facebook/spar-marco-bm25-lexmodel-query-encoder | facebook/spar-marco-bm25-lexmodel-context-encoder
MARCO UniCOIL Λ | MS MARCO | UniCOIL | BERT-base | facebook/spar-marco-unicoil-lexmodel-query-encoder | facebook/spar-marco-unicoil-lexmodel-context-encoder
# Using the Lexical Model (Λ) Alone
This model should be used together with the associated context encoder, similar to the [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr) model.
```
import torch
from transformers import AutoTokenizer, AutoModel
# The tokenizer is the same for the query and context encoder
tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
query_input = tokenizer(query, return_tensors='pt')
ctx_input = tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
# Compute embeddings: take the last-layer hidden state of the [CLS] token
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
# Compute similarity scores using dot product
score1 = query_emb @ ctx_emb[0] # 341.3268
score2 = query_emb @ ctx_emb[1] # 340.1626
```
# Using the Lexical Model (Λ) with a Base Dense Retriever as in SPAR
As Λ learns lexical matching from a sparse teacher retriever, it can be used in combination with a standard dense retriever (e.g. [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr#dpr), [Contriever](https://huggingface.co/facebook/contriever-msmarco)) to build a dense retriever that excels at both lexical and semantic matching.
In the following example, we show how to build the SPAR-Wiki model for Open-Domain Question Answering by concatenating the embeddings of DPR and the Wiki BM25 Λ.
```
import torch
from transformers import AutoTokenizer, AutoModel
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
# DPR model
dpr_ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_query_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
dpr_query_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
# Wiki BM25 Λ model
lexmodel_tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Compute DPR embeddings
dpr_query_input = dpr_query_tokenizer(query, return_tensors='pt')['input_ids']
dpr_query_emb = dpr_query_encoder(dpr_query_input).pooler_output
dpr_ctx_input = dpr_ctx_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
dpr_ctx_emb = dpr_ctx_encoder(**dpr_ctx_input).pooler_output
# Compute Λ embeddings
lexmodel_query_input = lexmodel_tokenizer(query, return_tensors='pt')
lexmodel_query_emb = lexmodel_query_encoder(**lexmodel_query_input).last_hidden_state[:, 0, :]
lexmodel_ctx_input = lexmodel_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
lexmodel_ctx_emb = lexmodel_context_encoder(**lexmodel_ctx_input).last_hidden_state[:, 0, :]
# Form SPAR embeddings via concatenation
# The concatenation weight is only applied to query embeddings
# Refer to the SPAR paper for details
concat_weight = 0.7
spar_query_emb = torch.cat(
[dpr_query_emb, concat_weight * lexmodel_query_emb],
dim=-1,
)
spar_ctx_emb = torch.cat(
[dpr_ctx_emb, lexmodel_ctx_emb],
dim=-1,
)
# Compute similarity scores
score1 = spar_query_emb @ spar_ctx_emb[0] # 317.6931
score2 = spar_query_emb @ spar_ctx_emb[1] # 314.6144
```
|
CoreyMorris/Reinforce-cartpole-v1 | CoreyMorris | 2022-09-22T16:21:40Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-22T16:20:39Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Chemsseddine/distilbert-base-uncased-finetuned-cola | Chemsseddine | 2022-09-22T15:31:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-07T17:23:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 2.1485 |
| No log | 2.0 | 10 | 2.0983 |
| No log | 3.0 | 15 | 2.0499 |
| No log | 4.0 | 20 | 2.0155 |
| No log | 5.0 | 25 | 2.0011 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
jayanta/twitter-roberta-base-sentiment-sentiment-memes | jayanta | 2022-09-22T14:35:05Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-19T15:25:56Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-roberta-base-sentiment-sentiment-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-sentiment-memes
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9582
- Accuracy: 0.8187
- Precision: 0.8199
- Recall: 0.8187
- F1: 0.8191
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4673 | 1.0 | 2147 | 0.4373 | 0.7647 | 0.8180 | 0.7647 | 0.7657 |
| 0.3987 | 2.0 | 4294 | 0.5528 | 0.7783 | 0.8096 | 0.7783 | 0.7804 |
| 0.3194 | 3.0 | 6441 | 0.6432 | 0.7752 | 0.7767 | 0.7752 | 0.7680 |
| 0.2855 | 4.0 | 8588 | 0.6820 | 0.7814 | 0.8034 | 0.7814 | 0.7837 |
| 0.2575 | 5.0 | 10735 | 0.7427 | 0.7720 | 0.8070 | 0.7720 | 0.7741 |
| 0.2154 | 6.0 | 12882 | 0.8225 | 0.7987 | 0.8062 | 0.7987 | 0.8004 |
| 0.2195 | 7.0 | 15029 | 0.8361 | 0.8071 | 0.8086 | 0.8071 | 0.8077 |
| 0.2322 | 8.0 | 17176 | 0.8842 | 0.8056 | 0.8106 | 0.8056 | 0.8069 |
| 0.2102 | 9.0 | 19323 | 0.9188 | 0.8129 | 0.8144 | 0.8129 | 0.8135 |
| 0.1893 | 10.0 | 21470 | 0.9582 | 0.8187 | 0.8199 | 0.8187 | 0.8191 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.1
|
huggingtweets/slime_machine | huggingtweets | 2022-09-22T14:09:28Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/slime_machine/1663855763474/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1554733825220939777/lgFt_2e1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">slime</div>
<div style="text-align: center; font-size: 14px;">@slime_machine</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from slime.
| Data | slime |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 441 |
| Short tweets | 589 |
| Tweets kept | 2199 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s9inuxg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @slime_machine's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5xjy8nrj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5xjy8nrj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/slime_machine')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd-concepts-library/pixel-mania | sd-concepts-library | 2022-09-22T14:05:08Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T05:26:54Z | ---
license: mit
---
### pixel-mania on Stable Diffusion
This is the `<pixel-mania>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
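Alternatively, as an illustrative sketch (not part of the original instructions, and assuming a recent 🤗 Diffusers release plus a compatible Stable Diffusion 1.x base checkpoint), the learned embedding can be loaded directly into a pipeline:
```python
import torch
from diffusers import StableDiffusionPipeline

# The base checkpoint is an assumption; any SD 1.x checkpoint with a matching text encoder should work
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Registers the <pixel-mania> token with the tokenizer and text encoder
pipe.load_textual_inversion("sd-concepts-library/pixel-mania")

image = pipe("a city skyline in the style of <pixel-mania>").images[0]
image.save("pixel-mania-skyline.png")
```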
|
m-lin20/satellite-instrument-roberta-NER | m-lin20 | 2022-09-22T13:33:22Z | 108 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
language: "pt"
widget:
- text: "Poised for launch in mid-2021, the joint NASA-USGS Landsat 9 mission will continue this important data record. In many respects Landsat 9 is a clone of Landsat-8. The Operational Land Imager-2 (OLI-2) is largely identical to Landsat 8 OLI, providing calibrated imagery covering the solar reflected wavelengths. The Thermal Infrared Sensor-2 (TIRS-2) improves upon Landsat 8 TIRS, addressing known issues including stray light incursion and a malfunction of the instrument scene select mirror. In addition, Landsat 9 adds redundancy to TIRS-2, thus upgrading the instrument to a 5-year design life commensurate with other elements of the mission. Initial performance testing of OLI-2 and TIRS-2 indicate that the instruments are of excellent quality and expected to match or improve on Landsat 8 data quality. "
example_title: "example 1"
- text: "Compared to its predecessor, Jason-3, the two AMR-C radiometer instruments have an external calibration system which enables higher radiometric stability accomplished by moving the secondary mirror between well-defined targets. Sentinel-6 allows continuing the study of the ocean circulation, climate change, and sea-level rise for at least another decade. Besides the external calibration for the AMR heritage radiometer (18.7, 23.8, and 34 GHz channels), the AMR-C contains a high-resolution microwave radiometer (HRMR) with radiometer channels at 90, 130, and 168 GHz. This subsystem allows for a factor of 5× higher spatial resolution at coastal transitions. This article presents a brief description of the instrument and the measured performance of the completed AMR-C-A and AMR-C-B instruments."
example_title: "example 2"
- text: "The Landsat 9 will continue the Landsat data record into its fifth decade with a near-copy build of Landsat 8 with launch scheduled for December 2020. The two instruments on Landsat 9 are Thermal Infrared Sensor-2 (TIRS-2) and Operational Land Imager-2 (OLI-2)."
example_title: "example 3"
inference:
parameters:
aggregation_strategy: "simple"
---
# satellite-instrument-roberta-NER
For details, please visit the [GitHub link](https://github.com/THU-EarthInformationScienceLab/Satellite-Instrument-NER).
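For quick experimentation, a minimal usage sketch (not from the original card) is to load the model in a token-classification pipeline; the aggregation strategy mirrors the inference settings declared in the card metadata:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="m-lin20/satellite-instrument-roberta-NER",
    aggregation_strategy="simple",
)

text = "The two instruments on Landsat 9 are Thermal Infrared Sensor-2 (TIRS-2) and Operational Land Imager-2 (OLI-2)."
print(ner(text))
```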
## Citation
Our [paper](https://www.tandfonline.com/doi/full/10.1080/17538947.2022.2107098) has been published in the International Journal of Digital Earth :
```bibtex
@article{lin2022satellite,
title={Satellite and instrument entity recognition using a pre-trained language model with distant supervision},
author={Lin, Ming and Jin, Meng and Liu, Yufu and Bai, Yuqi},
journal={International Journal of Digital Earth},
volume={15},
number={1},
pages={1290--1304},
year={2022},
publisher={Taylor \& Francis}
}
``` |
m-lin20/satellite-instrument-bert-NER | m-lin20 | 2022-09-22T13:32:42Z | 104 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
language: "pt"
widget:
- text: "Poised for launch in mid-2021, the joint NASA-USGS Landsat 9 mission will continue this important data record. In many respects Landsat 9 is a clone of Landsat-8. The Operational Land Imager-2 (OLI-2) is largely identical to Landsat 8 OLI, providing calibrated imagery covering the solar reflected wavelengths. The Thermal Infrared Sensor-2 (TIRS-2) improves upon Landsat 8 TIRS, addressing known issues including stray light incursion and a malfunction of the instrument scene select mirror. In addition, Landsat 9 adds redundancy to TIRS-2, thus upgrading the instrument to a 5-year design life commensurate with other elements of the mission. Initial performance testing of OLI-2 and TIRS-2 indicate that the instruments are of excellent quality and expected to match or improve on Landsat 8 data quality. "
example_title: "example 1"
- text: "Compared to its predecessor, Jason-3, the two AMR-C radiometer instruments have an external calibration system which enables higher radiometric stability accomplished by moving the secondary mirror between well-defined targets. Sentinel-6 allows continuing the study of the ocean circulation, climate change, and sea-level rise for at least another decade. Besides the external calibration for the AMR heritage radiometer (18.7, 23.8, and 34 GHz channels), the AMR-C contains a high-resolution microwave radiometer (HRMR) with radiometer channels at 90, 130, and 168 GHz. This subsystem allows for a factor of 5× higher spatial resolution at coastal transitions. This article presents a brief description of the instrument and the measured performance of the completed AMR-C-A and AMR-C-B instruments."
example_title: "example 2"
- text: "Landsat 9 will continue the Landsat data record into its fifth decade with a near-copy build of Landsat 8 with launch scheduled for December 2020. The two instruments on Landsat 9 are Thermal Infrared Sensor-2 (TIRS-2) and Operational Land Imager-2 (OLI-2)."
example_title: "example 3"
inference:
parameters:
aggregation_strategy: "first"
---
# satellite-instrument-bert-NER
For details, please visit the [GitHub link](https://github.com/THU-EarthInformationScienceLab/Satellite-Instrument-NER).
## Citation
Our [paper](https://www.tandfonline.com/doi/full/10.1080/17538947.2022.2107098) has been published in the International Journal of Digital Earth :
```bibtex
@article{lin2022satellite,
title={Satellite and instrument entity recognition using a pre-trained language model with distant supervision},
author={Lin, Ming and Jin, Meng and Liu, Yufu and Bai, Yuqi},
journal={International Journal of Digital Earth},
volume={15},
number={1},
pages={1290--1304},
year={2022},
publisher={Taylor \& Francis}
}
``` |
fxmarty/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic | fxmarty | 2022-09-22T13:28:21Z | 3 | 0 | transformers | [
"transformers",
"onnx",
"distilbert",
"text-classification",
"dataset:sst2",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T13:19:36Z | ---
license: apache-2.0
datasets:
- sst2
- glue
---
This model is a fork of https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english, quantized using dynamic Post-Training Quantization (PTQ) with ONNX Runtime and the 🤗 Optimum library.
It achieves 0.901 on the validation set.
To load this model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained("fxmarty/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic")
```
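A possible inference sketch with a standard Transformers pipeline (assuming the tokenizer files are included in this repository):
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "fxmarty/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
model = ORTModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes tokenizer files ship with the repo

onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(onnx_classifier("I love the new design of this product!"))
```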
|
sd-concepts-library/bluebey-2 | sd-concepts-library | 2022-09-22T12:21:34Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T12:21:30Z | ---
license: mit
---
### Bluebey-2 on Stable Diffusion
This is the `<bluebey>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



|
muhtasham/bert-small-finetuned-finer | muhtasham | 2022-09-22T11:50:51Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-25T20:36:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finetuned-finer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-finer
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6137
## Model description
More information needed
## Intended uses & limitations
More information needed
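As a usage sketch (the example sentence below is illustrative only), the masked-language-modelling head can be queried with a `fill-mask` pipeline:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="muhtasham/bert-small-finetuned-finer")

# [MASK] is the mask token of this uncased BERT tokenizer
print(fill_mask("The company reported a net [MASK] of $3.2 million for the quarter."))
```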
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8994 | 1.0 | 2433 | 1.7597 |
| 1.7226 | 2.0 | 4866 | 1.6462 |
| 1.6752 | 3.0 | 7299 | 1.6137 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-small-finetuned-parsed20 | muhtasham | 2022-09-22T11:34:48Z | 179 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-17T13:31:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finetuned-parsed20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-parsed20
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.0763 |
| No log | 2.0 | 8 | 2.8723 |
| No log | 3.0 | 12 | 3.5102 |
| No log | 4.0 | 16 | 2.8641 |
| No log | 5.0 | 20 | 2.7827 |
| No log | 6.0 | 24 | 2.8163 |
| No log | 7.0 | 28 | 3.2415 |
| No log | 8.0 | 32 | 3.0477 |
| No log | 9.0 | 36 | 3.5160 |
| No log | 10.0 | 40 | 3.1248 |
| No log | 11.0 | 44 | 3.2159 |
| No log | 12.0 | 48 | 3.2177 |
| No log | 13.0 | 52 | 2.9108 |
| No log | 14.0 | 56 | 3.3758 |
| No log | 15.0 | 60 | 3.1335 |
| No log | 16.0 | 64 | 2.9753 |
| No log | 17.0 | 68 | 2.9922 |
| No log | 18.0 | 72 | 3.2798 |
| No log | 19.0 | 76 | 2.7280 |
| No log | 20.0 | 80 | 3.1193 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-small-finetuned-parsed-longer50 | muhtasham | 2022-09-22T11:34:27Z | 179 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-17T13:39:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finetuned-finetuned-parsed-longer50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-finetuned-parsed-longer50
This model is a fine-tuned version of [muhtasham/bert-small-finetuned-parsed20](https://huggingface.co/muhtasham/bert-small-finetuned-parsed20) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 2.9807 |
| No log | 2.0 | 8 | 2.7267 |
| No log | 3.0 | 12 | 3.3484 |
| No log | 4.0 | 16 | 2.7573 |
| No log | 5.0 | 20 | 2.7063 |
| No log | 6.0 | 24 | 2.7353 |
| No log | 7.0 | 28 | 3.1290 |
| No log | 8.0 | 32 | 2.9371 |
| No log | 9.0 | 36 | 3.4265 |
| No log | 10.0 | 40 | 3.0537 |
| No log | 11.0 | 44 | 3.1382 |
| No log | 12.0 | 48 | 3.1454 |
| No log | 13.0 | 52 | 2.8379 |
| No log | 14.0 | 56 | 3.2760 |
| No log | 15.0 | 60 | 3.0504 |
| No log | 16.0 | 64 | 2.9001 |
| No log | 17.0 | 68 | 2.8892 |
| No log | 18.0 | 72 | 3.1837 |
| No log | 19.0 | 76 | 2.6404 |
| No log | 20.0 | 80 | 3.0600 |
| No log | 21.0 | 84 | 3.1432 |
| No log | 22.0 | 88 | 2.9608 |
| No log | 23.0 | 92 | 3.0513 |
| No log | 24.0 | 96 | 3.1038 |
| No log | 25.0 | 100 | 3.0975 |
| No log | 26.0 | 104 | 2.8977 |
| No log | 27.0 | 108 | 2.9416 |
| No log | 28.0 | 112 | 2.9015 |
| No log | 29.0 | 116 | 2.7947 |
| No log | 30.0 | 120 | 2.9278 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
thisisHJLee/wav2vec2-large-xls-r-300m-korean-e | thisisHJLee | 2022-09-22T10:38:09Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-22T08:52:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-korean-e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-korean-e
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6640
- Cer: 0.1518
## Model description
More information needed
## Intended uses & limitations
More information needed
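As a usage sketch (the audio path below is a placeholder), transcription can be run with an `automatic-speech-recognition` pipeline:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="thisisHJLee/wav2vec2-large-xls-r-300m-korean-e",
)

# Placeholder path: any 16 kHz mono Korean speech recording
print(asr("korean_sample.wav")["text"])
```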
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3827 | 3.25 | 500 | 3.5391 | 1.0 |
| 3.0633 | 6.49 | 1000 | 2.9854 | 0.8759 |
| 1.2095 | 9.74 | 1500 | 0.8384 | 0.2103 |
| 0.5499 | 12.99 | 2000 | 0.6733 | 0.1689 |
| 0.3815 | 16.23 | 2500 | 0.6778 | 0.1591 |
| 0.3111 | 19.48 | 3000 | 0.6640 | 0.1518 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.0
|
mahaveer/ppo-LunarLander-v2 | mahaveer | 2022-09-22T10:11:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-22T09:57:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 194.40 +/- 31.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename inside the repo is assumed and may differ):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained agent from the Hub (the filename below is an assumption)
checkpoint = load_from_hub("mahaveer/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GItaf/gpt2-gpt2-TF-weight1-epoch10 | GItaf | 2022-09-22T09:36:24Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-21T08:05:36Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-TF-weight1-epoch10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-TF-weight1-epoch10
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GItaf/roberta-base-roberta-base-TF-weight1-epoch10 | GItaf | 2022-09-22T09:35:57Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-22T09:34:27Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-roberta-base-TF-weight1-epoch10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-roberta-base-TF-weight1-epoch10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GItaf/roberta-base-roberta-base-TF-weight1-epoch5 | GItaf | 2022-09-22T09:32:53Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-22T09:31:40Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-roberta-base-TF-weight1-epoch5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-roberta-base-TF-weight1-epoch5
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GItaf/roberta-base-roberta-base-TF-weight1-epoch15 | GItaf | 2022-09-22T09:23:00Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-21T15:32:11Z | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-base-roberta-base-TF-weight1-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-roberta-base-TF-weight1-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8322
- Cls loss: 0.6900
- Lm loss: 4.1423
- Cls Accuracy: 0.5401
- Cls F1: 0.3788
- Cls Precision: 0.2917
- Cls Recall: 0.5401
- Perplexity: 62.95
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 5.3158 | 1.0 | 3470 | 4.9858 | 0.6910 | 4.2949 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 73.32 |
| 4.9772 | 2.0 | 6940 | 4.8876 | 0.6956 | 4.1920 | 0.4599 | 0.2898 | 0.2115 | 0.4599 | 66.15 |
| 4.8404 | 3.0 | 10410 | 4.8454 | 0.6901 | 4.1553 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 63.77 |
| 4.7439 | 4.0 | 13880 | 4.8177 | 0.6904 | 4.1274 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.02 |
| 4.6667 | 5.0 | 17350 | 4.8065 | 0.6903 | 4.1163 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.33 |
| 4.6018 | 6.0 | 20820 | 4.8081 | 0.6963 | 4.1119 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.06 |
| 4.5447 | 7.0 | 24290 | 4.8089 | 0.6912 | 4.1177 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.42 |
| 4.4944 | 8.0 | 27760 | 4.8128 | 0.6900 | 4.1228 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.73 |
| 4.4505 | 9.0 | 31230 | 4.8152 | 0.6905 | 4.1248 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.85 |
| 4.4116 | 10.0 | 34700 | 4.8129 | 0.6908 | 4.1221 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.69 |
| 4.3787 | 11.0 | 38170 | 4.8146 | 0.6906 | 4.1241 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.81 |
| 4.3494 | 12.0 | 41640 | 4.8229 | 0.6900 | 4.1329 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.36 |
| 4.3253 | 13.0 | 45110 | 4.8287 | 0.6900 | 4.1388 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.73 |
| 4.3075 | 14.0 | 48580 | 4.8247 | 0.6900 | 4.1347 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.47 |
| 4.2936 | 15.0 | 52050 | 4.8322 | 0.6900 | 4.1423 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.95 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
GItaf/gpt2-gpt2-TF-weight1-epoch15 | GItaf | 2022-09-22T09:21:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-21T15:31:41Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-TF-weight1-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-TF-weight1-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0647
- Cls loss: 2.1295
- Lm loss: 3.9339
- Cls Accuracy: 0.8375
- Cls F1: 0.8368
- Cls Precision: 0.8381
- Cls Recall: 0.8375
- Perplexity: 51.11
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 4.8702 | 1.0 | 3470 | 4.7157 | 0.6951 | 4.0201 | 0.7752 | 0.7670 | 0.7978 | 0.7752 | 55.71 |
| 4.5856 | 2.0 | 6940 | 4.6669 | 0.6797 | 3.9868 | 0.8352 | 0.8333 | 0.8406 | 0.8352 | 53.88 |
| 4.4147 | 3.0 | 10410 | 4.6619 | 0.6899 | 3.9716 | 0.8375 | 0.8368 | 0.8384 | 0.8375 | 53.07 |
| 4.2479 | 4.0 | 13880 | 4.8305 | 0.8678 | 3.9622 | 0.8403 | 0.8396 | 0.8413 | 0.8403 | 52.57 |
| 4.1281 | 5.0 | 17350 | 4.9349 | 0.9747 | 3.9596 | 0.8340 | 0.8334 | 0.8346 | 0.8340 | 52.44 |
| 4.195 | 6.0 | 20820 | 4.9303 | 0.9770 | 3.9528 | 0.8300 | 0.8299 | 0.8299 | 0.8300 | 52.08 |
| 4.0645 | 7.0 | 24290 | 5.0425 | 1.0979 | 3.9440 | 0.8317 | 0.8313 | 0.8317 | 0.8317 | 51.62 |
| 3.9637 | 8.0 | 27760 | 5.3955 | 1.4533 | 3.9414 | 0.8329 | 0.8325 | 0.8328 | 0.8329 | 51.49 |
| 3.9094 | 9.0 | 31230 | 5.6029 | 1.6645 | 3.9375 | 0.8231 | 0.8233 | 0.8277 | 0.8231 | 51.29 |
| 3.8661 | 10.0 | 34700 | 5.8175 | 1.8821 | 3.9344 | 0.8144 | 0.8115 | 0.8222 | 0.8144 | 51.13 |
| 3.8357 | 11.0 | 38170 | 5.6824 | 1.7494 | 3.9319 | 0.8340 | 0.8336 | 0.8342 | 0.8340 | 51.01 |
| 3.8019 | 12.0 | 41640 | 5.8509 | 1.9167 | 3.9332 | 0.8369 | 0.8357 | 0.8396 | 0.8369 | 51.07 |
| 3.7815 | 13.0 | 45110 | 5.9044 | 1.9686 | 3.9346 | 0.8409 | 0.8407 | 0.8408 | 0.8409 | 51.14 |
| 3.7662 | 14.0 | 48580 | 6.0088 | 2.0738 | 3.9337 | 0.8363 | 0.8359 | 0.8364 | 0.8363 | 51.10 |
| 3.7524 | 15.0 | 52050 | 6.0647 | 2.1295 | 3.9339 | 0.8375 | 0.8368 | 0.8381 | 0.8375 | 51.11 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
chintagunta85/electramed-small-deid2014-ner-v5-classweights | chintagunta85 | 2022-09-22T09:08:27Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:i2b22014",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-22T07:48:30Z | ---
tags:
- generated_from_trainer
datasets:
- i2b22014
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-deid2014-ner-v5-classweights
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: i2b22014
type: i2b22014
config: i2b22014-deid
split: train
args: i2b22014-deid
metrics:
- name: Precision
type: precision
value: 0.8832236842105263
- name: Recall
type: recall
value: 0.6910561632502987
- name: F1
type: f1
value: 0.7754112732711052
- name: Accuracy
type: accuracy
value: 0.9883040491052534
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-deid2014-ner-v5-classweights
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the i2b22014 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Precision: 0.8832
- Recall: 0.6911
- F1: 0.7754
- Accuracy: 0.9883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0001 | 1.0 | 1838 | 0.0008 | 0.7702 | 0.3780 | 0.5071 | 0.9771 |
| 0.0 | 2.0 | 3676 | 0.0007 | 0.8753 | 0.5671 | 0.6883 | 0.9827 |
| 0.0 | 3.0 | 5514 | 0.0006 | 0.8074 | 0.4128 | 0.5463 | 0.9775 |
| 0.0 | 4.0 | 7352 | 0.0007 | 0.8693 | 0.6102 | 0.7170 | 0.9848 |
| 0.0 | 5.0 | 9190 | 0.0006 | 0.8710 | 0.6022 | 0.7121 | 0.9849 |
| 0.0 | 6.0 | 11028 | 0.0007 | 0.8835 | 0.6547 | 0.7521 | 0.9867 |
| 0.0 | 7.0 | 12866 | 0.0009 | 0.8793 | 0.6661 | 0.7579 | 0.9873 |
| 0.0 | 8.0 | 14704 | 0.0008 | 0.8815 | 0.6740 | 0.7639 | 0.9876 |
| 0.0 | 9.0 | 16542 | 0.0009 | 0.8812 | 0.6851 | 0.7709 | 0.9880 |
| 0.0 | 10.0 | 18380 | 0.0009 | 0.8832 | 0.6911 | 0.7754 | 0.9883 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
prakashkmr48/Prompt-image-inpainting | prakashkmr48 | 2022-09-22T08:58:57Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-09-22T08:51:46Z | git lfs install
git clone https://huggingface.co/prakashkmr48/Prompt-image-inpainting |
Hoax0930/kyoto_marian | Hoax0930 | 2022-09-22T08:32:43Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-09-22T07:47:04Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: kyoto_marian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kyoto_marian
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1941
- Bleu: 13.4500
## Model description
More information needed
## Intended uses & limitations
More information needed
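A usage sketch, assuming the fine-tuned model keeps the English-to-Japanese direction of its base checkpoint:
```python
from transformers import pipeline

# Direction (en -> ja) is assumed from the base model, not confirmed by this card
translator = pipeline("translation", model="Hoax0930/kyoto_marian")
print(translator("The temple gardens in Kyoto are especially beautiful in autumn.")[0]["translation_text"])
```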
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/test2 | sd-concepts-library | 2022-09-22T06:29:49Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T06:29:45Z | ---
license: mit
---
### TEST2 on Stable Diffusion
This is the `<AIOCARD>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










|
airakoze/Lab04 | airakoze | 2022-09-22T05:34:23Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-09-22T03:48:36Z | ---
title: Housing price prediction in California
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.0.4
app_file: app.py
pinned: false
--- |
sd-concepts-library/bee | sd-concepts-library | 2022-09-22T05:01:07Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T05:00:56Z | ---
license: mit
---
### BEE on Stable Diffusion
This is the `<b-e-e>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/yinit | sd-concepts-library | 2022-09-22T04:58:38Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T04:58:24Z | ---
license: mit
---
### yinit on Stable Diffusion
This is the `yinit-dropcap` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:


























|
sd-concepts-library/ibere-thenorio | sd-concepts-library | 2022-09-22T04:52:22Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T04:52:08Z | ---
license: mit
---
### Iberê Thenório on Stable Diffusion
This is the `<ibere-thenorio>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
ashiqabdulkhader/GPT2-Poet | ashiqabdulkhader | 2022-09-22T03:24:00Z | 381 | 3 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-22T02:45:20Z | ---
license: bigscience-bloom-rail-1.0
widget :
- text: "I used to have a lover"
example_title: "I used to have a lover"
- text : "The old cupola glinted above the clouds"
example_title: "The old cupola"
- text : "Behind the silo, the Mother Rabbit hunches"
example_title : "Behind the silo"
---
# GPT2-Poet
## Model description
GPT2-Poet is a GPT-2 transformer model fine-tuned on a large corpus of English poems in a self-supervised fashion. This means it was trained on the raw texts only, with no human labelling; an automatic process generates the inputs and labels from those texts. In short, it was trained to guess the next word in sentences.
More precisely, the inputs are sequences of continuous text of a certain length and the targets are the same sequences shifted one token (a word or piece of a word) to the right. The model internally uses a masking mechanism to make sure the prediction for token i only uses the inputs from 1 to i and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
## Usage
You can use this model for English Poem generation:
```python
>>> from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
>>> tokenizer = GPT2Tokenizer.from_pretrained("ashiqabdulkhader/GPT2-Poet")
>>> model = TFGPT2LMHeadModel.from_pretrained("ashiqabdulkhader/GPT2-Poet")
>>> text = "The quick brown fox"
>>> input_ids = tokenizer.encode(text, return_tensors='tf')
>>> sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=100,
top_k=0,
top_p=0.9,
temperature=1.0,
num_return_sequences=3
)
>>> print("Output:", tokenizer.decode(sample_outputs[0], skip_special_tokens=True))
```
|
bdotloh/distilbert-base-uncased-go-emotion-empathetic-dialogues-context-v2 | bdotloh | 2022-09-22T03:14:36Z | 106 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"emotion-classification",
"en",
"dataset:go-emotions",
"dataset:bdotloh/empathetic-dialogues-contexts",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-21T07:35:25Z | ---
language: en
tags:
- emotion-classification
datasets:
- go-emotions
- bdotloh/empathetic-dialogues-contexts
---
# Model Description
We performed transfer learning experiments on a distilbert-base-uncased model fine-tuned on the GoEmotions dataset for the purpose of classifying [(emotional) contexts in the Empathetic Dialogues dataset](https://huggingface.co/datasets/bdotloh/empathetic-dialogues-contexts).
The fine-tuned distilbert-base-uncased can be found [here](https://huggingface.co/bhadresh-savani/bert-base-go-emotion).
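As a usage sketch (not part of the original card; the label names come from the fine-tuned head's config), the classifier can be queried with a text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bdotloh/distilbert-base-uncased-go-emotion-empathetic-dialogues-context-v2",
    top_k=3,  # return the three highest-scoring emotion labels
)

print(classifier("I finally heard back about the job, and I got it!"))
```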
### Limitations and bias
GoEmotions:
1) Demographics of Reddit Users
2) Imbalanced class distribution
3) ...
EmpatheticDialogues:
1) Unable to ascertain the degree of cultural specificity for the context that a respondent described when given an emotion label (i.e., p(description | emotion, *culture*))
2) ...
## Training data
## Training procedure
### Preprocessing
## Evaluation results
|
sd-concepts-library/char-con | sd-concepts-library | 2022-09-22T02:54:22Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T02:54:17Z | ---
license: mit
---
### char-con on Stable Diffusion
This is the `<char-con>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:







|
yuntian-deng/latex2im_ss | yuntian-deng | 2022-09-22T02:20:24Z | 1 | 0 | diffusers | [
"diffusers",
"en",
"dataset:yuntian-deng/im2latex-100k",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-22T02:19:32Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: yuntian-deng/im2latex-100k
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# latex2im_ss
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `yuntian-deng/im2latex-100k` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
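Until the snippet above is filled in, the following sketch shows how an unconditional `DDPMPipeline` checkpoint is typically sampled (assumed usage, not verified against this repository):
```python
import torch
from diffusers import DDPMPipeline

# Assumes this repo follows the standard unconditional DDPMPipeline layout
pipeline = DDPMPipeline.from_pretrained("yuntian-deng/latex2im_ss")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# Sample one rendered-formula image
image = pipeline(num_inference_steps=1000).images[0]
image.save("generated_formula.png")
```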
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/yuntian-deng/latex2im_ss/tensorboard?#scalars)
|