modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string, 462 classes) | tags (sequence) | pipeline_tag (string, 54 classes) | createdAt (timestamp, UTC) | card (string) |
---|---|---|---|---|---|---|---|---|---|
NasimB/gpt2-concat-aochildes-len-16k-punc-dot | NasimB | 2023-07-06T11:58:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T10:05:29Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-len-16k-punc-dot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-len-16k-punc-dot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1868
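The card does not include a usage example; a minimal sketch for sampling from this checkpoint with the `transformers` text-generation pipeline (the prompt is an arbitrary placeholder) could look like this:
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub
generator = pipeline("text-generation", model="NasimB/gpt2-concat-aochildes-len-16k-punc-dot")

# Sample a short continuation from an arbitrary prompt
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```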
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7347 | 0.29 | 500 | 5.6594 |
| 5.3783 | 0.59 | 1000 | 5.2121 |
| 5.0252 | 0.88 | 1500 | 4.9610 |
| 4.7546 | 1.18 | 2000 | 4.8238 |
| 4.5897 | 1.47 | 2500 | 4.6965 |
| 4.4789 | 1.77 | 3000 | 4.5879 |
| 4.3473 | 2.06 | 3500 | 4.5156 |
| 4.1614 | 2.35 | 4000 | 4.4620 |
| 4.1298 | 2.65 | 4500 | 4.4035 |
| 4.0926 | 2.94 | 5000 | 4.3498 |
| 3.873 | 3.24 | 5500 | 4.3486 |
| 3.8259 | 3.53 | 6000 | 4.3189 |
| 3.809 | 3.83 | 6500 | 4.2819 |
| 3.6844 | 4.12 | 7000 | 4.2885 |
| 3.5391 | 4.41 | 7500 | 4.2779 |
| 3.5315 | 4.71 | 8000 | 4.2655 |
| 3.5178 | 5.0 | 8500 | 4.2534 |
| 3.3396 | 5.3 | 9000 | 4.2694 |
| 3.3435 | 5.59 | 9500 | 4.2672 |
| 3.3344 | 5.89 | 10000 | 4.2660 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Bugsys0302/undressav05 | Bugsys0302 | 2023-07-06T11:56:51Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T11:52:15Z | ---
license: creativeml-openrail-m
---
|
tom192180/distilbert-base-uncased-finetuned-squad | tom192180 | 2023-07-06T11:49:51Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-06T09:37:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2458
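For illustration (not part of the original card), the checkpoint can be queried with the standard question-answering pipeline; the question and context below are arbitrary examples:
```python
from transformers import pipeline

# Load the fine-tuned extractive QA model
qa = pipeline("question-answering", model="tom192180/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model distilbert-base-uncased was fine-tuned on the SQuAD dataset for two epochs.",
)
print(result["answer"], result["score"])
```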
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 57 | 3.5390 |
| No log | 2.0 | 114 | 3.2458 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent | jordyvl | 2023-07-06T11:45:36Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T09:34:33Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3493
- Accuracy: 0.645
- Exit 0 Accuracy: 0.1125
- Exit 1 Accuracy: 0.155
- Exit 2 Accuracy: 0.3775
- Exit 3 Accuracy: 0.5225
- Exit 4 Accuracy: 0.5875
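No usage example is given; the sketch below shows document classification with the stock LayoutLMv3 classes. It assumes the checkpoint loads with the standard sequence-classification head (the exit accuracies suggest an early-exit variant, so any extra weights may simply be ignored with a warning), that `pytesseract` is installed for the processor's built-in OCR, and that `document.png` is a placeholder path:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForSequenceClassification

repo = "jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent"
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForSequenceClassification.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")   # runs OCR and builds words, boxes, pixel_values

logits = model(**encoding).logits
print(model.config.id2label[logits.argmax(-1).item()])
```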
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7604 | 0.1075 | 0.09 | 0.0675 | 0.1075 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7329 | 0.1125 | 0.0725 | 0.065 | 0.13 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6989 | 0.1325 | 0.08 | 0.06 | 0.1375 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6608 | 0.17 | 0.08 | 0.0575 | 0.1375 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6201 | 0.19 | 0.09 | 0.0525 | 0.1175 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5813 | 0.2175 | 0.095 | 0.0825 | 0.1125 | 0.0675 | 0.0625 |
| No log | 6.72 | 14 | 2.5503 | 0.215 | 0.0925 | 0.08 | 0.12 | 0.0825 | 0.0625 |
| No log | 7.72 | 16 | 2.5289 | 0.23 | 0.09 | 0.0925 | 0.15 | 0.1025 | 0.0625 |
| No log | 8.72 | 18 | 2.5344 | 0.245 | 0.0975 | 0.1 | 0.165 | 0.105 | 0.0675 |
| No log | 9.72 | 20 | 2.5533 | 0.265 | 0.1 | 0.0975 | 0.185 | 0.09 | 0.1025 |
| No log | 10.72 | 22 | 2.4567 | 0.29 | 0.0975 | 0.13 | 0.2 | 0.1 | 0.095 |
| No log | 11.72 | 24 | 2.3982 | 0.3 | 0.1 | 0.12 | 0.205 | 0.1125 | 0.09 |
| No log | 12.72 | 26 | 2.3722 | 0.3075 | 0.1025 | 0.1175 | 0.195 | 0.13 | 0.0825 |
| No log | 13.72 | 28 | 2.3546 | 0.31 | 0.105 | 0.1225 | 0.1825 | 0.1425 | 0.085 |
| No log | 14.72 | 30 | 2.3287 | 0.315 | 0.11 | 0.125 | 0.195 | 0.1775 | 0.095 |
| No log | 15.72 | 32 | 2.2970 | 0.32 | 0.1075 | 0.13 | 0.2175 | 0.2275 | 0.1 |
| No log | 16.72 | 34 | 2.2763 | 0.325 | 0.1075 | 0.14 | 0.225 | 0.2375 | 0.1075 |
| No log | 17.72 | 36 | 2.3456 | 0.3075 | 0.105 | 0.14 | 0.2375 | 0.18 | 0.1275 |
| No log | 18.72 | 38 | 2.3160 | 0.325 | 0.115 | 0.14 | 0.24 | 0.175 | 0.16 |
| No log | 19.72 | 40 | 2.2257 | 0.33 | 0.1225 | 0.14 | 0.245 | 0.225 | 0.17 |
| No log | 20.72 | 42 | 2.1769 | 0.355 | 0.125 | 0.1425 | 0.26 | 0.2725 | 0.135 |
| No log | 21.72 | 44 | 2.1449 | 0.355 | 0.125 | 0.14 | 0.2725 | 0.3125 | 0.1175 |
| No log | 22.72 | 46 | 2.1200 | 0.3675 | 0.125 | 0.1425 | 0.27 | 0.3125 | 0.115 |
| No log | 23.72 | 48 | 2.0995 | 0.3725 | 0.1225 | 0.1425 | 0.2625 | 0.31 | 0.115 |
| No log | 24.72 | 50 | 2.0769 | 0.3825 | 0.12 | 0.1425 | 0.2725 | 0.3375 | 0.1125 |
| No log | 25.72 | 52 | 2.0473 | 0.3975 | 0.115 | 0.14 | 0.285 | 0.335 | 0.1325 |
| No log | 26.72 | 54 | 2.0094 | 0.4075 | 0.115 | 0.14 | 0.2925 | 0.3075 | 0.1525 |
| No log | 27.72 | 56 | 1.9660 | 0.435 | 0.1175 | 0.14 | 0.29 | 0.2725 | 0.21 |
| No log | 28.72 | 58 | 1.9271 | 0.46 | 0.11 | 0.1425 | 0.3025 | 0.27 | 0.235 |
| No log | 29.72 | 60 | 1.8910 | 0.4825 | 0.11 | 0.145 | 0.305 | 0.27 | 0.2525 |
| No log | 30.72 | 62 | 1.8619 | 0.475 | 0.11 | 0.1475 | 0.3 | 0.2875 | 0.27 |
| No log | 31.72 | 64 | 1.8215 | 0.5025 | 0.11 | 0.15 | 0.3025 | 0.305 | 0.325 |
| No log | 32.72 | 66 | 1.7845 | 0.52 | 0.1125 | 0.15 | 0.3175 | 0.3225 | 0.3625 |
| No log | 33.72 | 68 | 1.7509 | 0.5375 | 0.1125 | 0.15 | 0.325 | 0.3525 | 0.3975 |
| No log | 34.72 | 70 | 1.7237 | 0.545 | 0.1075 | 0.15 | 0.3325 | 0.365 | 0.4275 |
| No log | 35.72 | 72 | 1.6970 | 0.555 | 0.11 | 0.15 | 0.3275 | 0.4 | 0.4475 |
| No log | 36.72 | 74 | 1.6512 | 0.57 | 0.1075 | 0.15 | 0.3225 | 0.4125 | 0.465 |
| No log | 37.72 | 76 | 1.6212 | 0.5875 | 0.11 | 0.1525 | 0.3375 | 0.42 | 0.4775 |
| No log | 38.72 | 78 | 1.5995 | 0.595 | 0.1125 | 0.15 | 0.34 | 0.4275 | 0.4975 |
| No log | 39.72 | 80 | 1.5713 | 0.5925 | 0.115 | 0.15 | 0.35 | 0.4375 | 0.525 |
| No log | 40.72 | 82 | 1.5551 | 0.5875 | 0.115 | 0.15 | 0.3525 | 0.4375 | 0.5325 |
| No log | 41.72 | 84 | 1.5276 | 0.59 | 0.115 | 0.15 | 0.35 | 0.4575 | 0.5425 |
| No log | 42.72 | 86 | 1.5050 | 0.5925 | 0.115 | 0.15 | 0.355 | 0.46 | 0.5425 |
| No log | 43.72 | 88 | 1.4871 | 0.595 | 0.1125 | 0.1525 | 0.3625 | 0.47 | 0.5625 |
| No log | 44.72 | 90 | 1.4712 | 0.5975 | 0.1125 | 0.1525 | 0.3675 | 0.4775 | 0.5525 |
| No log | 45.72 | 92 | 1.4615 | 0.5975 | 0.1125 | 0.155 | 0.365 | 0.4825 | 0.555 |
| No log | 46.72 | 94 | 1.4449 | 0.6075 | 0.1125 | 0.155 | 0.3625 | 0.4875 | 0.5575 |
| No log | 47.72 | 96 | 1.4273 | 0.6175 | 0.1125 | 0.155 | 0.365 | 0.5025 | 0.565 |
| No log | 48.72 | 98 | 1.4127 | 0.6225 | 0.1125 | 0.155 | 0.365 | 0.505 | 0.5725 |
| No log | 49.72 | 100 | 1.4005 | 0.63 | 0.1125 | 0.155 | 0.3675 | 0.5125 | 0.575 |
| No log | 50.72 | 102 | 1.3925 | 0.625 | 0.1125 | 0.155 | 0.37 | 0.5125 | 0.5725 |
| No log | 51.72 | 104 | 1.3847 | 0.6325 | 0.1125 | 0.155 | 0.38 | 0.5175 | 0.57 |
| No log | 52.72 | 106 | 1.3772 | 0.64 | 0.1125 | 0.155 | 0.38 | 0.515 | 0.57 |
| No log | 53.72 | 108 | 1.3679 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.52 | 0.5825 |
| No log | 54.72 | 110 | 1.3595 | 0.6475 | 0.1125 | 0.155 | 0.3775 | 0.525 | 0.5825 |
| No log | 55.72 | 112 | 1.3544 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.58 |
| No log | 56.72 | 114 | 1.3515 | 0.6425 | 0.1125 | 0.155 | 0.375 | 0.52 | 0.5875 |
| No log | 57.72 | 116 | 1.3500 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.52 | 0.5925 |
| No log | 58.72 | 118 | 1.3495 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.59 |
| No log | 59.72 | 120 | 1.3493 | 0.645 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.5875 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BadreddineHug/donut-base-ocr8 | BadreddineHug | 2023-07-06T11:44:55Z | 72 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-06T11:39:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-ocr8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-ocr8
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
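The card has no usage section; a hedged sketch with the standard Donut classes follows. The task prompt used during fine-tuning is not documented, so `<s>` is only a guess, and `receipt.png` is an arbitrary file name:
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "BadreddineHug/donut-base-ocr8"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("receipt.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# The fine-tuning task prompt is unknown; "<s>" is an assumption.
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```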
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ashish9947/open_llama_7b_tech_support | Ashish9947 | 2023-07-06T11:40:31Z | 3 | 1 | peft | [
"peft",
"region:us"
] | null | 2023-07-06T11:37:10Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
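The card lists only the quantization config and PEFT version; a minimal loading sketch is given below. The base model is an assumption inferred from the repository name (`openlm-research/open_llama_7b`) and is not stated in the card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Recreate the 4-bit (nf4, double-quant, bfloat16 compute) config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "openlm-research/open_llama_7b"  # assumption: base model inferred from the repo name
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "Ashish9947/open_llama_7b_tech_support")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```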
|
vineetsharma/speecht5_finetuned_voxpopuli_nl | vineetsharma | 2023-07-06T11:38:52Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-06T08:55:10Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4609
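As an illustration (not part of the original card), a minimal synthesis sketch with the SpeechT5 classes is shown below; the all-zero speaker embedding is only a placeholder, and a real 512-dimensional x-vector gives a much more natural voice:
```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "vineetsharma/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real x-vector for better quality
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```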
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5219 | 4.3 | 1000 | 0.4787 |
| 0.5047 | 8.61 | 2000 | 0.4660 |
| 0.4922 | 12.91 | 3000 | 0.4621 |
| 0.4898 | 17.21 | 4000 | 0.4609 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Binaryy/llama_travel_test | Binaryy | 2023-07-06T11:38:27Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-06T11:37:12Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
Bugsys0302/undbob | Bugsys0302 | 2023-07-06T11:36:05Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T11:26:03Z | ---
license: creativeml-openrail-m
---
|
NasimB/gpt2-concat-aochildes-len-16plus3k | NasimB | 2023-07-06T11:23:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T09:25:04Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-len-16plus3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-len-16plus3k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.724 | 0.29 | 500 | 5.6363 |
| 5.3775 | 0.59 | 1000 | 5.2004 |
| 5.0346 | 0.88 | 1500 | 4.9510 |
| 4.7464 | 1.18 | 2000 | 4.8047 |
| 4.5856 | 1.47 | 2500 | 4.6783 |
| 4.4827 | 1.77 | 3000 | 4.5731 |
| 4.3449 | 2.06 | 3500 | 4.5046 |
| 4.1625 | 2.36 | 4000 | 4.4513 |
| 4.1272 | 2.65 | 4500 | 4.3964 |
| 4.0896 | 2.95 | 5000 | 4.3426 |
| 3.8678 | 3.24 | 5500 | 4.3447 |
| 3.8287 | 3.54 | 6000 | 4.3129 |
| 3.8096 | 3.83 | 6500 | 4.2830 |
| 3.6796 | 4.12 | 7000 | 4.2909 |
| 3.5376 | 4.42 | 7500 | 4.2842 |
| 3.5279 | 4.71 | 8000 | 4.2744 |
| 3.511 | 5.01 | 8500 | 4.2679 |
| 3.3374 | 5.3 | 9000 | 4.2774 |
| 3.3374 | 5.6 | 9500 | 4.2775 |
| 3.3392 | 5.89 | 10000 | 4.2771 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
m-aliabbas1/dqn-SpaceInvadersNoFrameskip-v4 | m-aliabbas1 | 2023-07-06T11:17:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T11:16:52Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 807.50 +/- 374.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga m-aliabbas1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga m-aliabbas1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga m-aliabbas1
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
maxkskhor/ppo-SnowballTarget | maxkskhor | 2023-07-06T11:09:41Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-06T11:09:35Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: maxkskhor/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
asenella/mmnist_JNFconfig_resnet_seed_0_ratio_0_c | asenella | 2023-07-06T11:06:29Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-04T20:51:08Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Zain6699/intent-classifier-establish_credibility | Zain6699 | 2023-07-06T11:03:55Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T11:02:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: intent-classifier-establish_credibility
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent-classifier-establish_credibility
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0714
- Accuracy: 0.9854
- F1: 0.9581
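No usage snippet is provided; a minimal sketch with the text-classification pipeline (the example sentence is an arbitrary placeholder) could look like this:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Zain6699/intent-classifier-establish_credibility",
)
print(classifier("I have helped over 200 companies scale their sales pipeline."))
```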
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Zain6699/intent-classifier-common_ground | Zain6699 | 2023-07-06T11:02:19Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T11:00:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: intent-classifier-common_ground
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent-classifier-common_ground
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0360
- Accuracy: 0.9938
- F1: 0.9825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
arham061/finance-alpaca | arham061 | 2023-07-06T11:01:10Z | 134 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T10:26:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finance-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finance-alpaca
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
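For readers reproducing the run, the hyperparameters listed above map roughly onto the following `TrainingArguments`; this is a reconstruction sketch, not the author's original training script:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters
training_args = TrainingArguments(
    output_dir="finance-alpaca",          # hypothetical output directory
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,        # 32 * 8 = 256 total train batch size
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=3,
)
```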
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Zain6699/intent-classifier-call_to_action | Zain6699 | 2023-07-06T11:00:48Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T10:59:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: intent-classifier-call_to_action
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent-classifier-call_to_action
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0810
- Accuracy: 0.9875
- F1: 0.9639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RogerB/afro-xlmr-base-finetuned-kintweetsB | RogerB | 2023-07-06T10:59:26Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T09:53:42Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-base-finetuned-kintweetsB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-finetuned-kintweetsB
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1700
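As an illustration (not part of the original card), the checkpoint can be exercised with the fill-mask pipeline; XLM-R models use `<mask>` as the mask token, and the example sentence is an arbitrary placeholder:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="RogerB/afro-xlmr-base-finetuned-kintweetsB")
for prediction in fill_mask("Kigali is the capital of <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```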
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4711 | 1.0 | 900 | 2.2431 |
| 2.3238 | 2.0 | 1800 | 2.2116 |
| 2.2725 | 3.0 | 2700 | 2.1590 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Zain6699/intent-classifier-personalization | Zain6699 | 2023-07-06T10:59:17Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T10:57:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: intent-classifier-personalization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent-classifier-personalization
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0776
- Accuracy: 0.9833
- F1: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Zain6699/intent-classifier-incentive_for_connecting | Zain6699 | 2023-07-06T10:57:47Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T10:56:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: intent-classifier-incentive_for_connecting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent-classifier-incentive_for_connecting
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0398
- Accuracy: 0.9917
- F1: 0.9740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HeshamMamdouh/mt5-small-sum-fine-tuned | HeshamMamdouh | 2023-07-06T10:56:23Z | 61 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-06T10:54:24Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mt5-small-sum-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mt5-small-sum-fine-tuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4015
- Validation Loss: 1.8725
- Epoch: 74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 52.1786 | 49.3355 | 0 |
| 47.3638 | 45.1305 | 1 |
| 43.6563 | 42.4522 | 2 |
| 41.1214 | 39.5774 | 3 |
| 38.3601 | 37.3437 | 4 |
| 35.8017 | 34.8478 | 5 |
| 32.6174 | 32.5370 | 6 |
| 30.4399 | 30.7220 | 7 |
| 28.8299 | 29.1744 | 8 |
| 27.1342 | 26.7656 | 9 |
| 25.2765 | 24.9835 | 10 |
| 23.8467 | 23.1296 | 11 |
| 22.4239 | 21.5926 | 12 |
| 21.1438 | 20.8646 | 13 |
| 20.5646 | 21.1405 | 14 |
| 18.9753 | 20.3101 | 15 |
| 18.8306 | 19.6189 | 16 |
| 17.6935 | 18.5195 | 17 |
| 17.0993 | 17.4238 | 18 |
| 16.1595 | 16.1143 | 19 |
| 15.4946 | 15.2814 | 20 |
| 15.0521 | 14.1193 | 21 |
| 14.1677 | 13.0559 | 22 |
| 13.7239 | 12.5135 | 23 |
| 12.8212 | 11.2606 | 24 |
| 12.3333 | 10.5911 | 25 |
| 11.5663 | 9.7681 | 26 |
| 11.2357 | 9.7545 | 27 |
| 10.3757 | 8.6039 | 28 |
| 10.2910 | 8.3155 | 29 |
| 9.5480 | 7.9911 | 30 |
| 9.1881 | 7.5866 | 31 |
| 8.7798 | 7.2611 | 32 |
| 8.1529 | 6.9730 | 33 |
| 7.7057 | 6.6302 | 34 |
| 7.6724 | 6.2149 | 35 |
| 7.1820 | 5.9264 | 36 |
| 6.8348 | 5.9113 | 37 |
| 6.6185 | 5.7169 | 38 |
| 6.3897 | 5.2028 | 39 |
| 6.0808 | 4.8902 | 40 |
| 6.0517 | 4.5248 | 41 |
| 5.4217 | 4.1892 | 42 |
| 5.2464 | 4.1719 | 43 |
| 5.0986 | 4.1922 | 44 |
| 4.6939 | 3.9863 | 45 |
| 4.7763 | 3.7674 | 46 |
| 4.5684 | 3.4746 | 47 |
| 4.2996 | 3.1692 | 48 |
| 4.3434 | 3.0116 | 49 |
| 4.1290 | 2.9261 | 50 |
| 3.8491 | 2.8621 | 51 |
| 4.0837 | 2.7301 | 52 |
| 3.7118 | 2.6694 | 53 |
| 3.6294 | 2.6649 | 54 |
| 3.5421 | 2.6036 | 55 |
| 3.3884 | 2.8563 | 56 |
| 3.3752 | 2.4984 | 57 |
| 3.4596 | 2.4091 | 58 |
| 3.2075 | 2.4850 | 59 |
| 3.2646 | 2.3415 | 60 |
| 2.9473 | 2.3363 | 61 |
| 2.9364 | 2.2778 | 62 |
| 2.9130 | 2.2466 | 63 |
| 2.8123 | 2.1061 | 64 |
| 2.9697 | 2.1859 | 65 |
| 2.9565 | 2.0596 | 66 |
| 2.7610 | 2.2746 | 67 |
| 2.7636 | 2.2090 | 68 |
| 2.5776 | 2.0910 | 69 |
| 2.5245 | 1.9330 | 70 |
| 2.5848 | 1.9169 | 71 |
| 2.4724 | 1.8993 | 72 |
| 2.6195 | 1.8815 | 73 |
| 2.4015 | 1.8725 | 74 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.11.0
- Datasets 2.13.1
- Tokenizers 0.12.1
|
cerindam30/tugas_akhir | cerindam30 | 2023-07-06T10:56:16Z | 30 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-02T08:20:21Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: tugas_akhir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tugas_akhir
This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- label_smoothing_factor: 0.1
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nikolamilosevic/distil_bert_uncased-finetuned-relations | nikolamilosevic | 2023-07-06T10:55:05Z | 152 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-14T11:08:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distil_bert_uncased-finetuned-relations
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil_bert_uncased-finetuned-relations
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4191
- Accuracy: 0.8866
- Prec: 0.8771
- Recall: 0.8866
- F1: 0.8808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Prec | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|
| 1.1823 | 1.0 | 232 | 0.5940 | 0.8413 | 0.8273 | 0.8413 | 0.8224 |
| 0.4591 | 2.0 | 464 | 0.4600 | 0.8607 | 0.8539 | 0.8607 | 0.8555 |
| 0.3106 | 3.0 | 696 | 0.4160 | 0.8812 | 0.8763 | 0.8812 | 0.8785 |
| 0.246 | 4.0 | 928 | 0.4113 | 0.8834 | 0.8766 | 0.8834 | 0.8796 |
| 0.2013 | 5.0 | 1160 | 0.4191 | 0.8866 | 0.8771 | 0.8866 | 0.8808 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.13.0.dev20220614
- Datasets 2.2.2
- Tokenizers 0.11.6
|
linlinlin/full-fine-tuning | linlinlin | 2023-07-06T10:53:14Z | 180 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-06T10:22:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: full-fine-tuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full-fine-tuning
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.27.2
- Pytorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
zzzAI19/niji-LoRA_v2.0 | zzzAI19 | 2023-07-06T10:50:30Z | 0 | 6 | null | [
"region:us"
] | null | 2023-07-05T12:57:22Z | (7/6) Uploaded the chilled_remix version.
This LoRA was created by additional training on illustrations generated by niji-journey. The trigger word is "jis".
It is based on various models; a separate file is provided for each base model.
niji2:Own merge model zzzmix(https://huggingface.co/zzzAI19/zzzmix)
niji2animekawa:AnimeKawa (https://civitai.com/models/87661/animekawa?modelVersionId=93295)
niji2anything:Anything v5 (https://civitai.com/models/9409/or-anything-v5ink)
niji2beautifulRealistic:Beautiful Realistic Asians (https://civitai.com/models/25494/brabeautiful-realistic-asians-v2)
niji2chilooutmix:chilloutmix (https://civitai.com/models/6424/chilloutmix)
niji2counterfeit:counterfeit v3 (https://huggingface.co/gsdf/Counterfeit-V3.0)
niji2sukumizumix:SukumizuMix (https://huggingface.co/AkariH/SukumizuMix)
niji2chilledremix、niji2chilledreversemix:chilled_remix(https://huggingface.co/sazyou-roukaku/chilled_remix)
I also plan to use TrinArt, Irismix, and openjourney as base models in the future.
LoRA files based on those models will be uploaded tomorrow.
I recommend a LoRA strength of 0.7; step 6 also works well.
Sample images can be found at
https://ai-drawing.net/en/2023/07/05/introduction-of-niji-lora-v2-0/
Sample images can also be found on the Japanese version of this page:
https://ai-drawing.net/2023/07/05/niji-lora-v2-0%e3%81%ae%e7%b4%b9%e4%bb%8b/
---
license: creativeml-openrail-m
---
|
Bugsys0302/trbrma | Bugsys0302 | 2023-07-06T10:46:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T10:41:02Z | ---
license: creativeml-openrail-m
---
|
Norod78/TinyStories-3M-val-Hebrew | Norod78 | 2023-07-06T10:42:58Z | 120 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"he",
"dataset:Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-28T05:30:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: TinyStories-3M-val-Hebrew
results: []
license: mit
language:
- he
datasets:
- Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT
widget:
- text: היה פעם
- text: פעם אחת
- text: החתול שלך מאוד חמוד ו
pipeline_tag: text-generation
---
# TinyStories-3M-val-Hebrew
This model was trained on [Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT](https://huggingface.co/datasets/Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT)
The dataset is a machine translation of [TinyStoriesV2-GPT4-valid.txt](https://huggingface.co/datasets/roneneldan/TinyStories/blob/main/TinyStoriesV2-GPT4-valid.txt) by [roneneldan](https://huggingface.co/roneneldan)
Translation was done using [this](https://huggingface.co/datasets/Norod78/TinyStoriesV2-GPT4-valid_heb-lineByLine-EoT/blob/main/translate_file_2.py) script
The original [dataset](https://huggingface.co/datasets/roneneldan/TinyStories) contains synthetically generated (by GPT-3.5 and GPT-4) short stories that only use a small vocabulary.
## Model description
A very, very small model (8M params) trained on a very small dataset
A [sample inference script](https://huggingface.co/Norod78/TinyStories-3M-val-Hebrew/blob/main/TinyStories-3M-val-Hebrew-inference.py) is available
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 300.0
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
### Parameter calculation
```
def gpt_params(seq_len, vocab_size, d_model, num_heads, num_layers):
    """ Given GPT config calculate total number of parameters """
    ffw_size = 4 * d_model  # in GPT the number of intermediate features is always 4*d_model
    # token and position embeddings
    embeddings = d_model * vocab_size + d_model * seq_len
    # transformer blocks
    attention = 3 * d_model**2 + 3 * d_model  # weights and biases
    attproj = d_model**2 + d_model
    ffw = d_model * ffw_size + ffw_size
    ffwproj = ffw_size * d_model + d_model
    layernorms = 2 * 2 * d_model
    # dense
    ln_f = 2 * d_model
    dense = d_model * vocab_size  # note: no bias here
    # note: embeddings are not included in the param count!
    total_params = num_layers * (attention + attproj + ffw + ffwproj + layernorms) + ln_f + dense
    return total_params

# gpt2 = dict(seq_len=1024, vocab_size=50257, d_model=768, num_heads=12, num_layers=12)
gpt2 = dict(seq_len=256, vocab_size=50259, d_model=128, num_heads=16, num_layers=8)
result = gpt_params(**gpt2) / 1e6
print(result)  # Prints 8.019584
``` |
qwopqwop/danbooru-llama-qlora | qwopqwop | 2023-07-06T10:38:48Z | 0 | 4 | null | [
"license:mit",
"region:us"
] | null | 2023-07-06T10:25:19Z | ---
license: mit
---
train code: https://github.com/qwopqwop200/llama-danbooru-qlora |
cardiffnlp/twitter-roberta-base-hate-multiclass-latest | cardiffnlp | 2023-07-06T10:37:08Z | 136 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"en",
"arxiv:2307.01680",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-09T22:40:52Z | ---
model-index:
- name: twitter-roberta-base-hate-multiclass-latest
results: []
language:
- en
pipeline_tag: text-classification
---
# cardiffnlp/twitter-roberta-base-hate-multiclass-latest
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m) for multiclass hate-speech classification. A combination of 13 different hate-speech datasets in the English language was used to fine-tune the model.
## Classes available
```
{
"sexism": 0,
"racism": 1,
"disability": 2,
"sexual_orientation": 3,
"religion": 4,
"other": 5,
"not_hate":6
}
```
## The following metrics are achieved
* Accuracy: 0.9419
* Macro-F1: 0.5752
* Weighted-F1: 0.9390
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-hate-multiclass-latest")
model.predict('Women are trash 2.')
>> {'label': 'sexism'}
model.predict('@user dear mongoloid respect sentiments & belief refrain totalitarianism. @user')
>> {'label': 'disability'}
```
### Model based on:
```
@misc{antypas2023robust,
title={Robust Hate Speech Detection in Social Media: A Cross-Dataset Empirical Evaluation},
author={Dimosthenis Antypas and Jose Camacho-Collados},
year={2023},
eprint={2307.01680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
AAOBA/RND-PyamidsRND | AAOBA | 2023-07-06T10:28:36Z | 17 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-06T10:27:59Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chikoto/RND-PyamidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
soduhh/marian-finetuned-kde4-en-to-fr | soduhh | 2023-07-06T10:26:33Z | 61 | 0 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-05T14:32:51Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: soduhh/marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# soduhh/marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6854
- Validation Loss: 0.8044
- Epoch: 2
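Since the repository is a Keras/TensorFlow export (note the `tf` tag and `generated_from_keras_callback`), a hedged usage sketch with the TF auto classes is shown below; the input sentence is an arbitrary KDE-style string:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "soduhh/marian-finetuned-kde4-en-to-fr"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Unable to import the selected file.", return_tensors="tf")
outputs = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```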
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0627 | 0.8795 | 0 |
| 0.7968 | 0.8213 | 1 |
| 0.6854 | 0.8044 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Tiru8055/rl_course_vizdoom_health_gathering_supreme | Tiru8055 | 2023-07-06T10:24:27Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T10:24:20Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.50 +/- 5.00
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Tiru8055/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
thirupathibandam/autotrain-phanik-gpt-neo-125m-self-72606138970 | thirupathibandam | 2023-07-06T10:01:36Z | 0 | 0 | null | [
"autotrain",
"text-generation",
"dataset:thirupathibandam/autotrain-data-phanik-gpt-neo-125m-self",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T10:00:49Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
datasets:
- thirupathibandam/autotrain-data-phanik-gpt-neo-125m-self
co2_eq_emissions:
emissions: 0.03549660564532989
---
# Model Trained Using AutoTrain
- Problem type: Text Generation
- CO2 Emissions (in grams): 0.0355
## Validation Metrics
loss: 1.8581730127334595
|
blanchefort/rubert-base-cased-sentiment-mokoron | blanchefort | 2023-07-06T09:56:44Z | 129 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"text-classification",
"sentiment",
"ru",
"dataset:RuTweetCorp",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- ru
tags:
- sentiment
- text-classification
datasets:
- RuTweetCorp
---
# RuBERT for Sentiment Analysis of Tweets
This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuTweetCorp](https://study.mokoron.com/).
## Labels
0: POSITIVE
1: NEGATIVE
## How to use
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron')
model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron', return_dict=True)
@torch.no_grad()
def predict(text):
inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**inputs)
predicted = torch.nn.functional.softmax(outputs.logits, dim=1)
predicted = torch.argmax(predicted, dim=1).numpy()
return predicted
```
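A quick sanity check of the helper above (the example sentence is arbitrary; `predict` returns a NumPy array of label ids, where 0 is POSITIVE and 1 is NEGATIVE):
```python
print(predict("какой хороший день!"))  # e.g. [0] -> POSITIVE
```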
## Dataset used for model training
**[RuTweetCorp](https://study.mokoron.com/)**
> Rubtsova Yu. Automatic construction and analysis of a corpus of short texts (microblog posts) for the task of developing and training a sentiment classifier // Knowledge Engineering and Semantic Web Technologies. – 2012. – Vol. 1. – pp. 109–116.
|
ketong3906/opus-mt-en-zh-finetuned-eng-to-chn | ketong3906 | 2023-07-06T09:53:13Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-06T09:50:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-zh-finetuned-eng-to-chn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-zh-finetuned-eng-to-chn
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 1 | 6.2769 | 0.8101 | 73.625 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ddmlproject/cassianatuzzi | ddmlproject | 2023-07-06T09:48:26Z | 30 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T09:44:16Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### cassianatuzzi Dreambooth model trained by ddmlproject with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:

.jpg)
.jpg)
.jpg)
.jpeg)
.jpg)
.jpg)
.jpg)
.jpeg)
.jpg)
.jpg)
.jpg)
|
GHonem/git-base-pokemon | GHonem | 2023-07-06T09:38:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"git",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-03T13:37:45Z | ---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0330
- Wer Score: 1.6516
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.4022 | 4.17 | 50 | 4.7553 | 21.1384 |
| 2.7988 | 8.33 | 100 | 0.9177 | 10.7623 |
| 0.3496 | 12.5 | 150 | 0.0709 | 2.1170 |
| 0.0373 | 16.67 | 200 | 0.0327 | 1.3170 |
| 0.0142 | 20.83 | 250 | 0.0316 | 1.5031 |
| 0.0069 | 25.0 | 300 | 0.0330 | 1.6516 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
squeeze-ai-lab/sq-opt-13b-w4-s0 | squeeze-ai-lab | 2023-07-06T09:29:03Z | 0 | 0 | null | [
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-06T08:38:38Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
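The split itself can be sketched in a few lines of PyTorch (this is only an illustration of the idea, not the actual SqueezeLLM code; the outlier fraction and the quantization step are placeholder assumptions):
```python
import torch

def dense_sparse_split(W: torch.Tensor, outlier_frac: float = 0.005):
    """Toy split of a weight matrix into a quantization-friendly dense part and a sparse outlier part."""
    # Keep the largest-magnitude entries (sensitive outliers) in the sparse component.
    threshold = torch.quantile(W.abs().flatten(), 1.0 - outlier_frac)
    outlier_mask = W.abs() > threshold
    sparse_part = (W * outlier_mask).to_sparse()   # tiny, stored in full precision
    dense_part = W * (~outlier_mask)               # the bulk, quantized to low bitwidth in SqueezeLLM
    return dense_part, sparse_part

W = torch.randn(512, 512)
dense, sparse = dense_sparse_split(W)
# Up to the quantization error on `dense`, the original weights are recovered as:
W_reconstructed = dense + sparse.to_dense()
```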
## Model description
4-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
irfan62622/q-FrozenLake-v1-4x4-noSlippery | irfan62622 | 2023-07-06T09:29:01Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T09:28:58Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # the course notebooks import gymnasium as `gym`

# `load_from_hub` is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="irfan62622/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ireneli1024/bigbird-pegasus-large-pubmed-plos-finetuned | ireneli1024 | 2023-07-06T09:18:37Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"text-generation-inference",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-05T05:58:53Z | ---
license: other
language:
- en
metrics:
- rouge
tags:
- text-generation-inference
---
This is the finetuned model based on the [google/bigbird-pegasus-large-pubmed](https://huggingface.co/google/bigbird-pegasus-large-pubmed) model.
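A minimal usage sketch is shown below (it assumes the standard `transformers` summarization pipeline; the generation settings are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ireneli1024/bigbird-pegasus-large-pubmed-plos-finetuned")
article = "..."  # full text of a PLOS article to turn into a lay summary
print(summarizer(article, max_length=256, truncation=True)[0]["summary_text"])
```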
The data is from BioLaySumm 2023 [shared task 1](https://biolaysumm.org/#data). |
Sekiraw/space_invaders | Sekiraw | 2023-07-06T09:16:19Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-05T12:58:30Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 251.50 +/- 28.46
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sekiraw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sekiraw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sekiraw
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 200000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
megagonlabs/pilota_scud2query | megagonlabs | 2023-07-06T09:12:07Z | 0 | 0 | null | [
"t5",
"text2text-generation",
"pilota",
"ja",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2023-06-27T09:20:44Z | ---
language: ja
tags:
- t5
- text2text-generation
- pilota
license: apache-2.0
---
# Pilota model for scud2query
A model for [Pilota](https://github.com/megagonlabs/pilota) trained with <https://github.com/megagonlabs/scud2query>.
- ``scud``
- Fine tuned model of [t5-base-japanese-web (with Byte-fallback, 8K)](https://huggingface.co/megagonlabs/t5-base-japanese-web-8k)
- The original model is distributed in [the Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- ``scorer``
- Fine tuned model of [LINE DistilBERT Japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese)
- The original model is distributed in [the Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Usage
1. Install [Pilota](https://github.com/megagonlabs/pilota)
2. Prepare inputs
- Command
```bash
echo -e '部屋に冷蔵庫があると良い。レンタカーサービスがあるホテルを【customer】が希望する。' | python -m pilota.convert.plain2request | tee input.jsonl
```
- Output
```json
{"context":null,"utterance":"部屋に冷蔵庫があると良い。レンタカーサービスがあるホテルを【customer】が希望する。","sentences":null,"meta":{}}
```
3. Feed it to Pilota
- Command
```console
pilota -m megagonlabs/pilota_scud2query --batch_size 1 --outlen 60 --nbest 1 --beam 5 < input.jsonl
```
- Output (Formatted by ``jq .``)
```json
[
{
"scuds_nbest": [
[
"部屋に冷蔵庫がある。"
]
],
"original_ranks": [
0
],
"scores": [
0.9769772589206696
],
"scores_detail": [
{
"OK": 0.9232575297355652,
"incorrect_none": 0.0034886503126472235,
"lack": 0.023772092536091805,
"limited": 0.013821585103869438,
"untruth": 0.04332486167550087
}
],
"sentence": "部屋に冷蔵庫があると良い。"
},
{
"scuds_nbest": [
[
"レンタカーサービスがあるホテルだ。"
]
],
"original_ranks": [
0
],
"scores": [
0.9876023113727569
],
"scores_detail": [
{
"OK": 0.9586743712425232,
"incorrect_none": 0.004059707745909691,
"lack": 0.0024317132774740458,
"limited": 0.007630097679793835,
"untruth": 0.04025880992412567
}
],
"sentence": "レンタカーサービスがあるホテルを【customer】が希望する。"
}
]
```
## License
Apache License 2.0
|
Abzu/mpt-30b-instruct-q8 | Abzu | 2023-07-06T09:11:11Z | 20 | 5 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2023-06-30T07:59:31Z | ---
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MosaicML's MPT-30B-Instruct 8-bit
These files are .safetensors format model files for [MosaicML's MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct).
## How to convert
```python
import time
import torch
import transformers

# Load the model
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
start_time = time.time()  # optional: time how long loading takes
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True,
load_in_8bit=True
)
# Filter the non-tensor items
def filter_dict(dictionary):
filtered_dict = {key: value for key, value in dictionary.items() if "weight_format" not in key}
return filtered_dict
new_state_dict = filter_dict(model.state_dict())
# Save the 8-bit model
model.save_pretrained('mpt-30b-instruct-8bits', state_dict=new_state_dict, safe_serialization=True)
```
## How to use
```python
import transformers

# Load the 8-bit checkpoint saved above
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mpt-30b-instruct-8bits',
    trust_remote_code=True,
)
```
## Prompt template
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction
{prompt}
### Response
```
# MPT-30B-Instruct
MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.
**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?
> Best,
> Your Friend
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted as follows:
```python
def format_prompt(instruction):
template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
return template.format(instruction=instruction)
example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
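As an illustrative follow-up (assuming `model` and `tokenizer` were loaded on GPU as shown earlier in this card; the generation settings are arbitrary), the formatted prompt can be fed through the model like this:
```python
import torch

inputs = tokenizer(fmt_ex, return_tensors='pt').to('cuda')
with torch.autocast('cuda', dtype=torch.bfloat16):
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```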
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 29.95B |
|n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| competition_math | 1.6 M | 3.66% |
| cot_gsm8k | 3.36 M | 7.67% |
| dialogsum | 0.1 M | 0.23% |
| dolly_hhrlhf | 5.89 M | 13.43% |
| duorc | 7.8 M | 17.80% |
| qasper | 8.72 M | 19.90% |
| quality | 11.29 M | 25.78% |
| scrolls/summ_screen_fd | 4.97 M | 11.33% |
| spider | 0.089 M | 0.20% |
## PreTraining Data
For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
``` |
squeeze-ai-lab/sq-opt-6.7b-w4-s0 | squeeze-ai-lab | 2023-07-06T09:11:07Z | 0 | 0 | null | [
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-06T08:28:51Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 6.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 6.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-2.7b-w4-s0 | squeeze-ai-lab | 2023-07-06T08:59:11Z | 0 | 0 | null | [
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-06T08:28:08Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 2.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 2.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-2.7b-w3-s0 | squeeze-ai-lab | 2023-07-06T08:58:57Z | 0 | 0 | null | [
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-06T08:28:00Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized OPT 2.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 2.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
nkpz/open_llama_7b_qlora_uncensored-gptq | nkpz | 2023-07-06T08:47:29Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T08:32:58Z | ---
license: apache-2.0
---
4-bit quantized files for [georgesung/open_llama_7b_qlora_uncensored](https://huggingface.co/georgesung/open_llama_7b_qlora_uncensored)
Quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
Command used to quantize: `python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --save_safetensors /my/output/file.safetensors` |
mrizalf7/xlm-r-qa-squad-train1.1-1 | mrizalf7 | 2023-07-06T08:37:14Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-05T05:11:59Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-squad-train1.1-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-squad-train1.1-1
This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad](https://huggingface.co/mrizalf7/xlm-r-qa-squad) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2419 | 1.0 | 636 | 3.1678 |
| 2.8486 | 2.0 | 1272 | 3.2826 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
smaciu/bee-wings-classifier | smaciu | 2023-07-06T08:32:55Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2023-06-24T10:25:38Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
fadliaulawi/dummy-model | fadliaulawi | 2023-07-06T08:25:22Z | 59 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T07:56:53Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Miroslava357/1 | Miroslava357 | 2023-07-06T08:08:11Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-06T08:06:58Z | An intelligent man is sitting in an armchair at a desk reading a newspaper, glasses, blond curly short hair and short beard, blue eyes, feet on the floor, high detail, in pants and T-shirt, without background, png |
michaelwzhu/ShenNong-TCM-LLM | michaelwzhu | 2023-07-06T08:04:59Z | 0 | 12 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-05-05T13:18:59Z | ---
license: apache-2.0
---
# ShenNong-TCM-LLM
Repo for ShenNong-TCM-LLM (the "ShenNong" (神农) large model, the first large language model for Traditional Chinese Medicine)
Large language models (LLMs) such as ChatGPT and GPT-4 have set off a new wave of research in natural language processing, demonstrating capabilities approaching artificial general intelligence (AGI) and attracting wide attention across the industry.
To promote the development and adoption of LLMs in Traditional Chinese Medicine (TCM), improve their TCM knowledge and ability to answer medical consultations, and help large models empower the transmission of TCM, we are releasing **ShenNong**, a large-scale language model for TCM:
- 🚀 [ShenNong-TCM](https://github.com/michael-wzhu/ShenNong-TCM-LLM):
  - The model is trained on the [TCM instruction dataset ShenNong_TCM_Dataset](https://huggingface.co/datasets/michaelwzhu/ShenNong_TCM_Dataset).
  - ChatMed_TCM_Dataset is built on top of our open-source [TCM knowledge graph](https://github.com/ywjawmw/TCM_KG);
  - using the entity-centric self-instruct method ([entity-centric self-instruct](./src/entity_centric_self_instruct.py)), ChatGPT is prompted to generate 110k+ instruction examples centred on TCM;
  - ShenNong-TCM also uses LlaMA as its backbone and is obtained by LoRA (rank=16) fine-tuning. The fine-tuning code is the same as in the [ChatMed repository](https://github.com/michael-wzhu/ChatMed).
You are also welcome to check out our other open-source medical LLM projects:
- 🚀 [ChatMed-Consult](https://huggingface.co/michaelwzhu/ChatMed-Consult): trained on 500k+ online consultations plus ChatGPT replies from the [Chinese medical online-consultation dataset ChatMed_Consult_Dataset](https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset). The backbone is [LlaMA-7b](https://github.com/facebookresearch/llama), merged with the LoRA weights and extended Chinese vocabulary of [Chinese-LlaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) and then fine-tuned parameter-efficiently with LoRA. All of the code is publicly released;
- 🚀 [ChatMed-MT](https://huggingface.co/michaelwzhu/ChatMed-MT): a multi-turn dialogue version of ChatMed-Consult. Existing open-source Chinese consultation datasets are automatically rewritten by an LLM so that the doctor replies are more empathetic, considerate and detailed; an LLM trained on this data offers a better patient/user experience.
- 🚀 [PromptCBLUE, a Chinese medical LLM evaluation benchmark](https://github.com/michael-wzhu/PromptCBLUE): the [CBLUE](https://tianchi.aliyun.com/dataset/95414) benchmark recast into a prompt-learning format, yielding a benchmark of large models' Chinese medical knowledge and medical text-processing abilities. PromptCBLUE aims to let a single generative LLM handle a wide variety of medical NLP tasks, such as medical-record structuring, consultation, and clinical document writing.
## Updates
2023/6/25 🚀 Released v0.2 of the [TCM instruction dataset ShenNong_TCM_Dataset](https://huggingface.co/datasets/michaelwzhu/ShenNong_TCM_Dataset), now containing 110k+ examples; the ShenNong-TCM model checkpoint was also uploaded to [model](https://huggingface.co/michaelwzhu/ShenNong-TCM-LLM).
2023/6/21 🚀 Released v0.1 of the [TCM instruction dataset ShenNong_TCM_Dataset](https://huggingface.co/datasets/michaelwzhu/ShenNong_TCM_Dataset); v0.2 is coming soon.
## Quick start
If you would like to fine-tune a large model with the [TCM instruction dataset ShenNong_TCM_Dataset](https://huggingface.co/datasets/michaelwzhu/ShenNong_TCM_Dataset), please refer to the code and training scripts in the [ChatMed repository](https://github.com/michael-wzhu/ChatMed).
## Entity-centric self-instruct
The [TCM instruction dataset ShenNong_TCM_Dataset](https://huggingface.co/datasets/michaelwzhu/ShenNong_TCM_Dataset) is fully open source and available to the community.
Vertical domains differ from the general domain in that they are typically knowledge-intensive, and that knowledge is usually organized around entities. We therefore propose the entity-centric self-instruct method ([entity-centric self-instruct](./src/entity_centric_self_instruct.py)), which generates instructions around the core entities of the vertical domain and a variety of intent scenarios.
If you want to run entity-centric self-instruct on your own local knowledge base / knowledge graph, see the code in the [ShenNong-TCM-LLM repository](https://github.com/michael-wzhu/ShenNong-TCM-LLM). After downloading the repository, adjust the configuration in `src/entity_centric_self_instruct.py` accordingly and run a command like the one below (note that you need to set your own API key in the code file):
```bash
python src/entity_centric_self_instruct.py your_KG_triples.txt <your_output_file.jsonl>
```
Here, the "your_KG_triples.txt" file contains one knowledge-graph triple per line; see [TCM-KG](https://github.com/ywjawmw/TCM_KG) or the [TCM-KG file](./src/TCM-KG_triples.txt).
## Comparison of results
Fine-tuning the Chinese LlaMA-7b model with LoRA on the [TCM instruction dataset ShenNong_TCM_Dataset](https://huggingface.co/datasets/michaelwzhu/ShenNong_TCM_Dataset) clearly improves its answers to TCM-related questions. Below we show the responses of different models to 5 typical TCM questions. More test samples: [More test examples](https://github.com/michael-wzhu/ChatMed/blob/main/src/web_services/test_examples/ChatMed-Consult_test.json)
| Consultation question | Chinese LlaMA-7b | ShenNong-TCM |
|------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 腹痛怎么办啊? | 建议您尽快就医,并根据医生的指导进行治疗。 | 如果您感到腹部疼痛,以下是一些可能有用的建议:\n1. 首先休息一下。躺下或坐下来可以减轻症状并帮助身体恢复力量。\n2. 如果您有任何药物过敏史或者正在服用其他药品,请务必咨询医生的意见后再使用这些药膏、止疼剂等缓解痛苦的方法。\n3. 在饮食方面,避免食用辛辣食物和油腻食品,尽量选择清淡易消化的食物来减少肠胃负担。同时多喝水以保持水分平衡。\n4. 如果你的情况比较严重并且持续时间较长,最好去医院就诊寻求专业治疗。 |
| 大便时带血,怎么办? | 出现这种情况可能是由于肠道内出血或炎症引起的。建议及时就医检查确诊并进行治疗。 | 如果您的大便中出现了血液,可能是由于以下原因之一引起的:\n\n1.痔疮或肛裂 - 这种情况通常会导致排尿和排泄物的疼痛、瘙痒和其他不适症状。出血量可能会有所不同,从轻微到严重不等。\n2.肠炎或其他肠道疾病- 这些病症可能导致腹泻或者便秘以及其他胃肠道问题。出血量也可能因病情而异。\n3.直肠息肉 - 这是一种在直肠内形成的良性肿瘤状结构,可以引起不同程度的腹痛和出血。\n4.结肠癌 - 这种癌症是常见的消化系统恶性肿瘤类型之一,常常会引起严重的腹部痛苦并伴有贫血等问题。\n5.脱垂子宫颈糜烂 - 如果您正在怀孕并且出现这种情况,这很可能是由于宫颈糜烂所致。这种病变需要及时治疗以避免感染和流产的风险。\n\n建议立即就医寻求医生帮助进行诊断和治疗。 |
| 可以给我介绍一些治疗太阳经风寒证的中药吗? | 针灸、推拿等中医疗法,以及草药汤剂都可以用来缓解这种症状。 | 当出现太阳经风寒证时,可以考虑以下几种常用的中药进行治疗。这些药物具有温阳散寒、祛湿止痛的作用,能够缓解症状并促进身体康复。\n\n1. 当归(Angelica sinensis)-有活血化瘀和补气养血作用;\n2. 白芷 (Bai Zi) - 清热解毒,宣通鼻窍;\n3. 金银花(Honeysuckle, Jin Xin Hua)-清热解毒,凉血消肿;\n4. 陈皮 (Chen Pi) - 理气健脾,行滞化痰;\n5. 桂枝 (Gui Zhi) - 发汗解表,调和营卫。\n\n需要注意的是,在使用任何一种中药之前应该咨询医生或中医师的意见以确保安全有效性。 |
| 我感觉浑身乏力,且出现壮热症状。请推荐中医诊断和治疗方法。 | 建议您到医院进行检查以确定病因并制定相应的治疗方案;同时可以采用中药调理身体、增强免疫力等方法缓解病情。 | 根据您的描述,您可能患有感冒或流感等疾病引起的身体不适。建议及时就医并进行中西医疗诊以确定确切的病因及治疗方案。以下是一些常用的中药方剂供参考:\n\n1. 金银花清瘟胶囊:具有解毒、抗病毒的作用,适用于风寒感冒或者流行性感冒等病症。\n\n2. 三黄石膏颗粒:可清热解毒、祛湿止痛,常用于暑湿伤阳所致的身体酸软无力。\n\n3. 五味子饮片:有温肺散寒、益气固表之功效,可用于体虚畏冷、易感恶风的情况。\n\n4. 当归生姜羊肉汤:有益血活络、温暖脾胃作用,适合因寒冷导致的身体疲倦以及食欲不振的人群食用。\n\n需要注意的是,以上药物仅作为辅助治疗措施,具体用药应在医生指导下使用。同时,保持充足休息、饮食清淡、多喝水也是缓解病情的重要方法之一。 |
| 我牙齿疼痛,同时伴有口臭,有什么方法缓解? | 建议您及时到口腔医院就诊。牙周炎、龋齿等疾病可能导致牙齿痛和口臭等问题出现 | 您可以尝试以下几种方式来减轻牙痛和减少口腔异味:\n\n1.漱口盐水。将一茶匙食醋加入一杯温水中,搅拌均匀后用来漱口即可。这有助于杀死细菌并减轻口气。\n\n2.使用含氟的洗发剂或去屑产品。这些产品的成分含有一定量的氟化物,可帮助清洁牙齿表面,防止龋齿形成。\n\n3.刷牙时用软毛刷轻轻按摩牙齿边缘以去除食物残渣。\n\n4.定期去看牙医进行检查和治疗。如果出现严重的蛀洞或其他问题,及时修复可能会避免更严重后果。\n\n5.注意饮食健康,少吃甜食、碳酸饮料等易导致蛀牙的食物。 |
We can see that:
- ShenNong-TCM's answers show more human care, whereas the Chinese LlaMA-7b model's replies can be rather stiff;
- Compared with Chinese LlaMA-7b, ShenNong-TCM gives richer answers with more actionable suggestions, and in particular it can effectively recommend prescriptions / Chinese herbal medicines based on the symptoms.
## Disclaimer
- The resources of this project are for academic research only; commercial use is strictly prohibited.
- ShenNong-TCM is an assistant based on a language model and cannot replace TCM or Western-medicine practitioners in making medical diagnoses or giving medical advice. If needed, please consult a professional doctor or go to a hospital.
- The ShenNong-TCM series of models is iterating rapidly, and the model weights will be updated regularly.
- The ShenNong-TCM models are based on open-source data; the quality and quantity of the training data are limited, and the TCM knowledge the models have acquired certainly has various flaws. We will keep improving and updating them.
## Acknowledgements
This project is developed on top of open-source projects; we thank the related projects and their researchers and developers.
- [LlaMA](https://github.com/facebookresearch/llama)
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Chinese-LlaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [ChatMed](https://github.com/michael-wzhu/ChatMed)
The "ShenNong" (神农) figure in the logo was automatically generated by [midjourney](http://midjourney.com).
## Citation
If you use the models, data, or code of this project, please cite:
```bash
@misc{zhu2023ChatMed,
title={ChatMed: A Chinese Medical Large Language Model},
author={Wei Zhu and Xiaoling Wang},
year={2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/michael-wzhu/ChatMed}},
}
```
|
symanto/mpnet-base-snli-mnli | symanto | 2023-07-06T07:54:17Z | 136 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mpnet",
"text-classification",
"zero-shot-classification",
"en",
"dataset:SNLI",
"dataset:MNLI",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2022-03-02T23:29:05Z | ---
language:
- en
datasets:
- SNLI
- MNLI
tags:
- zero-shot-classification
---
A cross-attention NLI model trained for zero-shot and few-shot text classification.
The base model is [mpnet-base](https://huggingface.co/microsoft/mpnet-base), trained on [SNLI](https://nlp.stanford.edu/projects/snli/) and [MNLI](https://cims.nyu.edu/~sbowman/multinli/) with the code from [here](https://github.com/facebookresearch/anli).
Usage:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
import numpy as np
model = AutoModelForSequenceClassification.from_pretrained("symanto/mpnet-base-snli-mnli")
tokenizer = AutoTokenizer.from_pretrained("symanto/mpnet-base-snli-mnli")
input_pairs = [("I like this pizza.", "The sentence is positive."), ("I like this pizza.", "The sentence is negative.")]
inputs = tokenizer(["</s></s>".join(input_pair) for input_pair in input_pairs], return_tensors="pt")
logits = model(**inputs).logits
probs = torch.softmax(logits, dim=1).tolist()
print("probs", probs)
np.testing.assert_almost_equal(probs, [[0.86, 0.14, 0.00], [0.16, 0.15, 0.69]], decimal=2)
```
|
Technotech/opt-125m-4bit-128g | Technotech | 2023-07-06T07:51:47Z | 5 | 1 | transformers | [
"transformers",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"arxiv:2005.14165",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-06-12T08:04:01Z | ---
language: en
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
## OPT-125m-4bit-128g
OPT 125M, quantised to 4bit using AutoGPTQ, with groupsize 128g, no act order.
Good for testing AutoGPTQ with a small model download.
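For example, a minimal loading sketch with the AutoGPTQ API (the device, the `use_safetensors` flag, and the prompt below are illustrative assumptions):
```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo = "Technotech/opt-125m-4bit-128g"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", use_safetensors=True)

inputs = tokenizer("Hello, I'm am conscious and", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```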
# Original Model Card
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-125m")
>>> generator("Hello, I'm am conscious and")
[{'generated_text': 'Hello, I am conscious and aware of the fact that I am a woman. I am aware of'}]
```
By default, generation is deterministic. In order to use top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True)
>>> generator("Hello, I'm am conscious and")
[{'generated_text': 'Hello, I am conscious and active member of the Khaosan Group, a private, self'}]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly ~33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
xian79/Reinforce-CartPole-v1 | xian79 | 2023-07-06T07:51:38Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T07:51:27Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Abinaya/opt-1.3b-lora-summary | Abinaya | 2023-07-06T07:35:05Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-06T06:35:55Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "Abinaya/opt-1.3b-lora-summary"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)  # facebook/opt-1.3b
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
```
## For inference to get summary
```
batch = tokenizer("Natural language processing is an interdisciplinary subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data", return_tensors='pt')
with torch.cuda.amp.autocast():
output_tokens = model.generate(**batch, max_new_tokens=50)
print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True))
``` |
Word2vec/nlpl_224 | Word2vec | 2023-07-06T07:31:46Z | 0 | 0 | null | [
"word2vec",
"ukr",
"dataset:Ukrainian_CoNLL17_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T08:02:16Z | ---
language: ukr
license: cc-by-4.0
tags:
- word2vec
datasets: Ukrainian_CoNLL17_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 99884 corresponding to 299668196 tokens from the dataset `Ukrainian_CoNLL17_corpus`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Bag-of-Words algorithm with a window of 10 and a dimension of 200.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_224", filename="model.bin"), binary=True, unicode_errors="ignore")
```
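Once loaded, the vectors can be queried with the usual gensim `KeyedVectors` API (gensim ≥ 4 is assumed; the vocabulary stores lemmas with POS suffixes, so inspect a few entries first):
```
print(model.index_to_key[:10])            # peek at the token format
token = model.index_to_key[100]           # an arbitrary in-vocabulary token
print(model.most_similar(token, topn=5))  # nearest neighbours by cosine similarity
```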
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/224.zip |
Word2vec/nlpl_223 | Word2vec | 2023-07-06T07:31:31Z | 0 | 1 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_November_2021",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T08:01:57Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_November_2021
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 199430 corresponding to 2717675616 tokens from the dataset `English_Wikipedia_Dump_of_November_2021`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_223", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/223.zip |
Word2vec/nlpl_208 | Word2vec | 2023-07-06T07:30:26Z | 0 | 0 | null | [
"word2vec",
"pol",
"dataset:Polish_CommonCrawl_Dump_of_December_2019",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T08:25:40Z | ---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk ([email protected]) on a vocabulary of size 35193029 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model is trained with the following properties: no lemmatization and no POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_208", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/208.zip |
Word2vec/nlpl_206 | Word2vec | 2023-07-06T07:29:52Z | 0 | 0 | null | [
"word2vec",
"pol",
"dataset:Polish_CommonCrawl_Dump_of_December_2019",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T08:09:12Z | ---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk ([email protected]) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model is trained with the following properties: no lemmatization or POS tagging, using the fastText Skipgram algorithm with a window of 5 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_206", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/206.zip |
Word2vec/nlpl_205 | Word2vec | 2023-07-06T07:29:34Z | 0 | 0 | null | [
"word2vec",
"pol",
"dataset:Polish_CommonCrawl_Dump_of_December_2019",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T08:04:52Z | ---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk ([email protected]) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model is trained with the following properties: no lemmatization or POS tagging, using the fastText Continuous Bag-of-Words algorithm with a window of 5 and a dimension of 100.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_205", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/205.zip
|
Word2vec/nlpl_204 | Word2vec | 2023-07-06T07:29:15Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Russian_National_Corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:56:31Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 998459 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Bag-of-Words algorithm with a window of 2 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_204", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/204.zip |
Word2vec/nlpl_200 | Word2vec | 2023-07-06T07:28:57Z | 0 | 0 | null | [
"word2vec",
"eng",
"dataset:English_Wikipedia_Dump_of_October_2019",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:56:11Z | ---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: English_Wikipedia_Dump_of_October_2019
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 249212 corresponding to 3530685741 tokens from the dataset `English_Wikipedia_Dump_of_October_2019`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 3 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_200", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/200.zip |
Word2vec/nlpl_186 | Word2vec | 2023-07-06T07:28:40Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Taiga_corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:55:53Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Taiga_corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 249946 corresponding to 4867000000 tokens from the dataset `Taiga_corpus`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_186", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/186.zip |
Word2vec/nlpl_184 | Word2vec | 2023-07-06T07:28:01Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Russian_News",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:55:10Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_News
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 249318 corresponding to 2550000000 tokens from the dataset `Russian_News`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_184", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/184.zip |
Word2vec/nlpl_183 | Word2vec | 2023-07-06T07:27:39Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Russian_National_Corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:54:53Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 248118 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_183", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/183.zip |
Word2vec/nlpl_182 | Word2vec | 2023-07-06T07:27:18Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Russian_National_Corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:54:36Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 248978 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model is trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300.
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_182", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/182.zip |
youyougu/test-01 | youyougu | 2023-07-06T07:06:18Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T06:53:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: test-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-01
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
afaan00733/my_awesome_model | afaan00733 | 2023-07-06T06:56:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-04T21:18:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6546
- Accuracy: 0.4737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6732 | 0.4737 |
| No log | 2.0 | 4 | 0.6546 | 0.4737 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
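
For quick inference with the fine-tuned classifier, here is a minimal sketch using the `transformers` pipeline API; the repository id below is assumed from this card's title, and the label names depend on how the training data was encoded.
```python
from transformers import pipeline

# Assumed repo id (taken from the card title); replace with a local checkpoint path if needed.
classifier = pipeline("text-classification", model="afaan00733/my_awesome_model")

print(classifier("This is exactly what I was looking for."))
# e.g. [{'label': 'LABEL_1', 'score': 0.52}]  (labels and scores depend on the fine-tuning data)
```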
|
rohanbalkondekar/spicy-caiman | rohanbalkondekar | 2023-07-06T06:55:23Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-06T06:48:59Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.1
pip install accelerate==0.20.3
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="BeRohan/spicy-caiman",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"BeRohan/spicy-caiman",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"BeRohan/spicy-caiman",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "BeRohan/spicy-caiman" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BeRohan/spicy-caiman --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
Broonion/RLcourse-pb-cartport | Broonion | 2023-07-06T06:53:57Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T06:53:45Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RLcourse-pb-cartport
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Bugsys0302/fmmstrb | Bugsys0302 | 2023-07-06T06:46:46Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T06:40:45Z | ---
license: creativeml-openrail-m
---
|
JennnDexter/pokemon-lora | JennnDexter | 2023-07-06T06:44:42Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-12T06:24:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - JennnDexter/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Some example images are shown below.




|
NasimB/gpt2-concat-aochildes-16plus6k | NasimB | 2023-07-06T06:39:38Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T04:47:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-16plus6k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-16plus6k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1978
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7265 | 0.3 | 500 | 5.6481 |
| 5.3801 | 0.59 | 1000 | 5.2065 |
| 5.0346 | 0.89 | 1500 | 4.9518 |
| 4.7589 | 1.19 | 2000 | 4.8123 |
| 4.6003 | 1.48 | 2500 | 4.6915 |
| 4.4941 | 1.78 | 3000 | 4.5806 |
| 4.3447 | 2.07 | 3500 | 4.5155 |
| 4.1761 | 2.37 | 4000 | 4.4640 |
| 4.1351 | 2.67 | 4500 | 4.4014 |
| 4.1043 | 2.96 | 5000 | 4.3576 |
| 3.8639 | 3.26 | 5500 | 4.3597 |
| 3.8432 | 3.56 | 6000 | 4.3266 |
| 3.8118 | 3.85 | 6500 | 4.2913 |
| 3.6736 | 4.15 | 7000 | 4.2957 |
| 3.5472 | 4.45 | 7500 | 4.2920 |
| 3.5398 | 4.74 | 8000 | 4.2794 |
| 3.507 | 5.04 | 8500 | 4.2806 |
| 3.3499 | 5.33 | 9000 | 4.2855 |
| 3.3504 | 5.63 | 9500 | 4.2851 |
| 3.3498 | 5.93 | 10000 | 4.2849 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
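
As a quick qualitative check, the trained checkpoint can be sampled with the `transformers` text-generation pipeline. A minimal sketch, assuming the model was pushed to the Hub under this repository id; otherwise point `model=` at the local output directory.
```python
from transformers import pipeline

# Assumed repo id (taken from the card title).
generator = pipeline("text-generation", model="NasimB/gpt2-concat-aochildes-16plus6k")

out = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```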
|
hchung1017/aihub_012_streaming_transformer | hchung1017 | 2023-07-06T06:35:19Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"ko",
"dataset:aihub_012",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2023-07-06T06:33:08Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: ko
datasets:
- aihub_012
license: cc-by-4.0
---
## ESPnet2 ASR model
### `hchung1017/aihub_012_streaming_transformer`
This model was trained by hchung1017 using the aihub_012 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout f4d7fead71e2a99541a8d3d66d6e00a33d9e82df
pip install -e .
cd egs2/aihub_012/asr1
./run.sh --skip_data_prep false --skip_train true --download_model hchung1017/aihub_012_streaming_transformer
```
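Outside the full recipe, the released checkpoint can also be tried directly from Python. A minimal sketch, assuming `espnet_model_zoo` is installed and that `Speech2Text.from_pretrained` can resolve this Hugging Face model tag; for true chunk-by-chunk streaming decoding, ESPnet's streaming inference class would be needed instead.
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Download and build the model from the Hub tag (requires espnet_model_zoo).
speech2text = Speech2Text.from_pretrained(
    "hchung1017/aihub_012_streaming_transformer",
    beam_size=10,
)

# Offline (non-streaming) decoding of a single 16 kHz mono wav file (path is a placeholder).
speech, rate = sf.read("sample_ko.wav")
nbests = speech2text(speech)
text, tokens, token_ints, hyp = nbests[0]
print(text)
```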
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Jun 22 19:10:44 KST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202304`
- pytorch version: `pytorch 1.13.1`
- Git hash: `f4d7fead71e2a99541a8d3d66d6e00a33d9e82df`
- Commit date: `Wed May 24 14:58:35 2023 -0400`
## exp/asr_train_asr_streaming_transformer_raw_ko_bpe5000_sp/decode_asr_streaming_asr_model_valid.acc.ave
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|3794053|89.3|9.3|1.3|1.5|12.1|29.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|17636048|94.6|3.1|2.4|1.7|7.2|29.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|4325914|87.8|8.3|3.9|1.5|13.8|29.5|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_streaming_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_streaming_transformer_raw_ko_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 32945
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- cer_ctc
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 35000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_ko_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_ko_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_ko_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_ko_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - /data/dump/aihub_012/raw/train_sp/wav.scp
- speech
- sound
- - /data/dump/aihub_012/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /data/dump/aihub_012/raw/dev/wav.scp
- speech
- sound
- - /data/dump/aihub_012/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0015
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁I
- ▁YOU
- ''''
- S
- ▁WHAT
- ▁A
- ▁IT
- ▁TO
- ▁IS
- ▁THE
- ▁ARE
- ▁CAN
- ▁OKAY
- ▁YES
- ▁DO
- ▁THAT
- ▁SEE
- T
- ▁HE
- ▁HOW
- ▁ME
- ▁HAVE
- ▁MY
- ▁GOOD
- ▁REALLY
- ▁SO
- ▁FOR
- ▁AM
- ▁SURE
- ▁OH
- ▁GO
- ▁WHY
- ▁NO
- ▁YOUR
- ▁RIGHT
- ▁HELP
- ’
- ▁DON
- ▁NOT
- ▁HI
- ▁HERE
- ▁DID
- ▁LIKE
- ▁AND
- ▁TOO
- ▁SHE
- ▁THIS
- ▁HELLO
- M
- ▁KNOW
- ▁WANT
- RE
- ▁NEED
- ▁WILL
- ▁ABOUT
- ▁THERE
- ▁LET
- ▁OF
- ▁IN
- ▁BE
- ▁BUT
- ▁THINK
- ▁SOMETHING
- ▁LOOK
- ▁NOW
- ▁NICE
- ▁THEN
- ▁
- ▁WE
- ▁GREAT
- ▁THANK
- ▁WITH
- ▁TELL
- ▁PROBLEM
- ▁HER
- ▁GOING
- ▁WAS
- ▁DOING
- ▁ASK
- ▁THANKS
- ▁HEY
- ▁BACK
- ▁WRONG
- ▁THEY
- ▁ON
- ▁HIM
- ▁UP
- ▁AT
- LL
- ▁WELL
- ▁GET
- ▁WHERE
- VERY
- ▁SOME
- ▁PEOPLE
- ▁ALL
- ▁MEAN
- ▁PLEASE
- ▁TIME
- ▁WHO
- ▁GOT
- ▁WELCOME
- ▁MAKE
- ▁COME
- ▁MEET
- ▁NEW
- ▁LOT
- ▁MOM
- ▁SAID
- ▁SHOULD
- ▁HAPPY
- ▁HIS
- ▁BUSY
- ▁BYE
- ▁QUESTION
- ▁SAY
- ▁TAKE
- ▁MORE
- ▁SORRY
- ▁IDEA
- ▁OUT
- ▁FINE
- ▁PLAY
- ▁ANY
- ▁AGAIN
- ▁BECAUSE
- ▁FROM
- ▁AN
- ▁WHEN
- ▁TRY
- ▁HAS
- ▁TODAY
- ▁READY
- ▁HOPE
- ▁GIVE
- ▁BIG
- ▁FRIEND
- ▁WRITE
- ▁EAT
- ▁ONE
- ▁BAD
- ▁MUCH
- ▁SOON
- ▁MANY
- ED
- ▁THEM
- ▁ANGRY
- ▁LATER
- ING
- ▁MAYBE
- ▁DAD
- ▁FIND
- ▁DOWN
- ▁WORRY
- ▁SHOW
- ▁COURSE
- ▁DAY
- ▁SOUNDS
- ▁DOES
- ▁STRANGE
- ▁TALK
- ▁FUN
- ▁REMEMBER
- ▁ANYTHING
- ▁BUY
- ▁LETTER
- ▁JUST
- ▁MADE
- ▁READ
- ▁CANNOT
- ▁WANTS
- ▁WOW
- ▁DIDN
- ▁IF
- ▁GLAD
- ▁WAY
- ▁MUST
- ▁SCHOOL
- ▁BOOK
- ▁LOOKING
- ▁TOLD
- ▁NAME
- ▁HEAR
- ▁TOY
- ▁TRUE
- ▁TEACHER
- ▁US
- ▁WORK
- ▁TWO
- ▁SONG
- ▁HARD
- ▁LOVE
- ▁THINGS
- ▁SING
- ▁BETTER
- ▁HOME
- ▁LINKER
- ▁UNDERSTAND
- ▁LOOKS
- ▁KIND
- ▁HOUSE
- LUE
- ▁DRESS
- ▁BY
- ▁BEST
- ▁LONG
- ▁NEWS
- ▁WENT
- ▁HAPPENED
- ▁OLD
- ▁KEEP
- ▁NEXT
- ▁CHECK
- D
- ▁SPECIAL
- ▁USE
- ▁LIKES
- ▁EVERYTHING
- ▁FEEL
- ▁ROBOT
- ▁SAD
- ▁PLEASURE
- ▁JOE
- ▁COOL
- ▁TOMORROW
- ▁LUCK
- ▁DOESN
- ▁BOX
- ▁AROUND
- ▁HOMEWORK
- ▁ALWAYS
- ▁MORGAN
- ▁PUT
- ▁THESE
- ▁GAVE
- ▁HEARD
- ▁WAIT
- ▁PRESENT
- ▁SOMEONE
- ▁PARTY
- ▁BIRTHDAY
- ▁RANDY
- ▁FRIENDS
- ▁MONEY
- ▁DONE
- ▁CAR
- ▁COFFEE
- ▁MUSIC
- ▁BEN
- ▁BEEN
- ▁STILL
- ▁GREEN
- ▁STAR
- ▁PERSON
- ▁WERE
- ▁STORY
- ▁ELSE
- ▁IDEAS
- ▁TOGETHER
- ▁MILK
- ▁WOULD
- ▁SOUND
- ▁THAN
- ▁TALKED
- ▁EVERY
- ▁NEEDS
- ▁SAW
- ▁HAIR
- ▁CHANGE
- ▁WORRIED
- ▁EASY
- ▁FOOD
- ▁DOG
- VE
- ▁CONCERT
- ▁MAKING
- ▁MONSTER
- ▁BOY
- ▁PHOTO
- ▁SCARY
- ▁RED
- ▁BROTHER
- ▁FIRST
- ▁DANCE
- ▁BEFORE
- ▁PRETTY
- ▁DRINK
- ▁WISH
- ▁HARRY
- ▁CALM
- ▁CAT
- ▁WEAR
- ▁BLUE
- ▁MESSAGE
- ▁TRUST
- ▁ONLY
- ▁HAD
- ▁THREE
- ▁AWAY
- ▁MIND
- ▁MAKES
- ▁GRANDMOTHER
- ▁WATCH
- ▁EMMA
- ▁AMY
- ▁TIRED
- ▁CLASS
- ▁MAN
- ▁DAN
- ▁COULD
- ▁BRING
- ▁SMALL
- ▁ANYWAY
- ▁OUR
- ▁ROOM
- ▁AFTER
- ▁BELIEVE
- ▁BOOKS
- ▁TEN
- ▁DEVILMON
- ▁JOB
- ▁OVER
- ▁COMING
- ▁STOP
- ▁FUNNY
- ▁DIANA
- ▁TOYS
- ▁FAST
- ▁MORNING
- ▁NUMBER
- ▁NOTHING
- ▁TOWN
- ▁OPEN
- ▁OTHER
- ▁PHONE
- ▁CARE
- ▁LEAVE
- ▁CONTEST
- ▁WOODY
- ▁THINKING
- Y
- ▁ANOTHER
- A
- ▁ENGLISH
- ▁SICK
- ▁BRAVE
- ▁TROY
- ▁EATING
- ▁SLEEP
- ▁THEIR
- ▁SELL
- ▁DELICIOUS
- ▁OFF
- ▁WATER
- ▁PICTURE
- ▁CAME
- ▁EVERYONE
- ▁PAPER
- ▁PARK
- ▁PAINT
- ▁SHOP
- ▁CREAM
- ▁TV
- ▁BOUGHT
- ▁CAREFUL
- ▁ROBBY
- ▁FOUND
- ▁STONE
- ▁SISTER
- ▁HURRY
- ▁BAG
- ▁WAKE
- ▁SYRUP
- ▁DRAW
- ▁ENERGY
- ▁SHOES
- ▁IMPORTANT
- ▁NEVER
- ▁LISTEN
- ▁WON
- ▁DOOR
- ▁POP
- ▁LAST
- ▁DIFFERENT
- ▁FISH
- ▁SAVE
- ▁HEALTHY
- ▁UNCLE
- ▁NIGHT
- UCH
- ▁PLACE
- ▁DARK
- ▁GUESS
- ▁LATE
- ▁PIE
- N
- ▁PRACTICE
- ▁MONICA
- ▁ANYONE
- ▁READING
- ▁COLOR
- ▁SALLY
- ▁BLACK
- ▁MOVIE
- ▁TROUBLE
- ▁COLD
- ▁STUDY
- ▁LITTLE
- ▁WHITE
- ▁CHEER
- ▁SCARED
- ▁POSTER
- ▁TALKING
- ▁TEACH
- ▁WALK
- ▁CAKE
- ▁INTO
- ▁FIGHT
- ▁ALREADY
- ▁SLEEPY
- ▁STRONG
- ▁OLIVIA
- ▁CALL
- ▁WROTE
- ▁ICE
- ▁OR
- ▁SCOTT
- ▁LIBRARY
- ▁NANCY
- ▁LUMY
- ▁HAT
- ▁YET
- ▁ALEX
- ▁SHORT
- ▁CLOTHES
- ▁YESTERDAY
- ▁FAVORITE
- ▁SWEET
- ▁FIVE
- ▁HOLD
- ▁LUNCH
- ▁PLAYING
- ▁GARY
- ▁HANDS
- ▁LEFT
- ▁ASKED
- ▁CHEESE
- ▁FACE
- ▁BORROW
- ▁SPEAK
- ▁INTERESTING
- ▁MAY
- ▁BEAR
- ▁SIGN
- ▁SHADOW
- ▁FLOWERS
- ▁PINO
- ▁ERIN
- ▁FOREST
- ▁GAME
- ▁MR
- ▁WANTED
- ▁RUN
- ▁SPELL
- ▁PEN
- ▁SHOPPING
- ▁COOK
- ▁DAYS
- ▁BED
- ▁BEAUTIFUL
- ▁MUSEUM
- ▁CLEAN
- ▁REST
- ▁SAME
- ▁DOCTOR
- ▁YOURSELF
- ▁DINNER
- ▁DANGEROUS
- ▁SECRET
- ▁STORE
- ▁TREE
- ▁MIGHT
- ▁MAYOR
- ▁CHARLIE
- ▁PIZZA
- ▁FOUR
- ▁SIR
- ▁SEEN
- ▁TURN
- ▁ENJOY
- ▁CLARA
- ▁ANYTIME
- ▁LIVE
- ▁LOST
- ▁SANDRA
- ▁DURING
- ▁MYSELF
- ▁TALL
- ▁MINE
- ▁CHOOSE
- ▁TOOK
- ▁WAITING
- ▁S
- ▁SUNNY
- ▁SINGING
- ▁ACADEMY
- ▁AHEAD
- ▁HURT
- ▁CLOCK
- ▁PAINTING
- ▁RAN
- ▁ALONE
- ▁USED
- ▁PLAN
- ▁THEATER
- ▁HAND
- ▁WEEK
- ▁CATCH
- ▁SEND
- ▁CUBE
- ▁ERIC
- ▁WOOD
- ▁HOT
- ▁DEVILMONS
- ▁FREE
- ▁STAY
- ▁PROMISE
- ▁RULE
- ▁HUNGRY
- ▁WORKING
- ▁HAPPEN
- ▁VIKI
- ▁FAMILY
- ▁CHICKEN
- ▁FORGET
- ▁YELLOW
- ▁BROWN
- ▁VACATION
- ▁KELLY
- ▁JACK
- ▁SINGER
- ▁HAMMER
- ▁SAYS
- ▁TRAIN
- ▁FIX
- ▁CUTE
- ▁EVEN
- ▁SANTA
- ▁SLEEPING
- ▁BUS
- ▁BARBECUE
- ▁AGREE
- ▁COULDN
- ▁MISS
- E
- ▁GRACE
- ▁TRASH
- ▁BABY
- ▁LUMA
- ▁CHILDREN
- ▁EXCUSE
- ▁DPOP
- ▁OUTSIDE
- ▁ORDER
- ▁MATTER
- ▁RIDE
- ▁SUMMER
- ▁CLOSE
- ▁MOVE
- ▁JUICE
- ▁TOUCH
- ▁CARD
- ▁THOSE
- ▁HAIRSTYLE
- ▁RICH
- ▁BREAK
- ▁ANYMORE
- ▁TRIP
- ▁EYES
- ▁LEARN
- IC
- ▁YOUNGER
- ▁SMELLS
- ▁CHRIS
- ▁ITEMS
- ▁STONES
- ▁CUT
- ▁STUDENT
- ▁CALLED
- ▁SHINE
- ▁ATE
- ▁PERFECT
- ▁BETIA
- ▁MOVING
- LY
- ▁FIRE
- ▁D
- ▁CHRISTMAS
- ▁RUNNING
- ▁LINE
- ▁JACKET
- ▁WHICH
- ▁GIFT
- ▁SMILE
- ▁WEARING
- ▁STELLA
- ▁SEVEN
- ▁ANSWER
- ▁YEAR
- ▁MOST
- ▁WENDY
- RA
- ▁BALL
- ▁THING
- ▁FIFTY
- ▁YOUNG
- ▁FRONT
- ▁LIKED
- ▁WINDOW
- ▁BEING
- ▁RICE
- ▁HOBBY
- ▁BRUCE
- ▁ALVIN
- ▁CHAIR
- ▁ELEVEN
- ▁INTERVIEW
- ▁TRUMPET
- ▁DRAWING
- ▁WHILE
- ▁HAV
- ▁NEWSPAPER
- ▁WRITING
- ▁FRUIT
- ▁BEHIND
- ▁EVENT
- ▁HAVEN
- ▁BELLOW
- ▁YEARS
- ▁DIV
- ▁VICTORIA
- ▁SENT
- ▁STYLE
- ▁LUNA
- ▁AUNT
- ▁DREAM
- ▁PICTURES
- ▁LEO
- ▁QUESTIONS
- ▁PRICE
- ▁APPLE
- ▁SCHEDULE
- ▁TABLE
- ▁PLANT
- ▁BELL
- ▁SUSAN
- ▁SHIRT
- ▁GRANDFATHER
- ▁EXPENSIVE
- ▁GUYS
- ▁THOUGHT
- ▁OSCAR
- ▁TIMES
- ▁ACTUALLY
- ▁CHANCE
- ▁PAY
- ▁WASH
- ▁JUGGLING
- ▁JULIA
- ▁MAKEUP
- ▁PIANO
- ▁GOES
- ▁QUIZ
- ▁OFTEN
- ▁THIRTY
- ▁SMART
- ▁WEEKEND
- ▁CHOCOLATE
- ▁BATHROOM
- ▁CANDY
- ▁SPEECH
- ▁FEELING
- ▁RADIO
- ▁HECTOR
- ▁KNOWS
- ▁GRANDMA
- ▁SEEM
- ER
- ▁START
- ▁PENCIL
- ▁SUNDAY
- ▁WORD
- ▁MOUSE
- ▁PLAYGROUND
- ▁BREAD
- ▁MAGIC
- ▁CD
- ▁BROKEN
- ▁COLIN
- ▁DIRTY
- ▁MOTHER
- ▁DESK
- ▁BORING
- ▁SOUP
- ▁ONCE
- ▁WORKED
- ▁COUNT
- ▁EXCITED
- ▁PARADE
- ▁GUITAR
- ▁PM
- ▁FINISH
- ▁BLOCK
- ▁FISHING
- ▁VOICE
- ▁ROGER
- ▁WORKS
- ▁PLAYER
- ▁GLASSES
- ▁LAB
- ▁SIGH
- ▁LOVES
- ▁MODEL
- ▁EXERCISE
- ▁O
- ▁POINT
- ▁SWIMMING
- ▁MARKET
- ▁NOTE
- ▁SECOND
- ▁LUCKY
- ▁BROKE
- ▁CAVE
- ▁SHALL
- ▁KID
- ▁HANG
- ▁MICHAEL
- ▁DANCING
- ▁COM
- ▁MASK
- TING
- ▁KYLE
- ▁FRIDAY
- ▁MELOD
- ▁DOUGLAS
- ▁ENOUGH
- ▁LEARNED
- ▁ALICE
- ▁NEWSPAPERS
- ▁NEAR
- ▁GIRL
- ▁LAURA
- ▁BANK
- ▁ORANGE
- ▁HEART
- ▁SNACKS
- ▁BANANA
- ▁AFRAID
- ▁NOISE
- ▁AARON
- ▁SIDE
- ▁POSSIBLE
- ▁ISN
- ▁UPSET
- ▁KATHY
- ▁ENTER
- ▁STATUE
- ▁FAVOR
- ▁CAPSULE
- ▁CLUB
- ▁BORED
- ▁STREET
- ▁FAR
- ▁BROUGHT
- ▁HENRY
- ▁BRIAN
- ▁FLOOR
- ▁RECORD
- ▁SUN
- ▁BORN
- ▁GONE
- ▁ELEPHANT
- ▁FATHER
- ▁BEAT
- ▁MISTAKE
- NY
- ▁MEGAN
- ▁JIN
- ▁CARL
- ▁FACTORY
- ▁HORSE
- ▁STANLEY
- ▁WIN
- ▁AFTERNOON
- ▁LIVED
- ▁HIGH
- ▁LEAVING
- ▁MINUTES
- ▁WALL
- ▁SURPRISE
- ▁DAVID
- ▁TWENTY
- ▁BIRD
- ▁NICK
- ▁REASON
- ▁OWN
- ▁STEVE
- ▁LADY
- ▁COMES
- ▁STATION
- ▁DOLL
- ▁JADE
- ▁STAND
- ▁FAMOUS
- ▁PLAYED
- ▁TSHIRT
- ▁HUEY
- ▁SEA
- ▁SIX
- ▁REPORT
- ▁POPULAR
- ▁PICK
- ▁TONY
- ▁TINA
- ▁KIDS
- ▁WEATHER
- ▁TREES
- ▁TIFFANY
- ▁WONDERFUL
- ▁RING
- ▁SOMEWHERE
- ▁LIGHT
- ▁NOSE
- ▁AUDREY
- ▁CAMERA
- ▁GARDEN
- ▁SOCCER
- ▁PIG
- ▁FRESH
- ▁NOBODY
- ▁AMANDA
- ▁SURPRISED
- ▁STOPPED
- ▁CITY
- ▁KOREAN
- ▁HISTORY
- ▁STUDENTS
- ▁COOKING
- L
- ▁LOUD
- ▁LOSE
- ▁PINK
- ▁LIE
- ▁CRAYONS
- ▁HEALTH
- ▁HANDWRITING
- ▁JOIN
- ▁THROW
- ▁INFORMATION
- ▁DIFFICULT
- ▁SOMETIMES
- ▁BIKE
- ▁WOMAN
- ▁FLOWER
- ▁WORDS
- ▁GHOST
- ▁RICKY
- R
- ▁TEETH
- ▁SAYING
- ▁PIECE
- ▁DR
- ▁CHANGED
- ▁SIT
- ▁ARTICLE
- ▁ARM
- ▁BECOME
- ▁MONKEY
- ▁YEAH
- ▁JUDY
- ▁FOLLOW
- ▁ALSO
- ▁GAMES
- ▁BAND
- ▁COMPUTER
- ▁ANDRE
- ▁EATS
- ▁MATH
- ▁EXACTLY
- ▁ART
- ▁JUMP
- ▁FOODS
- ▁PRESENTS
- ▁RABBIT
- ▁SMELL
- ▁HEAVY
- ▁SWIM
- ▁RICHARD
- ▁GRASS
- ▁BOTHER
- ▁PANTS
- ES
- ▁ALMOST
- ▁HELPING
- ▁ZOO
- ▁SHOULDN
- ▁FAN
- ▁EGGS
- ▁ELLA
- ▁RESTAURANT
- ▁CHIPS
- ▁BIGGER
- ▁MONDAY
- ▁CATS
- ▁STUDYING
- ▁TONIGHT
- ▁BRADY
- ▁SERIOUS
- ▁FORGOT
- ▁VISIT
- ▁BUILDING
- ▁SET
- ▁HANDSOME
- ▁CLAUS
- ▁RALPH
- ▁COMPANY
- ▁SEAT
- ▁ANDREW
- ▁WITHOUT
- EN
- ▁MEAT
- ▁BOARD
- ▁CLASSES
- ▁FLY
- ▁BIT
- ▁ANGELA
- ▁POLICE
- ▁BET
- ▁FINISHED
- ▁EITHER
- ▁SKY
- ▁POLIA
- ▁EIGHT
- ▁AMAZING
- ▁INSIDE
- ▁SATURDAY
- ▁DINOSAUR
- ▁DEVERYTHING
- ▁BRUSH
- ▁VIVIEN
- ▁BREAKFAST
- ▁QUICKLY
- ▁HEAD
- ▁CAROL
- ▁EACH
- ▁BANANAS
- ▁JAZZ
- ▁OWEN
- ▁LEAVES
- ▁HELPED
- ▁WINTER
- ▁REAL
- ▁TRUTH
- ▁RIVER
- ▁ROAD
- ▁ANNA
- ▁INTERESTED
- ▁EVERYBODY
- ▁HIMSELF
- ▁TAKES
- ▁LADDER
- ▁BOTH
- ▁CLASSROOM
- ▁STUDIED
- ▁HALL
- MAS
- ▁STARTED
- ▁THO
- ▁REFUND
- ▁EARLY
- ▁MARK
- ▁TRIED
- ▁CRY
- ▁CUP
- ▁DEAL
- ▁LEGS
- ▁PARTNER
- ▁NINE
- ▁MONTH
- ▁CRYSTAL
- ▁MRS
- ▁WHOM
- ▁QUIET
- ▁TICKET
- ▁TRYING
- ▁JELLY
- ▁TEST
- ▁OFFICE
- ▁BICYCLE
- ▁HOSPITAL
- ▁POOL
- ▁DOGS
- ▁LIVES
- ▁NOISY
- ▁TASTE
- ▁FEET
- ▁PASTA
- ▁HANS
- AL
- ▁PAST
- ▁PRIZE
- ▁KEY
- ▁COUPON
- ▁TIMMY
- ▁AREN
- ▁MEMO
- ▁TEACHE
- ▁PRACTICING
- ▁ANIMAL
- ▁MOUTH
- ▁WORLD
- ▁UNDER
- ▁WATCHING
- ▁FELL
- ▁DRIVE
- ▁BEACH
- ▁CLEAR
- ▁JOKES
- ▁GAVIN
- ▁ADD
- CLOCK
- ▁HELPER
- ▁JULIE
- ▁WEIRD
- ▁SINCE
- ▁MILLER
- ▁TIE
- ▁FRUITS
- ▁HOUR
- ▁ANIMALS
- ▁TWICE
- ▁WARM
- ▁LARGE
- ▁UNTI
- ▁JAMES
- ▁DOLLARS
- ▁STORIES
- ▁MEAL
- ▁APPLES
- ▁CRYING
- ▁DIET
- ▁HEADPHONES
- ▁MEMORI
- ▁COMPLIMENT
- ▁TRIANGLE
- ▁DIARY
- ▁TOWER
- ▁EYE
- ▁SALE
- ▁BUILT
- ▁CARROT
- ▁ORDERED
- ▁ITEM
- ▁SLOW
- ▁NAOMI
- ▁TUESDAY
- ▁SENSE
- ▁PARENTS
- ▁GIV
- ▁BUSINESS
- ▁EVER
- ▁TYLER
- ▁FORWARD
- ▁CELL
- ▁SHUT
- ▁COAT
- ▁PRINCE
- ▁HATE
- ▁PUPPET
- ▁FULL
- ▁WOULDN
- ▁TERRIBLE
- ▁CARDS
- ▁MAP
- ▁STAMP
- ▁SNACK
- ▁SNOW
- ▁RUBY
- ▁SLOWLY
- ▁EDDY
- ▁EASILY
- ▁LAZY
- ▁BLOCKS
- ▁EARS
- ▁COLORS
- ▁TTEOKBOKKI
- ▁CAREFULLY
- ▁MARRIED
- ▁VILLAGE
- ▁HEADACHE
- ▁MOUNTAIN
- ▁PETER
- ▁FAT
- ▁MARRY
- WEEN
- ▁RYAN
- ▁DISHES
- ▁JIM
- ▁FIELD
- ▁CINDY
- ▁FEW
- ▁STARS
- ▁UMBRELLA
- ▁GROW
- ▁FROG
- ▁RULER
- ▁BASKETBALL
- ▁PART
- ▁ORLANDO
- ▁CORRECT
- ▁GRANDPA
- ▁ADVICE
- ▁ARMS
- SE
- ▁PHOTOS
- ▁KICKBOARD
- ▁JACOB
- ▁DANGER
- ▁BOOTS
- ▁GIANT
- ▁BATH
- ▁VISITOR
- ▁PROMISED
- ▁SNAKE
- ▁GLASS
- ▁RAISE
- ▁SPICY
- ▁TURNED
- ▁MEETING
- ▁VIOLIN
- ▁MINUTE
- ▁DAISY
- ▁BUTTON
- ▁OTHERS
- ▁DELIVERY
- ▁WASN
- ▁JOGGING
- ▁SOFA
- ▁FINGERS
- ▁NICOLE
- ▁TALLER
- ▁RUNS
- ▁BENJAMIN
- ▁GOLD
- ▁LUCAS
- ▁SNOWMAN
- ▁LOVED
- ▁SANDWICH
- ▁STRAIGHT
- ▁AGAINST
- ▁BALLOONS
- ▁KEPT
- ▁CLOSED
- ▁PENS
- ▁MAX
- ▁LEG
- ▁FILL
- ▁QUIT
- ▁ANYBODY
- ▁JEFF
- ▁ANN
- ▁EVAN
- ▁MISSED
- ▁TAEKWONDO
- ▁JOY
- ▁PUSH
- ▁WOODWARD
- ▁ROSS
- ▁LISA
- ▁PULL
- ▁NECTAR
- ▁VASE
- ▁RABBITS
- ▁BOW
- ▁BUGS
- ▁SAFE
- GETTING
- ▁CASH
- ▁LAMP
- ▁DOLLS
- ▁YUMMY
- ▁MEDICINE
- ▁SPORTS
- ▁ENDS
- ▁BASEBALL
- ▁THROUGH
- ▁CENTER
- ▁FIGHTER
- ERS
- ▁PACKAGE
- ▁WORMS
- ▁SHAPE
- ▁DISAPPOINTED
- ▁PHILLIP
- ▁DINOSAURS
- ▁SALAD
- ▁HAMBURGER
- ▁COOKIES
- ▁PASS
- ▁CHEAP
- ▁STAGE
- ▁COLORED
- ▁TYPE
- ▁EVENING
- ▁CRIED
- ▁SHOWER
- ▁WALLET
- ▁FIFTEEN
- ▁HERO
- ▁USUALLY
- ▁GATE
- ▁TEAM
- ▁PLANE
- ▁DRESSES
- ▁SOLD
- ▁CRAYON
- LE
- ▁HIDE
- ▁BODY
- ▁MEN
- ▁HAIRSTYLES
- ▁BOAT
- ▁WONDER
- ▁RAIN
- ▁FEELS
- ▁NERVOUS
- ▁CHILD
- ▁MIRROR
- ▁BUG
- ▁LONGER
- ▁LOUIS
- ▁AIR
- ▁STOMACHACHE
- ▁ASKING
- ▁OWNER
- ▁KNEW
- ▁BELT
- I
- ▁MAGAZINE
- ▁HOP
- ▁SUGAR
- ▁END
- ▁TAKING
- ▁LIGHTS
- ▁EMPTY
- ▁PUPPY
- ▁DUCK
- ▁SUPERMARKET
- ▁APARTMENT
- ▁ADDRESS
- ▁MACHINE
- ▁JASON
- ▁CARRY
- ▁DRY
- ▁EXCITING
- ▁BOTTLE
- ▁RIDING
- ▁CHARCOAL
- ▁TRAVIS
- ▁UGLY
- ▁CAUGHT
- ▁PROBAB
- ▁PROJECT
- ▁LISTENING
- ▁JUGGLE
- ▁ROPE
- ▁BILL
- ▁HOURS
- ▁MOLLY
- ▁SOPHIE
- ▁WEARS
- ▁LIFE
- ▁CAFE
- ▁HURTS
- ▁RELAX
- ▁TED
- ▁COPY
- ▁COTTON
- ▁ALONG
- ▁OFFER
- ▁DATE
- ▁LI
- ▁YOUTUBE
- ▁JOKE
- ▁BARREL
- ▁DIED
- ▁SINGS
- ▁SEVERAL
- ▁TALENT
- ▁CARTER
- ▁PASSWORD
- ▁CASE
- ▁SCISSORS
- ▁YORK
- ▁FANTASTIC
- ▁CLOUDY
- ▁ROUND
- ▁BUILD
- ▁PRINCESS
- ▁RAINY
- ▁GRAPES
- ▁SKIRT
- ▁LION
- ▁FASTER
- ▁FASHION
- ▁AD
- ▁EXPLAIN
- ▁DOCK
- ▁MATCH
- ▁BOMB
- ▁STADIUM
- ▁WOODS
- ▁FALL
- ▁MAD
- ▁TRUCK
- ▁STEP
- ▁ANSWERS
- ▁KIDDING
- ▁MOON
- ▁BEAN
- ▁PICKED
- ▁LESSON
- ▁KNOWN
- ▁HAPPENING
- ▁BLUEBERRIES
- ▁SANDWICHES
- ▁BUTTER
- ▁BEDROOM
- ▁ABOVE
- ▁LEGO
- ▁HELENA
- ▁FOOTPRINT
- ▁SHIP
- ▁TAP
- ▁HILL
- ▁CHURCH
- ▁GOODBYE
- ▁LEMON
- ▁HUNDRED
- ▁COWARD
- ▁ARRIVED
- ▁WATERMELON
- ▁BOXES
- ▁FINALLY
- ▁MAIN
- ▁KEVIN
- BINGO
- ▁BONES
- ▁SPOKE
- ▁DONUTS
- ▁HENNA
- ▁LETTERS
- ▁PAM
- ▁LESS
- ▁WEDDING
- ▁POCKET
- ▁SHY
- ▁NOWHERE
- ▁MIC
- ▁NAMES
- ▁SONGS
- MED
- ▁DECIDED
- ▁KITCHEN
- ▁SHINING
- ▁LOVELY
- ▁SEASON
- ▁STEAK
- ▁DRUM
- ▁TEDDY
- ▁SHINY
- ▁GIRLS
- ▁AUDITION
- ▁ACTING
- ▁NECK
- ▁ROSA
- ▁SNEAKERS
- ▁SHOE
- ▁QUITE
- ▁HOTEL
- ▁LEATHER
- ▁WIND
- ▁COUSIN
- ▁JANET
- ▁ONIONS
- ▁DEAD
- ▁PROUD
- ▁PET
- ▁HELPFUL
- ▁TOILET
- ▁FORTY
- ▁JAKE
- ▁BUTTERFLY
- ▁KICK
- ▁BIRDS
- ▁ABROAD
- ▁TEA
- ▁STARTS
- ▁MEALS
- ▁AIRSHIPS
- ▁SOFT
- ▁MATT
- ▁BLANKET
- ▁WINDY
- ▁PLAYS
- ▁COVER
- ▁WEIGHT
- ▁PURPLE
- ▁HIDING
- ▁TAGS
- ▁F
- ▁WHATEVER
- ▁AIRSHIP
- ▁LIVING
- ▁MAT
- ▁KINDERGARTEN
- ▁POND
- ▁LAUNDRY
- O
- ▁NOTEBOOK
- ▁HELEN
- ▁SWEATER
- ▁TEACHING
- ▁FAULT
- ▁SQUARE
- ▁HONEST
- ▁LOUDER
- CAME
- ▁3
- ▁DROP
- ▁GUY
- ▁GIRLFRIEND
- ▁RAINING
- ▁SPIDER
- ▁FLYER
- ▁WATCHED
- ▁B
- ▁LOW
- ▁COUSINS
- ▁OLDER
- DY
- ▁ROCK
- ▁MOMENT
- ▁SHEET
- ▁LAUGH
- ▁BLUEBERRY
- ▁NEIGHBORHOOD
- ▁GRADE
- ▁STICKER
- ▁OPENING
- ▁ALRIGHT
- ▁OFFICER
- ▁PI
- ▁WEDNESDAY
- ▁BITE
- ▁CONTINUE
- TIME
- ▁SAIN
- ▁COSTUME
- ▁MOVED
- ▁BOOKCASE
- ▁DENTIST
- ▁STOPS
- ▁SAM
- ▁APRIL
- ▁THIRSTY
- ▁MOOD
- ▁PEA
- ▁ENTRY
- ▁SERVICE
- ▁ABLE
- ▁FRIED
- ▁W
- ▁FLASH
- ▁KATRINA
- ▁REPAIR
- ▁TI
- ▁GIMBAP
- NDA
- ▁ANNIVERSARY
- ▁NAMED
- ▁WRITTEN
- ▁CUSTOMERS
- ▁COLLECT
- ▁BONGOS
- ▁EGG
- ▁BAT
- ▁RIBS
- ▁SAT
- ▁RETURN
- LIGHT
- BACK
- CA
- NESS
- ▁FACES
- ▁CALLING
- ▁HOLIDAY
- ▁HOLE
- ▁MILLION
- ▁DELIVER
- ▁10
- ▁TAXI
- ▁HASN
- ▁MINDS
- ▁DONALD
- ▁MISTAKES
- ▁SPRING
- ▁MENTION
- ▁NEITHER
- ▁TOWEL
- ▁BEANS
- ▁WILLIAM
- ▁BRIGHT
- ▁STOMACH
- ▁CANDIES
- ▁BURGERS
- ▁FEAR
- ▁DECIDE
- ▁FEVER
- ▁FANS
- ▁STUDIO
- ▁LIAR
- ▁BREAKING
- ▁SLEPT
- ▁TAIL
- ▁BURGER
- ▁MOVIES
- ▁SMOKE
- ▁DANIEL
- ▁WAITER
- ▁PENCILS
- ▁CROSS
- ▁KOREA
- ▁GUARD
- ▁LEARNING
- ▁SUBWAY
- ▁CARS
- ▁SKIP
- ▁MIX
- ▁JEANS
- ▁LIST
- ▁POST
- ▁TRAVEL
- ▁BORROWED
- ▁AWESOME
- ▁RECORDER
- ▁FLOUR
- ▁COW
- ▁CAMPING
- ▁DRIVING
- ▁FELT
- ▁WINNER
- ▁CHARACTER
- ▁BALLOON
- ▁RIDDLE
- W
- FUL
- ▁NECKLACE
- ▁GLOVES
- ▁CHANGING
- ▁CRACKED
- ▁DROPPED
- ▁ROBERT
- ▁BAKERY
- ▁GRILL
- ▁INVITED
- ▁LAND
- ▁PORK
- ▁TELEPHONE
- ▁SKI
- ▁GUEST
- ▁AMBER
- ▁SHARP
- ▁KITE
- ▁DELI
- ▁MART
- ANNA
- ▁CIRCLE
- ▁FLYING
- ▁SHAKE
- ▁DANCER
- ▁POLICEMAN
- ▁DESSERT
- ▁SHOCK
- ▁BLOOD
- ▁MENU
- ▁BUMP
- ▁NOVEL
- ▁SKIN
- ▁SHOULDERS
- ▁MICHELLE
- ▁CROSSED
- ▁TICKETS
- ▁DRANK
- ▁OUTFIT
- ▁LAKE
- ▁PAINTER
- ▁ALIEN
- ▁RAINBOW
- ▁WORE
- ▁BAR
- ▁BROTHERS
- ▁DISH
- ▁SIMILAR
- ▁DISPLAY
- ▁GIRAFFE
- ▁FANCY
- ▁THIEF
- ▁HALLWAY
- ▁WAVE
- ▁CARROTS
- PE
- ▁ELDER
- ▁SOMEBODY
- ▁TRAFFIC
- ▁ACTOR
- ▁RUMORS
- ▁CHOSE
- ▁CAUS
- ▁DRESSED
- ▁ROSE
- ▁LYING
- ▁PANDA
- ▁PEAR
- ▁SUGGEST
- ▁DECISION
- ▁NOISES
- ▁TAKEN
- ▁GARLIC
- ▁CHINESE
- ▁ITCHY
- ▁SWORD
- ▁WAITED
- ▁NONE
- ▁SIZE
- ▁ACCEPT
- ▁CAPTAIN
- ▁GRAY
- ▁IDOL
- ▁SMALLER
- ▁USUAL
- ▁THOUSAND
- ▁LONELY
- ▁RETURNED
- ▁JENNY
- ▁PRACTICED
- ▁NEEDED
- ▁PAIN
- ▁RAP
- ▁THIN
- ▁EVERYWHERE
- ▁SUIT
- ▁BUSH
- ▁SON
- ▁COMPLIMENTS
- ▁FAILED
- ▁RUG
- ▁PAID
- ▁MANGO
- ▁BOYFRIEND
- ▁SCARF
- ELA
- ▁CROWD
- ▁ONLINE
- ▁GREW
- ▁SOCKS
- ▁SEAGULLS
- ▁USING
- ▁MELTED
- ▁OIL
- ▁ADULTS
- ▁KATE
- ▁WHISTLING
- ▁PRAY
- ▁POOR
- ▁SAUCE
- ▁PACKED
- ▁HATS
- ▁BUYING
- ▁AGO
- ▁SCIENCE
- ▁TUNNEL
- ▁DRESSING
- ▁MISSING
- ▁FESTIVAL
- ▁THURSDAY
- ▁PAIR
- ▁SITTING
- ▁SUITCASE
- ▁SHAPES
- ▁WILLY
- ▁HUGE
- ▁SHOUTED
- EVER
- ▁FAIR
- ▁TASTES
- ▁CAFETERIA
- ▁BINGO
- ▁BEGINS
- ▁DOLLAR
- ▁GRILLING
- ▁ALIVE
- ▁DINO
- ▁LIFT
- ▁TOP
- ION
- ▁STUFF
- ▁FROZEN
- ▁ACROSS
- ▁SEOUL
- ▁FRIES
- ▁TAUGHT
- ▁VIDEO
- ▁CREDIT
- ▁HAPPENS
- ▁RACE
- ▁TOUR
- ▁SPAGHETTI
- ▁SWING
- ▁INVITATION
- ▁COUNTRYSIDE
- ▁STAIRS
- ▁HIGHER
- ▁RANGER
- BAG
- ▁PULLED
- ▁LIPSTICK
- ▁VALLEY
- ▁NAP
- ▁FUTURE
- ▁SILENT
- ▁SPEAKER
- ▁GIVEN
- ▁JUMPING
- ▁AUTUMN
- ▁HOLDING
- ▁BOB
- ▁PLANNING
- ▁SUPPOSE
- ▁CLUES
- ▁ANSWERED
- ▁STICK
- ▁WASHED
- ▁CURLY
- ▁RUINED
- ▁SMILING
- ▁UNHAPPY
- ▁KIMBAP
- ▁CAUSE
- ▁CHUNKMONS
- ▁REPEAT
- STOOD
- ▁8
- ▁SHEEP
- ▁LOUDLY
- ▁SLIDE
- ▁KING
- ▁LIME
- ▁SKATING
- ▁SERVE
- ▁SAND
- ▁POWER
- ▁MUSICIANS
- ▁RESTROOM
- ▁SOMEDAY
- ▁GYM
- ▁GOD
- ▁COOKIE
- ▁NUMBERS
- ▁WARNING
- ▁CLASSMATE
- ▁COMPLAIN
- ▁LAUGHED
- ▁BEES
- ▁SAFELY
- ▁DESIGNER
- ▁ORANGES
- B
- ▁RETURNS
- ▁SPEAKING
- ▁GINA
- ▁MARTI
- ▁FEELINGS
- MAN
- ▁TULIP
- ▁BAZAAR
- ▁EMAIL
- ▁STRAWBERRY
- ▁PRESS
- ▁SALT
- ▁PHEW
- ▁COWS
- ▁ENTRANCE
- ▁LEAF
- ▁PAN
- ▁SOUR
- ▁DISEASE
- ▁OPENED
- ▁LUGGAGE
- ▁SWIMSUIT
- ▁PASSED
- ▁ALISON
- ▁SHOVELS
- ▁SENTENCES
- ▁GROUND
- ▁STAYING
- ▁SALES
- ▁JAM
- ▁WRAP
- ▁LATELY
- ▁SHRIMP
- ▁TWELVE
- ▁CHEAPER
- ▁CHECKING
- ▁SEAWEED
- ▁LO
- ▁TURTLES
- ▁DNN
- ▁WHE
- ▁ACT
- ▁LIZARD
- ▁SUCCEED
- ▁STRING
- ▁BASKET
- ▁HINT
- ▁VEGETABLES
- ▁FOOL
- ▁SHOT
- ▁ADULT
- ▁GREG
- ▁TASTY
- ▁FARM
- ▁LIPS
- ▁STARFISH
- ▁NAILS
- C
- ▁FR
- ▁TEARS
- ▁SUPERSTAR
- ▁CLEANS
- ▁HEAT
- ▁SILLY
- ▁WIG
- ▁BELLA
- WOKE
- ▁5
- ▁BOYS
- IVA
- ▁IMAGINE
- ▁LAUGHING
- ▁WASHING
- ▁FLAT
- ▁STICKERS
- ▁PRETTIER
- ▁KILL
- ▁FLIGHT
- ▁WOMEN
- ▁MOMMY
- ▁CAMP
- ▁MEMBERS
- ▁CUSTOMER
- ▁E
- ▁SINGERS
- 'ON'
- ▁CONTROL
- ▁TIGER
- ▁ZEBRA
- ▁IMPOSSIBLE
- ▁CONSOLE
- ▁CLUE
- ▁FOLD
- ▁BEE
- ▁ANDY
- ▁SEATS
- ▁POUND
- ▁SANG
- ▁DIAMOND
- ▁BATS
- ▁ARTIST
- ▁BABIES
- ▁GARAGE
- ▁INSTEAD
- ▁OLDFASHION
- ▁GIFTS
- ▁RODE
- BIG
- ▁MOUNTAINS
- ▁THUNDER
- ▁DONKEY
- ▁PIGEON
- ROOM
- ▁WORSE
- ▁HAMBURGERS
- ▁ERASER
- ▁TAMBOURINE
- ▁BREATH
- ▁ANNOYED
- ▁HALLOWEEN
- ▁KNOCK
- ▁STUPID
- ▁BANDAGE
- ▁PINEAPPLE
- OUT
- ▁SALTY
- ▁POTATO
- ▁MILES
- ▁COMMENT
- ▁TREATED
- ▁EAR
- ▁SLEDDING
- ▁VIOLET
- ▁BOTTLES
- ▁BRILLIANT
- ▁AUNTIE
- ▁SPEND
- ▁REACH
- ▁PAYING
- ▁APOLOGIZE
- ▁CORNER
- ▁FORGIVE
- ▁RELIEF
- ▁BEHAVE
- ▁DIE
- ▁PRETTIEST
- ▁H
- ▁HEN
- ▁POUR
- ▁NEEDLE
- ▁WORRIES
- ▁LARGER
- ▁CRAZY
- TYFIVE
- ▁DISCOUNT
- ▁HEADED
- ▁TWENTYFIVE
- ▁SOMETIME
- ▁REPORTER
- ▁FEED
- ▁KIMCHI
- ▁TENNIS
- ▁DOLPHIN
- ▁SUNGLASSES
- ▁THREW
- ▁COUNTRY
- ▁HUSBAND
- ▁JAPAN
- ▁TOMATOES
- ▁OK
- ▁POET
- ▁LUKE
- ▁LEND
- ▁LOWER
- ▁SHOVEL
- ▁AMERICA
- ▁BLOSSOMS
- OH
- K
- ▁SAFETY
- TALK
- ▁ASLEEP
- ▁MINER
- ▁PERIOD
- ▁STORYBOOK
- ▁BOWLS
- ▁DOUBT
- ▁MEMORY
- ▁SKINNY
- ▁EARTHQUAKE
- ▁2
- ▁BALLS
- ▁POTATOES
- ▁TROUSERS
- ▁WAR
- ▁FUR
- ▁RUMOR
- ▁CONGRATULATIONS
- ▁EASYGOING
- ▁NURSE
- ▁FLIES
- ▁GROWING
- ▁SMILES
- ▁CHOICE
- ▁ERASE
- ▁COMFORTABLE
- ▁GUIDE
- ▁PE
- ▁CLEVER
- ▁PEACE
- ▁AFTERSCHOOL
- ▁SOAP
- ▁POPCORN
- ▁SUNBLOCK
- ▁INVITE
- ▁AWAKE
- ▁FEMALE
- ▁HIKING
- ▁FOLLOWED
- ▁BUMPER
- ▁FILLED
- ▁HIPPO
- ▁COMEDIAN
- ▁SILK
- ▁COST
- IES
- ▁AWFUL
- ▁SIBLING
- ▁PIES
- ▁BURNING
- ▁CRASH
- ZIPPED
- ▁SPACE
- ▁LYRICS
- ▁HANDMADE
- ▁PER
- ▁ROUGH
- ▁THROWING
- ▁STATIONERY
- ▁WORM
- ▁PAGE
- ▁CLASSMATES
- ▁EXAM
- ▁FINAL
- ▁BLOW
- ▁CHINA
- U
- TH
- ▁BATTER
- ▁HONEY
- ▁MISTAKEN
- ▁DEPARTMENT
- GREAT
- ▁SHIRTS
- ▁COMPETITION
- ▁YOGURT
- MBER
- ▁DRINKS
- ▁WOLF
- ▁ISLAND
- ▁GROCER
- ▁SHARON
- ▁BREATHE
- ▁ANNOYING
- ▁LIED
- ▁SPA
- ▁KANGAROOS
- ▁ALIKE
- ▁PENGUIN
- ▁BRIGHTCOLORED
- ▁4
- ▁MESSAGES
- ▁INVENTION
- ▁WIPE
- BIRD
- ▁PRECIOUS
- ▁FLEW
- ▁CH
- ▁APART
- ▁MIDNIGHT
- ▁SPEN
- ▁SHELLS
- ▁GIN
- ▁NATURAL
- ▁THIRD
- ▁BADLY
- ▁PLATES
- ▁JOSHUA
- ▁MIDDLE
- ▁SWEAT
- ▁TOES
- ▁TIP
- ▁TEASE
- ▁BOOKSHOP
- ▁COUGHING
- ▁GUN
- ▁WASTE
- UMOR
- AR
- ▁SPREAD
- ▁GOAT
- ▁SPROUTS
- ▁BALLET
- ▁SNAKES
- ▁SCRATCHED
- ▁AMONG
- DANGER
- KGO
- NISH
- ▁FEE
- ▁JANE
- ▁TEMPER
- ▁CROWDED
- ▁BONO
- ▁CHEF
- ▁SAMPLE
- ▁LIONS
- ▁RULES
- ▁DREW
- ▁WORTH
- ▁MAGICIAN
- ▁GLUE
- ▁TOUGH
- ▁TOUCHE
- ▁TUNA
- ▁BAKE
- ▁LAUGHTER
- ▁HALF
- ▁HELMET
- ▁UH
- ▁COPIES
- ▁DIFFERENCE
- ▁FORK
- ▁STARTING
- ▁CRIES
- ▁SPROUT
- SNOW
- ▁SCARE
- ▁DRUMS
- ▁PHANTOPIA
- ▁VOUCHER
- ▁FARMER
- ▁CHANGES
- ▁SPILL
- AN
- ▁COMPLETELY
- ▁PRACTICES
- CHAIR
- ▁MISSE
- ▁RACHEL
- ▁SEEK
- EST
- ▁SISTERS
- ▁BLAME
- ▁PACK
- ▁BOIL
- ▁REQUEST
- ▁SH
- ▁WIRE
- ▁POT
- ▁ONION
- ▁CLOSER
- ▁MICE
- ▁SCRATCH
- ▁DUCKS
- THANK
- ▁RECEIVE
- ▁CABBAGE
- ▁SEEDS
- ▁JEJU
- ▁SUDDENLY
- RAY
- ▁KIWI
- ▁POWDER
- ERRY
- ▁MESSY
- ▁RID
- ▁CHAMPION
- ▁ARGUE
- ▁RECIPE
- ▁MICROPHONE
- ▁SCOLDED
- TRY
- ▁STRONGER
- ▁EXPECT
- ▁WEEKS
- AKER
- ▁JUMPED
- ▁RAINS
- ▁OREPHIA
- ▁PIGS
- LOSING
- ▁PRAYING
- ▁DUE
- ▁SOUTH
- ▁PUNCH
- ▁CREATIVE
- ▁FINISHING
- ▁HARMONI
- ▁CLOWN
- ▁SALON
- ▁SINK
- H
- ▁TOOL
- ▁ALARM
- VISION
- GY
- ▁FAIL
- ▁DRAWER
- ▁HAIRBAND
- ▁X
- ▁ARTICLES
- ▁DEEP
- ▁EARLIER
- ▁EXTRA
- ▁DOWNTOWN
- ▁LEFTHAND
- PTER
- ▁NOODLES
- ▁CONSIDER
- ▁ACCOUNT
- ▁DEER
- ▁SEAN
- RABBITS
- TY
- ▁CREAMS
- ▁LUCY
- ▁BOUN
- ▁HORNS
- EMENT
- ▁NOON
- ▁SMILED
- ▁NINETEEN
- ▁TURNS
- ▁MUFFLER
- ▁ROAR
- ▁HARDLY
- ▁SPELLED
- ▁SPOTS
- ▁SHORTS
- ▁JUMPS
- ▁RECENTLY
- ▁STOLEN
- ▁WITHIN
- ▁ENGLAND
- ▁PENDANT
- ▁MARY
- ▁AMUS
- ▁SERIOUSLY
- ▁FALLS
- ▁SPOONS
- ▁SAVED
- ▁STOLE
- ▁STUCK
- ▁G
- ▁DUMPLINGS
- ▁GERMAN
- ▁PLACES
- ▁OCARINA
- ▁QUEENSTEIN
- ▁BRANDON
- ▁DWARFS
- ▁TOFU
- ▁SPRAY
- PARD
- ▁CROSSING
- ▁PIGEONS
- ▁NOTICE
- CE
- LTY
- ▁BASEMENT
- ▁TABLET
- ▁COUPONS
- ▁PROGRAM
- ▁SOCK
- ▁GUI
- ▁NUT
- ▁OLIVE
- ▁PREFER
- ▁MUSHROOM
- ▁FIGHTING
- ▁DENERGY
- ▁STORAGE
- ▁POLITE
- IST
- ▁KICKBOARDS
- GAGE
- ▁DROWN
- ▁MANAGE
- ▁DRIVER
- P
- ▁WEEKENDS
- ▁SHOULDER
- ▁MUD
- ▁SEVENTY
- ALLY
- ▁POSTCARD
- ▁PIECES
- ▁HICCUPS
- ▁CHARACTERS
- ▁CLEANING
- ▁DIS
- ▁JG
- ▁JOSEPH
- ▁TITLE
- ▁CDS
- ▁BOSTON
- ▁BRACELET
- ▁PERMISSION
- ▁STEW
- ▁RAT
- ▁SKATE
- ▁CHEST
- ▁FOOT
- ▁CLIMB
- ▁AUDIENCE
- ▁DUFAR
- ▁GRANDPARENTS
- ▁FIT
- ▁TOUCHING
- ▁ELEPHANTS
- ▁TSHIRTS
- ▁APPOINTMENT
- ▁FOREVER
- ▁STARVING
- ▁LESSONS
- ▁COUPLE
- ▁TOTO
- ▁DRINKING
- ▁ARRIVE
- ▁GREE
- ▁SPOT
- ▁HELD
- ▁EARTH
- ▁DAUGHTER
- ▁SLICE
- ▁CASTLE
- ▁FEEDING
- ▁COVERED
- ▁FAM
- ▁AGE
- ▁AUSTIN
- ▁DEAR
- ▁NATI
- ▁CELEBRATE
- ▁MEATBALLS
- ▁STRETCH
- ▁SOLVE
- ▁USEFUL
- ▁SCAR
- DDING
- ▁ALLERG
- ▁RINGING
- ▁SAILING
- ▁SNOWING
- ▁LATEST
- ▁LIES
- ▁ACADEMIES
- ▁MUSICIAN
- ▁STA
- ▁FROGS
- ▁STOMP
- ▁KEYBOARD
- ▁FAIRY
- ▁CLAP
- ▁HAM
- ▁TOWARDS
- ▁RESERVATIONS
- ▁SHOUT
- SORRY
- ▁PUPPIES
- ▁WEAK
- ▁ORIGINAL
- ▁RESPECT
- ▁TABLES
- ▁COMPUTERS
- ▁TOWELS
- ▁CRAFTSMEN
- ▁ELE
- ▁REPAIRED
- ▁PRINT
- ▁BLOOM
- ▁WISELY
- ▁SCOLD
- ▁TWINKL
- ▁CANCEL
- ▁KIM
- ▁STAINED
- ▁LAP
- ▁DRI
- ▁SHARK
- ▁KANGAROO
- MENTARY
- THEY
- ▁DALLAS
- ▁SEESAW
- ▁WHISPER
- CAL
- ▁DWARF
- ▁SUNDAYS
- ALK
- ▁DOUBLE
- ▁SHAKING
- ▁PREPAR
- ▁YOYO
- ▁SKILLS
- ▁OCTOPUS
- ▁INSTRUMENTS
- ▁MAIL
- ▁ALIENS
- ▁JESSI
- ▁CHERRY
- ▁INCONVENIENCE
- ▁CERTAIN
- ▁BEEF
- CON
- 'OFF'
- ▁GATHERED
- ▁PRODUCTS
- CONVENIENCE
- ▁RESTAURANTS
- ▁MONKEYS
- ▁FIGURE
- ▁QUICK
- ▁GAIN
- ▁PENALTY
- ▁INLINE
- ▁INTRODUCE
- ▁OVERSLEPT
- ▁POL
- ▁HOWEVER
- ▁GORILLA
- ▁MEMBER
- ▁PLU
- ▁ANGER
- ▁AQUARIUM
- ▁GAS
- ELY
- ▁TIES
- ▁PUNISHED
- ▁CUCUMBERS
- ▁TINY
- ▁RISE
- ▁GHOSTS
- ▁WIFE
- MOND
- ▁RARE
- ▁BARN
- ▁SMELLY
- GAN
- ▁REASONS
- ▁BURNED
- ▁ANNOUNCE
- ▁CAPSULES
- ▁PICNIC
- ▁GLOVE
- FF
- RANCE
- ▁TREAT
- ▁JOG
- ▁BULLS
- ▁JJAKGUNG
- ▁PROVE
- ▁BAGS
- ▁RUDOLPH
- ▁MC
- ▁TRICKS
- RIOR
- ”
- ▁HAPPILY
- ▁REMIND
- ▁DIVER
- BE
- ▁HATES
- ▁SPOON
- ▁SIZES
- ▁THROAT
- ▁UN
- CRAFTS
- ▁BRIDGE
- ▁CONFUSED
- DONALD
- KEEPER
- ▁SIBLINGS
- ▁DENNIS
- ▁EMBARRASSED
- ▁PATRICK
- DWARFS
- ▁PREGNANT
- ▁VOTE
- ▁WHIPPED
- ▁10000
- ▁SUPPORT
- ▁TOOTH
- ▁STANDING
- ▁CLOSET
- ▁NEEDLES
- ▁SWEEP
- ▁RAISED
- ▁PEE
- ▁CONTACT
- ▁JEALOUS
- ▁SURVEY
- BOX
- ▁CROSSWALK
- ▁WALKING
- ▁SOP
- ▁SITE
- ▁OWE
- ▁FOURTEEN
- ▁PLANTING
- ▁CHANNELS
- ▁WIGGL
- ▁OURSELVES
- ▁SCENE
- ▁BAS
- ▁LETTUCE
- ▁NICKNAME
- ▁GRABB
- ▁ELEVATOR
- ▁COP
- ▁FALLING
- ▁DESERVE
- ▁FILM
- ▁SOPHOMORE
- ▁WOUND
- ▁PROTEST
- ▁PEACHES
- ▁CHILL
- ▁COURT
- ▁ROOF
- ▁CHARGE
- ▁FINGER
- ▁HANBOK
- ▁TAPDANCE
- ▁JAPANESE
- ▁MELON
- ▁BATTLE
- ▁LEAS
- ▁PARTS
- BATHING
- ▁CRUNCHY
- ▁PAUL
- ▁WHISTLE
- ▁CAKES
- ▁HEAL
- ▁SHELL
- ▁GUM
- ▁CARPENTER
- ▁HEAVILY
- ▁N
- ▁LEMONS
- ▁HARDER
- ▁ROW
- ▁STEAM
- ▁STUDIES
- ▁LOTTERY
- ▁BITTER
- ▁MOW
- ▁EATEN
- ▁SPORT
- ▁SHORTER
- ▁STEAL
- ▁GRADUATE
- ▁PUZZLE
- ▁CEREMONY
- ▁RAINCOAT
- ▁KISS
- HAP
- WAY
- ▁DEPART
- ▁LANGUAGE
- ▁BITTEN
- ▁BUSAN
- ▁L
- ▁TIGHT
- ▁BELOW
- ▁PERFECTLY
- KE
- ▁NATURE
- ▁MISUNDERST
- ▁CLOUD
- ▁DRAG
- ▁CARTOON
- ▁COCONUT
- ▁GOLF
- ▁THIRTEEN
- ▁DYING
- ▁PETE
- ▁MALL
- ▁BIN
- ICAL
- ▁ALIB
- ▁BREEZE
- ▁FRENCH
- ▁DATING
- ROW
- ▁WATERING
- ARD
- ▁DESERT
- ▁PRAISE
- ▁INTERNET
- ▁STRICT
- ▁MOSQUITOES
- TLE
- ▁SKILL
- ▁BEHAV
- ▁KTX
- ▁LONDON
- ▁TASTING
- ▁VAN
- ▁COUGHED
- ▁NICELY
- ▁HARM
- ▁BOOKSHELF
- ▁CRICKET
- ▁EDGE
- ▁PILLOW
- ▁RECTANGLE
- ▁STRESS
- ▁FOOTBALL
- ▁LAW
- ▁CHOPSTICKS
- WHAT
- ▁TWINS
- ▁AUSTRALIA
- ▁LAMB
- ▁MAYO
- ▁DESIGN
- ▁BLEW
- ▁GLORY
- ▁ROCKCLIMBING
- ▁DUTY
- ▁ENTERTAINMENT
- ▁THEMSELVES
- ▁YOG
- ▁BUCKET
- ▁BIRTH
- ▁FALSE
- ▁PATTERN
- ▁THREAD
- ▁SOLDIER
- ▁BATTERY
- ▁KNEES
- ▁HEADS
- ▁DELIVERED
- ROUTE
- ▁SIMPLE
- ▁WATERFALL
- ▁SWITCH
- ▁EFFORT
- ▁UNUSUAL
- ▁SLIPPED
- ▁REG
- ▁SUITS
- ▁CHANNEL
- ▁MINI
- ▁PLASTIC
- ▁RECOMMEND
- ▁RUBBER
- ▁THANKFUL
- ▁ROLL
- ▁SOLV
- ▁CLAPS
- ▁BUD
- ▁CINEMA
- ▁SHELF
- ▁LOSS
- ▁WOMANS
- ▁CANADA
- ▁EXPRESS
- ▁SHARING
- ▁LOOSEN
- ▁CHOCO
- ▁RUNNY
- ▁REPL
- ▁BOWL
- ▁FULLY
- ▁SOMEHOW
- ▁UNIQUE
- ▁CARES
- ▁NOODLE
- ▁JETLAG
- ▁LAPTOP
- ▁TOOTHPASTE
- ▁JON
- ▁AIRPORT
- ▁JOO
- YER
- ▁CAP
- ▁HOLLY
- ▁JOHNSON
- ▁ZERO
- ▁LEADER
- ▁OX
- ▁SQUEEZE
- PY
- GET
- ▁FIN
- ▁ZIP
- ▁SEPTEMBER
- ▁TEMPERATURE
- THIRTY
- ▁GOODLOOKING
- ▁GUAR
- ANTEE
- ▁LOG
- ▁WILD
- ▁BOOTH
- ▁PEPPERS
- ▁FORGOTTEN
- BALL
- ▁AB
- CALORIE
- ▁POLICY
- ICO
- ▁INCLUDED
- ▁LIGHTEN
- ▁BLAMED
- ▁LONGTIME
- OOD
- ▁JEAN
- ▁DECK
- ▁MANNER
- ALTH
- ▁PERSONALLY
- TRUCK
- PT
- ▁GUT
- ▁CRASHED
- ▁FLO
- ▁REACT
- ▁ABSENT
- KYO
- ▁BLUSH
- ▁DONATE
- DOCK
- ▁COMPLAINING
- ▁DESCRI
- ▁GEORG
- ▁RECOVER
- ▁WALNUT
- ▁LUNG
- ▁BUDDY
- ENSE
- ▁PASSES
- ▁PLUM
- HALF
- ▁SE
- ▁TURTLE
- ▁FRANC
- ▁KOALA
- ▁TURKEY
- ▁CARPET
- ▁ANYWHERE
- ▁R
- ▁SKIING
- ▁FOCUS
- ▁HARV
- ▁JANUARY
- ▁PRESIDENT
- ▁TWENTYONE
- ▁WRESTLE
- ▁CANCER
- ▁CHEATING
- ▁HOMEMADE
- ▁WEEKDAY
- ▁K
- THER
- ▁DREAMS
- ▁APPRECIATE
- ▁BRAIN
- ▁SAUSAGES
- SOMETHING
- GAR
- ▁SMOOTH
- ▁SLIM
- ▁FENCE
- JURY
- LIES
- ▁SPIDERS
- EADLINE
- EVEREST
- ▁SCORES
- ▁JOKING
- ▁REJECT
- ▁STEPMOTHER
- ▁CRIM
- ▁DIGGING
- ▁QUEEN
- ▁MALE
- ▁SNORES
- ▁EXPLAINED
- ▁HOUSEWORK
- ▁BEDTIME
- BEAT
- WORKING
- ▁SMELLING
- ▁GRAPE
- ▁INSTRUCTIONS
- ▁SUNSCREEN
- ▁WORKDAY
- ▁HOLES
- ATER
- UP
- RIDA
- ▁VINE
- ▁HERSELF
- ▁NIGHTMARE
- ▁SNAP
- ▁INSU
- ▁BURNS
- GIV
- ▁MOUNT
- ▁NEGATIVE
- ▁ADVANTAGE
- ▁DIFFICULTIES
- ▁7
- ▁REMAINS
- CHECK
- ▁TRAVELING
- ▁IMAGIN
- G
- ▁BENNY
- ▁JOHN
- ▁ATHLET
- ▁COOPE
- ▁DICTIONARY
- ▁HAPPINESS
- ▁RAPPER
- ▁SLIPPERY
- ▁SUNRISE
- ▁TAPDANCING
- ORABLE
- ▁NOTICING
- ▁WAITLIST
- ▁CUCUMBER
- FTH
- ▁GUESTS
- ▁COLLEGE
- ▁STOCK
- HH
- ▁TALE
- POP
- ▁MEXIC
- ▁FREEZER
- ▁REFUSE
- ▁SWIMMER
- ▁THOUGHTFUL
- DIVING
- WORKED
- ▁COURAGE
- ▁ERRANDS
- ▁LISTENED
- ▁GRUM
- ▁WEB
- ▁TWEL
- GED
- ▁CABIN
- ▁REHEARSAL
- ▁SKETCHBOOK
- ▁DAYCARE
- ▁PARTIES
- OBBY
- ▁SEAL
- WHERE
- ▁ROSES
- INE
- ▁ACCIDENT
- ▁PERSONALITY
- ▁SPECIFIC
- ▁RINGS
- ▁BLOOMED
- ▁AW
- YARD
- ▁ENTERED
- ▁BELLY
- ▁FUNNIER
- ▁NARROWMINDED
- USY
- ▁JOURNAL
- ▁JER
- ▁PRICES
- BREAK
- ▁BILLS
- SOLUT
- ▁11
- ▁REFILL
- ▁BAKED
- ▁ALPHABET
- CONNECTED
- ▁GOATS
- ▁WASHE
- ▁CHOP
- PHLE
- ▁NONSENSE
- ▁WADDL
- ▁PETS
- ▁DECORATE
- LUSH
- ▁FORGETTING
- ▁EMILY
- ▁BICYCLES
- ▁SHOWN
- ▁BUCK
- ▁BAIT
- ▁100
- ▁MOVER
- ▁HEL
- ▁WINNING
- ▁ROCKET
- ▁FANG
- ▁CA
- ▁DEPRESS
- ▁BEAUTY
- ▁DAILY
- ▁ENGINEER
- ▁MUFFIN
- ▁WRITER
- ▁OPINIONS
- ▁TRACKS
- ▁PAUSE
- ▁PUZZLED
- URE
- SEY
- ▁WRAPS
- ▁SOCIAL
- ▁GRADES
- ▁WARMLY
- ▁YOYOS
- ▁CHEW
- ▁BULGOGI
- ▁BARKING
- ▁SENTENCE
- ▁THOUGH
- ▁POO
- ALIAN
- ▁EVE
- ICED
- ▁RAIS
- ▁DISTURB
- ▁ITSELF
- ▁ORIGAMI
- ▁TISSUE
- ▁JOHNNY
- ▁BURN
- ▁COOKS
- ▁CANDLE
- ▁OBVIOUS
- ▁SANDPAPER
- ▁SUPPLIES
- ▁CHEWY
- ATIONS
- ▁FLAVOR
- ▁KIWIS
- ▁MASTER
- ▁YELLING
- ▁CUPS
- ▁BL
- LAINE
- ▁STIMULAT
- ▁TIRES
- ▁PRETEND
- ▁CLEANED
- ▁RUSSIA
- ▁FRECKLES
- ▁FART
- ▁CHEETAH
- ▁RUDE
- ▁TRAINS
- ▁LOTTE
- ▁PAGES
- ▁POSTCARDS
- ▁KEYS
- ME
- ▁BOOKSTORE
- ▁HOST
- ▁SHORTCUT
- ▁SHOOTS
- ▁OPINION
- ▁APRON
- ▁COPIED
- LLOWED
- ▁STICKY
- ▁PREPARE
- ▁HEADQUARTERS
- ▁REPAIRS
- ▁WHALE
- ▁POOP
- ▁RESEMBLE
- ▁SHARE
- ▁LOLL
- ▁EXERCISES
- ▁PROGRAMS
- ▁BLINK
- ▁FLAG
- ▁LAY
- ▁FASTEST
- ▁SNEEZE
- ▁ENDED
- J
- ▁MARKER
- HER
- ▁ASSISTANT
- ▁CURRY
- ▁PURSE
- ▁SLIPPERS
- ▁UNDERSTANDING
- ▁PIT
- ▁INDOOR
- ▁CROWN
- ▁CURIOUS
- ▁SYSTEM
- ▁CABLE
- ▁MOSQUITO
- ▁PHARMACY
- ▁EVERLAND
- ▁WINDOWS
- ▁BOOGER
- ▁TIRING
- ▁PAPERS
- ▁PEANUT
- ▁PARDON
- ▁AH
- ▁FOX
- ▁RESELL
- ▁RESULT
- ▁TWIST
- ▁SLED
- ▁TALLEST
- ▁RIBBONS
- ▁RECEI
- ▁SQUIRREL
- ▁CUTLET
- ▁HEIGHT
- ▁HURTING
- ▁TRAP
- ▁WRAPPER
- ITED
- ▁FRIGHTENED
- ▁PATIENT
- ▁CANCELED
- ▁SHELVE
- ▁NET
- OOPS
- ▁MESS
- ▁MERRY
- ▁PLATE
- ▁COMPLAINT
- ▁SITUATION
- ▁PARIS
- ▁STRAW
- ▁DIVIDE
- ▁GOAL
- ▁SHRIMPS
- X
- SPECIAL
- GOTTEN
- F
- ▁COLLECTED
- ▁AFFORD
- ▁HUNG
- ▁CHAMBER
- ▁AIRPLANE
- ▁CHA
- ▁WALLS
- ▁REGULAR
- ▁EXPERIENCE
- ▁PILOT
- ▁250
- ▁LEMONADE
- ▁FURTHER
- ▁RAC
- IN
- ▁SWALLOW
- ▁CLOSING
- ▁CLASSROOMS
- ACK
- ▁RENT
- ▁ADS
- ▁TENTH
- ▁FRY
- ▁HOTDOG
- ▁ANGEL
- ▁PEACH
- ▁HIDDEN
- ▁GOOSE
- ▁SMALLEST
- ▁ROCKS
- ▁COOKED
- ▁CORN
- ▁SIGNS
- ▁ANXIOUS
- ▁LIGHTNING
- ▁SNOWBALL
- ▁BESIDE
- ▁ANTS
- ▁ALLOWANCE
- ▁COUNTRIES
- ▁POUCH
- ▁SLIP
- ▁POEM
- ▁RAMEN
- ▁ROLLING
- ▁PATIENTS
- ▁SCREEN
- ▁PRESENTATION
- ▁CAST
- ▁FLUTE
- ▁HU
- ▁ZEBRAS
- ▁COMPARE
- ▁WIDE
- ▁FORSYTHIA
- ▁SENIOR
- ▁DONATED
- ▁FACTS
- RD
- ▁FOG
- ▁ROLE
- ▁PEARS
- ▁BUTTONS
- COME
- ▁HAIRCUT
- ONDE
- ▁ENV
- ▁CHASED
- THE
- '4'
- ▁TRACK
- ▁STRANGER
- ASOL
- ▁CHIN
- ▁PUBLI
- ▁DUN
- ▁JUNE
- ▁20
- ▁DOUGHNUT
- ▁DADDY
- PORT
- ▁EMBARRASSING
- ▁UNCOMFORTABLE
- ▁FOREHEAD
- ▁RELATIVES
- ▁DOODLE
- ▁GENTLEMAN
- ▁TAPE
- ▁BANKER
- ▁ACTRESS
- ▁SORT
- ▁REDESIGN
- ▁GRADERS
- ▁KICKING
- ▁LA
- UK
- ▁BARBECUING
- ▁BULLY
- RATE
- ▁JUN
- ▁KOREANS
- ▁CORPORATION
- ▁HEAVIE
- ▁IMPROVE
- ▁OCEAN
- ▁LG
- ▁LAYER
- ▁BRIGHTLY
- ▁CRABS
- ▁PAR
- ▁BLANK
- ▁CALENDAR
- ▁CROCODILE
- ▁SALARY
- ▁CHUSEOK
- ▁CUTEST
- ▁NOR
- ▁MYSTER
- ▁BEND
- ▁INCLUDE
- ▁EXCELLENT
- ▁PAINFUL
- ▁SKEWERS
- ▁CHEERING
- SIZE
- BELT
- RCH
- ▁PLEASANT
- ▁PATH
- ▁QUALITY
- ▁STINGS
- ▁REPAIRING
- ▁DELAY
- ▁RIDES
- ▁ELSA
- ▁SECURITY
- ▁TWENTIETH
- ▁PC
- AH
- ▁NOTES
- RAL
- ▁NORMAL
- ▁DIRECT
- ▁CENT
- ▁APOLOGY
- ▁GARBAGE
- ▁GEE
- ▁WATCHES
- ▁SCISSOR
- ▁CULT
- ▁ECONOMY
- ▁SEASHELL
- ▁HA
- ▁HORSES
- ▁WHEELS
- BYE
- ▁HABIT
- ▁VI
- OOKIE
- ▁BAKING
- ▁CHERISH
- ▁JESUS
- ▁KLEA
- ▁PARTICIPATE
- ▁NICER
- ▁LISTING
- ▁SUPP
- IELD
- ▁CRISPY
- ▁EYESIGHT
- ▁TWITCH
- ▁WORST
- ▁GREETING
- ▁DRYER
- ▁LINES
- ▁DEPRESSED
- RENT
- ▁ROLLS
- LAND
- ▁DOCUMENT
- ▁COCKROACH
- ▁TAX
- ▁LIBER
- ▁FRIGHT
- ▁GARDENVIEW
- ▁JAR
- ▁ONESELF
- ▁PELICAN
- ▁RUSH
- ▁BAKER
- ▁EXPLODED
- ▁CARNATIONS
- ▁BUBBLES
- ▁BREAKS
- ▁EUROPE
- ▁EXCHANGE
- ▁SMASH
- ▁TORONTO
- ▁CEO
- ▁BLEEDING
- ▁IMAGINED
- ▁KIL
- ▁POU
- ▁TAB
- ▁CRUS
- OGRAMS
- ▁ALASKA
- ▁FROWNED
- MAIL
- TWINKL
- ▁SINGLE
- ▁INVENT
- ▁ROD
- ▁EMERGENCY
- PORTER
- ▁COMB
- ▁HUG
- TI
- '...'
- SMITH
- ▁AVOID
- ▁JJAKKUNG
- ▁MATERIALS
- ▁LOSES
- ▁LU
- INA
- FREE
- ▁SERV
- ▁FLU
- ▁REEL
- ▁BACKPACK
- ▁REPRINT
- ▁SIXTEEN
- ▁ZENA
- ROL
- ▁AWARD
- ▁TENK
- ▁NETWORK
- ▁WORKER
- ▁REDUCE
- GUE
- ▁PROTECT
- ▁CONCERN
- ▁CRIMINAL
- ▁FIREFIGHTER
- ▁INCHEON
- ▁SUWON
- ▁VIEWER
- OVER
- ▁ELEVATORS
- OR
- ▁IMPRESSED
- ▁SHAME
- ▁STRAP
- ▁YIELD
- ▁WARNED
- ▁HANDOUT
- ▁LUNCHTIME
- URY
- IED
- AY
- WIFE
- GUN
- ▁ISSUE
- RRIE
- ▁SANDCASTLE
- ▁FIGURES
- ▁LOV
- ▁POKE
- ▁FREESTYLE
- ▁CHAIN
- ▁EVERYDAY
- OK
- ALY
- ▁RATING
- ▁SPIT
- ▁SAIL
- ▁AMBULANCE
- ▁ENORMOUS
- ▁SELFCONT
- ▁MEMORIZED
- ▁GIRAFFES
- ▁SNOWS
- ▁PLANTS
- ▁LEAD
- ▁EXHIBITION
- ▁FOUGHT
- ▁MARBLE
- 'YES'
- ▁PICKE
- ▁WRONGLY
- ▁HURR
- ▁CONVERSATION
- ▁DETAIL
- ▁WORRYING
- ▁SAVING
- ▁TU
- ▁SECRETLY
- AWAY
- ▁GROWS
- ▁CONTRA
- ▁SCRAMBLE
- BES
- ▁PROMISES
- ▁CHAIRS
- ▁GOGGLES
- ▁OTHERWISE
- ▁VICTOR
- ▁THORNS
- ▁WORTHWHILE
- ▁HIPPOS
- ▁TRICK
- ▁OBSERVATORY
- ▁SHAMPOO
- ▁COKE
- ▁DRAMA
- ▁DELAYED
- ▁GUTS
- ▁AZALEA
- ▁WRAPP
- TIE
- HEAD
- ▁BIGGEST
- ▁ENEMIES
- ▁PUMPKIN
- ▁DOCUMENTARY
- ▁ATOPY
- ▁COUGH
- ▁TOUCHED
- ▁AWARDS
- EWER
- VER
- ▁BEARS
- ▁CACTUS
- ▁LOCK
- ▁LIT
- ▁SKETCH
- ZEN
- ▁DRAGG
- ▁SQUEEZED
- ▁SCOT
- SHY
- ▁CALCULAT
- ▁APPEARED
- ▁RAINED
- ▁WINGS
- ▁CLOTH
- ▁DIG
- ▁DONGSENG
- ▁SPONGE
- ▁STUBBORN
- ▁WAIST
- ▁FLE
- ▁TAG
- CH
- ▁CR
- ▁UMBRELLAS
- ▁TOOTHBRUSH
- ▁POCKETS
- ▁PAJAMA
- ▁HALLA
- ▁GATHER
- ▁BOSS
- ▁DETERGENT
- ▁DOCUMENTS
- ▁GENEROUS
- ▁TOTAL
- ▁CURTAIN
- ▁PUDD
- ▁THICK
- NSIBLE
- ▁HOLIDAYS
- ▁TICKLES
- FLAVORED
- ▁COVID
- ▁GIFTWRAP
- ▁BLINKING
- ▁JUNG
- HOK
- LEANING
- ▁IDOLS
- ▁DRO
- ▁FOUNTAIN
- ▁PHYSIC
- ▁PRESCRIPTION
- ▁LATTE
- ▁TONGUE
- ▁NA
- WORLD
- ▁SURGERY
- ADLINE
- ▁STUFFY
- ▁WAFFLES
- ▁15
- ▁LOGO
- ▁SHORTCUTS
- ▁RESPECTED
- ▁INVENTIONS
- ▁ARTISTS
- RAFFI
- ▁FOSSIL
- ▁GOLDCREST
- ▁MALTESE
- UGGING
- ▁BUCKWHEAT
- ▁PROFESS
- ▁SQUID
- ▁CORRECTION
- IT
- LOOKING
- ▁GENIUS
- ▁WHALES
- ▁OPPA
- ▁DONKEYS
- ▁ELECTRIC
- ▁FAKE
- ▁JUNIOR
- ▁MEDAL
- ▁SONGPYEON
- ▁MO
- ▁LOCKED
- ▁MEMORIZE
- ▁DIZZY
- ▁CAMELS
- ▁Y
- ▁CARING
- ▁PERFORMANCE
- ▁ERRAND
- ▁STRIPE
- ▁SIL
- ▁REDESIGNED
- ▁TIPS
- SCRIPT
- ▁BISCUIT
- ▁TORN
- ▁BRUSHE
- ▁STREETS
- ▁RELIEVED
- ▁HOPS
- ESSER
- ▁INSTRUMENT
- ▁ADVANCE
- ▁GESTURE
- ▁MUGWORT
- ▁PROMOT
- ▁PIN
- ▁SHAD
- IONAL
- '72'
- ▁HEAVEN
- ▁SLOPE
- ▁HAIRDR
- YOU
- ▁OWNERS
- ▁PLANS
- ▁SUNFLOWERS
- ▁CHIMNEY
- ▁HIPHOP
- ▁FOURTH
- ▁C
- ▁COUNTS
- ▁BARK
- SCOPE
- ▁ATOPIC
- ▁DEATH
- ▁FORMALLY
- ▁TWIN
- ▁QUIETLY
- ▁TEAS
- ▁MIN
- ▁CE
- ▁DEPENDS
- ▁TRANSFERRED
- ▁HANDY
- ▁CLEARLY
- CHOCO
- ▁HOTDOGS
- ▁FROWN
- ▁RUB
- ▁PERFORM
- ▁ATTRACT
- ▁DUST
- ▁REVIEW
- ▁SIGNBOARD
- ▁ENDURE
- ▁RIDD
- CKED
- ▁CIRCLES
- ▁AIRPLANES
- ▁MI
- GING
- Q
- ▁YURI
- ▁30
- ▁OFFICERS
- ▁ALMONDS
- ▁SOLVED
- ▁WEREN
- ▁ALBUM
- ▁UNDERGROUND
- ▁WRINKLES
- IL
- ▁TALES
- SOKCHO
- ▁GROCERIES
- ▁RECEIV
- ▁BARE
- ▁PEEL
- ▁COCKROACHES
- ▁DEEPLY
- ▁STATIONS
- ▁DANCED
- ▁CHUBBY
- ▁SATURDAYS
- ▁WING
- ▁CRAFTSMAN
- ▁OCCASION
- ▁WINE
- ▁TELE
- ▁BLUETOOTH
- ▁DISAPPEARED
- ▁SUBM
- ▁FARTED
- ▁PREPARED
- LIST
- ▁CONDITION
- ▁PORTRAIT
- '23'
- ▁POINTS
- ▁TAMBOURINES
- ▁TEND
- ▁SELFISH
- ▁SUBJECT
- RUPTE
- ▁LICKING
- ▁WATERMELONS
- ▁DIES
- ▁BLOWING
- ▁SOIL
- NIFE
- ▁BLAND
- ▁RECYCLED
- ▁SIXTY
- ▁LENGTH
- ILING
- ▁SURVIVED
- ▁HABITS
- WANT
- ▁GRAND
- ▁SAVORY
- ▁APPLAUSE
- ▁APPLY
- ▁MEANER
- ▁DISEASES
- ▁FRUSTRATING
- ▁NOTIFICATION
- ▁CHEOMSEONGDAE
- ▁BADGE
- ▁ABOARD
- ▁DISNEYLAND
- ▁LEE
- ▁SHARPEN
- ▁KETTLES
- ▁HERESY
- ▁CRAM
- ▁BRONZE
- ▁HARSH
- ▁EBS
- ▁GREY
- ▁POSE
- ▁PICKLES
- ▁LEN
- ▁TIGERS
- ARY
- ▁CLAR
- ▁EDUCATION
- ▁NEIGH
- ▁ADDITION
- ▁REASONABLE
- ▁DUMPING
- ▁SPACES
- ▁LIGHTER
- ▁SPELLING
- Z
- ▁CATCHING
- ▁LEVEL
- ▁UPSTAIRS
- ▁RINK
- ▁HANDLE
- AVING
- ▁BOWED
- ▁BEAUTIFULLY
- ▁FARTS
- ▁BOLT
- ▁FAMILIAR
- BBLE
- DO
- ▁FILE
- ▁TREATMENT
- ▁PASTOR
- ▁EEK
- ▁BLOOMING
- CIAL
- TRAINED
- ▁APPEAR
- ▁KNEE
- ▁WHEEL
- RIAN
- ▁ATTEND
- ▁CONFESS
- ▁DVD
- ▁WITNESS
- ▁BATMAN
- ID
- ▁BANGS
- ▁YARD
- ▁LOTION
- ▁RECYCLE
- ▁PRI
- ▁BURDEN
- ▁SCRA
- ▁VEGETA
- ▁TOENAILS
- SUALLY
- ▁YAM
- FORD
- ▁FORMAL
- ▁POK
- ▁FROZE
- ▁MULTIPLICATION
- ▁SEJONG
- ▁TRIES
- ▁SUNSHINE
- ▁HERBS
- ▁STRIPES
- ▁CLIMBING
- ▁SKIPP
- FFE
- ▁DAMAGE
- ▁RIDICULOUS
- ▁QUACK
- ▁PINNOCHIO
- SIDE
- ▁STANDARD
- ▁TRADITION
- GIANT
- ▁YELL
- ▁SUPER
- ▁OVERREACT
- ▁PERFUME
- ▁UNDERCOOK
- BEC
- ▁MAPS
- ▁PARTNERS
- ▁SPINACH
- ▁TTEOKGUK
- ▁JAJANGMYEON
- ▁DIRECTLY
- VATE
- STEE
- ▁MOUSES
- ▁SNOWED
- ▁IGNORE
- GIFT
- ▁LOCKER
- ▁SURVIV
- ▁P
- BBLES
- DAIRY
- ▁TOOLS
- STAR
- LING
- ▁BB
- ▁ACCESSORIES
- ▁NINTENDO
- ▁BIBIMBAP
- ▁DERMATITIS
- ▁ANNOUNCED
- ▁LICK
- ▁AZALEAS
- ▁PEPPER
- VAS
- ▁BODIES
- ▁EXPAND
- PED
- FLOWING
- ▁MIXED
- ▁GROUP
- ▁SAUSAGE
- ▁CEREAL
- ▁EASIEST
- ▁OVERSLEEP
- ▁SATISF
- ▁150
- ▁BAY
- ▁DIP
- UN
- AK
- ▁COINS
- ▁SURPRISES
- ▁WAK
- OL
- ▁EVILDOING
- ▁EYEBROWS
- ▁HEADBAND
- ▁KETCHUP
- ▁PROPERLY
- ▁STRAWBERRIES
- ▁UNFORTUNATE
- ITY
- LIKE
- ONG
- ▁WISHES
- ▁CONSTRUCTION
- ▁RESEARCH
- ▁RIPPED
- ▁FOREIGNERS
- ▁SANDALS
- ▁GOLDEN
- ▁PERFORMANCES
- ▁STEALING
- HA
- ▁SPARE
- ▁KPOP
- ▁LEASH
- ▁TIGHTLY
- CM
- ▁COMME
- ▁500
- ▁ANCHOVIES
- ▁BANKBOOK
- ▁COVIDNINETEEN
- ▁DEFINIT
- ▁UPRIGHT
- ▁MISSION
- BAL
- PHONES
- HO
- ▁GENERAL
- ▁OVEN
- ▁MARCH
- V
- HU
- ▁GROWN
- ▁BROADCAST
- ▁GANGWONDO
- ▁REFRESHING
- ▁DICE
- ▁RACK
- ▁PERM
- ▁SUITCASES
- ▁16
- ▁ENVELOPE
- ▁HOOKED
- ▁ROOT
- ▁TEXT
- ▁CAGE
- GO
- ▁MUS
- ▁DOUGHNUTS
- ▁WASTING
- ▁BETIAN
- ▁PRESENTING
- ▁BRUISE
- ▁ALOUD
- ▁AUDITORIUM
- ▁BTS
- PLE
- RAISED
- MOTION
- ▁GENTLE
- ONIA
- ▁EASIER
- ▁FONDUE
- ▁SEASICK
- ▁VR
- ▁DOLPHINS
- ▁MATCHES
- UR
- ACHE
- ▁CICADAS
- ▁LEAN
- ▁REPORTS
- YING
- ▁CLOUDS
- ▁WOLVES
- ▁HEEL
- ▁FRESHMAN
- ▁SCREAMED
- ▁RELATIVE
- ARIN
- ▁BUR
- ▁PASTE
- ▁FRIENDLY
- ABLE
- ▁VISITING
- ▁INVIT
- ▁LOUDSPEAKERS
- ▁NNN
- ▁OINTMENT
- ▁SWAN
- CLES
- ▁GARDENING
- ▁HICCUP
- IM
- '0'
- ND
- BA
- ▁JULY
- ▁SEMESTER
- ▁SUSHI
- ▁UNIVERSE
- ▁TOSUN
- ▁PILLS
- ▁TAN
- ▁NEAT
- ▁FEATHER
- ▁ANNEX
- ▁PENGO
- ▁SICKNESS
- ▁CANDLES
- LO
- ▁SCRUB
- ▁SHOOT
- ▁TH
- ▁CRACK
- PLAIN
- ▁FRIDGE
- ▁ANSWERING
- ▁INDOORS
- ▁APOLOGIZED
- ▁COMEDIANS
- ▁WOR
- ▁SPIN
- ▁DRACULA
- ▁DRAGONFLIES
- ▁EXTINGUISHER
- ▁GRADUATION
- ▁LADIES
- ▁EX
- ▁PLANNED
- ▁50
- ▁MILLIONS
- ▁TANGERINES
- ▁DRAWN
- ▁CLEANER
- ▁DECORATIONS
- ▁SPI
- ▁VARI
- ▁DRAGONFLY
- ▁SCENT
- ▁GAYAGEUM
- ▁CL
- ▁MONTHS
- ▁PAJAMAS
- ▁RESTING
- ISE
- ▁BADGES
- WORK
- KY
- ▁ADORES
- ▁COLA
- ▁MOTOR
- ▁PRODUCE
- ▁THOROUGHLY
- ▁VOWELS
- ▁COMMON
- PING
- ▁SUNFLOWER
- ▁FOLDING
- ▁DECORAT
- '8'
- ▁SCREAM
- ▁CONNECT
- ▁AUGUST
- ▁PURPOSE
- ▁PIAN
- ▁CHIMNEYS
- ▁MONDAYS
- JU
- ▁BEETLE
- ▁PEED
- ▁INTEREST
- ▁BAN
- ▁SNOR
- ▁MA
- ▁SEW
- ▁COIN
- ▁HAN
- ▁ALPHABETS
- ▁TONKATSU
- ▁HOPEFULLY
- ▁ICECREAM
- ▁REGULARLY
- ▁GALBI
- ▁CHAS
- ▁REALIZE
- ▁WORKERS
- ▁BOATS
- ▁INTERRUPT
- ▁SUBTRACT
- ▁ORGANIZING
- ▁HISTORIC
- ▁POTTER
- ATION
- ▁CHARGER
- ▁BAL
- ▁SUNLIGHT
- ▁DYE
- ▁SHOELACES
- ▁EVENLY
- RY
- '30'
- BIKE
- ▁CRAWL
- ▁CHOOS
- ▁ROBBINS
- ▁SHOOK
- ▁SPLASH
- ASKIN
- ▁UNTIE
- YMP
- ▁STING
- IOUS
- ▁PA
- ▁CAROLS
- ▁SUDDEN
- ▁MACKEREL
- ▁NOSEBLEED
- ▁SCREW
- ▁HANOK
- TOMS
- ▁STRA
- DAY
- ▁RIBBON
- MILKY
- BEAN
- ▁TOMATO
- ▁NATIONAL
- ▁SPRITE
- ▁PANIX
- ▁WISE
- ZED
- ▁CHEWING
- ▁FOOTS
- ▁SHAKES
- ADA
- 'NO'
- ▁DIFFERENTLY
- SLEEVE
- ▁930
- ▁GYEONGJU
- ▁RAPUNZEL
- ▁ROMANTIC
- ▁FARTHER
- ▁CAPE
- IER
- ETY
- ▁HARDEST
- ▁TURNING
- ▁3000
- GENEROUS
- ▁BOO
- ▁ATTENTION
- ▁DWARVES
- ▁HAKNYEON
- ▁OUTDOOR
- ▁RESORT
- ▁SWOLLEN
- ▁PINCH
- ▁PURE
- STER
- ▁GRAB
- ▁BIO
- ▁HURRICANE
- ▁JUDGE
- ▁LANE
- ▁OINK
- ▁SPRAINED
- ▁THIEVES
- ▁TRAPPED
- BIL
- ▁RANCH
- ▁TWENTYTH
- ▁ANNE
- OLD
- NIGHT
- ▁HEIGHTS
- ▁BRICK
- ▁GRATEFUL
- ▁VITAMIN
- ▁HAMSTER
- ▁USELESS
- ▁INVENTOR
- ▁ULSAN
- ▁PRETENDING
- ▁PANDAS
- GGING
- UL
- AG
- COMING
- ▁HUNT
- ▁REMOVE
- ▁OCTOBER
- ▁SEPARATE
- ▁YAWN
- ▁PALE
- ▁UM
- ▁FLOATING
- ▁CO
- HAVE
- ▁SNOWY
- ▁SHOELACE
- GRAPHY
- ▁MELT
- ▁FISHBONE
- UG
- ▁CHIL
- ▁POOPED
- ▁YUT
- ▁PILL
- '0000'
- ▁SURVIVE
- ▁EXAMIN
- ▁TRU
- ▁BACKGROUND
- ▁BEGINNING
- ▁MACARONS
- ▁SURFING
- ▁VERANDA
- ▁ASSEMBLE
- ▁HANGUL
- ▁REACTION
- ▁DAUGHTERS
- MENT
- QUET
- RMALLY
- ANG
- ▁LID
- ▁RESERVATION
- SOON
- ▁FLIP
- CAN
- ▁JUICY
- ▁KINGDOM
- ▁SOCIETY
- ▁TADPOLE
- ▁JAMSIL
- ▁WI
- ▁GRADUATED
- ▁PRE
- ▁SCRATCHING
- ▁PO
- ▁APPEARS
- ILY
- FAT
- FOOD
- ▁DISAPPEAR
- ▁FAINT
- ▁FLOAT
- ▁RUBB
- ▁TRANSFER
- ▁COMFORT
- ▁BALLERINA
- ▁DESCRIPTION
- ▁GENTLY
- ▁HAPPIER
- ▁RINGTONE
- ▁ARGUING
- ▁CONDITIONER
- PM
- IET
- CU
- ▁EARTHQUAKES
- ▁CHICK
- ▁TR
- ▁TYPHOON
- ▁BUNS
- ▁RUNNER
- NDC
- ▁WAH
- ▁JELL
- ENDY
- ▁COMMU
- ▁FARMS
- ▁SLEEVES
- ▁BEETLES
- LOW
- ▁MEATBALL
- ALKIE
- ▁MAGNIF
- ▁CONNIE
- ▁NEIGHBOR
- ▁OPERA
- ▁PINOCCHIO
- ▁SHOEMAKER
- ▁CRAFT
- ▁ONESIX
- ▁FLOW
- WD
- HOO
- ▁PRESENTATIONS
- ▁CHIP
- ITE
- ▁ANIMAT
- ▁DUB
- ▁FLOOD
- ▁KAKAO
- ▁RESU
- ▁UNBELIEVABLE
- ▁GRIN
- ▁HEALTHIER
- ▁SIXTH
- ▁CHOSEN
- ▁LOSER
- ▁BLED
- REALLY
- ▁IGNOR
- ▁PRODUCT
- RIST
- ▁DISCOURAGED
- ▁DODGE
- ▁FORECAST
- ▁OWL
- ▁TREASURE
- ▁UNIFORM
- ▁LOCAT
- ▁TUBE
- DON
- ▁FOLDED
- ▁WEIGH
- ▁RUIN
- ▁CRUSH
- ▁PARAD
- ▁OBESE
- ▁ORGANIZE
- ▁PRINCIPAL
- ▁RATTLING
- ▁RESERVE
- ▁RHYM
- ▁SIP
- ▁UNDERWATER
- ▁TAEG
- ▁TRAVELLING
- ▁STACK
- ▁RI
- ▁BUNDLES
- YEAR
- SAME
- AND
- ▁CHEESECAKE
- ▁EPISODE
- ▁FAMILIES
- ▁FIFTH
- ▁RHINITIS
- ▁SAUNA
- NCHES
- ▁EXCE
- TIQUE
- ▁COMBO
- ▁STRINGS
- ▁COLORFUL
- ▁FLOWS
- ▁COOLEST
- ▁OPPAS
- ATING
- ATE
- ▁MELTS
- ▁CHOPSTICK
- ▁BRANCH
- ▁FRUSTRATED
- ▁GREASY
- ▁EXIST
- ▁WAVING
- ▁APP
- ▁SODA
- ▁FALLEN
- ▁PRO
- SHAPED
- NG
- ▁CONNECTED
- ▁12
- ▁BANDAID
- ▁DISTANCE
- ▁DRAIN
- ▁MEASURE
- ▁TEMPLE
- ▁WORKBOOK
- ▁EIGHTAM
- ▁WARN
- ▁BURNT
- BOARD
- ▁DE
- IFF
- RTH
- ▁MUSHROOMS
- ▁POWERFUL
- STICK
- ▁VOUCHERS
- ▁BLEED
- ▁BRAID
- ▁CREPE
- ▁HAWKING
- ▁FLAM
- ▁SCORE
- ▁RELEASED
- ▁TICKLED
- BU
- FISH
- ATIVE
- CLUSI
- ▁CLINIC
- ▁CROOKED
- ▁RELAY
- ▁SCOOTER
- ▁SEBASTIAN
- ▁SUFFER
- ▁TEENAGER
- ▁BATHHOUSE
- ▁WRIST
- ▁BAKERIES
- ▁BRANCHES
- ▁SAMYUKGU
- ▁SCU
- ENDER
- ▁INGREDIENTS
- ▁INVENTED
- ▁BOWING
- SSES
- WAR
- ▁PRESSED
- ▁SQUEEZ
- SIGNED
- WON
- ▁70
- ▁APPROACH
- ▁CHAPPED
- ▁DUMB
- ▁FREEZING
- ▁MAGNIFIER
- ENTIAL
- IE
- ▁CLOSELY
- ▁DIAPERS
- OUS
- ▁DIRT
- ▁CENTIMETER
- ▁FLOWERPOT
- ▁FOAM
- ▁POLITIC
- ▁PORRIDGE
- ▁PEDIATRICIAN
- ▁FIREWORKS
- ▁TROUBLEMAKER
- ▁PILLAR
- ▁EVACUATE
- ▁SILLA
- EUK
- ANDING
- ▁FAINTED
- ERMAN
- ▁SEAGULL
- ▁CHICKS
- ▁SWEATING
- INGO
- PAPER
- ▁AGREED
- ▁CLAPP
- VA
- ▁STRENGTH
- SOONGSIL
- ‘
- ▁CONVENIENT
- ▁DECEMBER
- ▁FORTUNATELY
- ▁FURNITURE
- ▁HAGWON
- ▁LOUNGE
- ▁MOKDONG
- ▁PALM
- ▁SPRINKLE
- ▁STIRFR
- RUNK
- ▁ANKLE
- ▁SELF
- ▁SEVENTH
- LESS
- ▁DIVING
- ADE
- ▁RANG
- SHINY
- WITH
- ▁BRAVELY
- ▁BADMINTON
- ▁BULGUKSA
- ▁KARAOKE
- ▁ADMIT
- ▁GINGER
- ▁LAID
- ▁SNOWBOARD
- ▁HOPPING
- ▁UDO
- ▁BULGING
- ▁CARP
- ▁FACT
- ▁GROUPS
- ▁ENTERING
- ▁RIP
- ▁MAR
- LOCK
- ▁JE
- ▁ADMISSION
- ▁CHRYSANTHEMUM
- ▁DIARIES
- ▁DISPOSABLE
- ▁LOACH
- ▁PARROT
- ▁SCULPTURE
- ▁TERRIF
- ▁VOLUME
- ▁REPRESENTATIVE
- ▁MEOW
- ▁CHEEK
- ▁JEJUDO
- ▁HARMFUL
- ▁BRUISED
- ▁MINERAL
- AINT
- ▁EDIT
- WARDS
- HY
- ▁VIEW
- ▁EXACT
- ROUGHT
- OCKPAPERSCISSORS
- ▁CHESTNUT
- ▁HAWAII
- ▁PIMPLES
- ▁REMOTE
- ▁SOLUTION
- ▁COMPETE
- ▁SOFTLY
- ▁BUNDLE
- ▁LIP
- ▁GRADER
- WOO
- RIS
- STORY
- DAYS
- COLORED
- FOR
- ▁COLLAPSE
- ▁STEPP
- ▁BRILL
- RSELVES
- ▁ACCORDING
- ▁BACON
- ▁BAEK
- ▁BUTTERFLIES
- ▁COSMOS
- ▁CYCLING
- ▁DISTRICT
- ▁ESTATE
- ▁HUMID
- ▁MERMAID
- ▁PAPRIKA
- ▁PHONICS
- ▁BELONG
- ▁YUKJANG
- ▁ANIMATION
- ▁FLIPP
- ▁DUMPLING
- ▁BLOSSOM
- UNG
- ▁EXPLORE
- ▁INSECTS
- ▁JI
- HEART
- GHTS
- ▁ASTRONAUT
- ▁BELLHAMMER
- ▁LICENSE
- ▁NEPTUNE
- ▁OPPOS
- ▁REFRIGERATOR
- ▁STONEBUSH
- ▁1000
- ▁APPLI
- ▁SUBTRACTION
- ▁HOOD
- ▁WIDER
- ▁BROOM
- ▁UNIVERSITY
- ▁PRINCESSES
- ▁MINT
- ▁PARENT
- ▁PEEING
- ▁ADORE
- DONG
- ▁SP
- ANCE
- ▁EXPLOR
- TTEOKBOKKI
- WHEEL
- ▁ABANDONED
- ▁CALLUSES
- ▁COSMETICS
- ▁LADYBUG
- ▁MARIA
- ▁PRONUNCIATION
- ▁BOUQUET
- ▁SOGGY
- ▁LEFTOVERS
- ▁MIKE
- ▁TANK
- ▁SPAC
- ▁FRAME
- MADE
- IVAL
- ▁YE
- ▁GATHERING
- IAN
- ▁KITTENS
- IBLE
- ▁ABBREVIAT
- ▁CHAPAGETTI
- ▁ENGINES
- ▁EQUIPMENT
- ▁INTERSECTION
- ▁SANITIZER
- ▁DOKDO
- ▁GENERATOR
- ▁MEDIUM
- ▁BALANCE
- ▁CHART
- ▁TELEVISION
- ▁JAJANG
- ▁LOLLY
- ▁PHOTOGRAPH
- ORD
- ▁KKA
- ▁SOLES
- ▁BALM
- ▁DECORATION
- ▁THORN
- ▁ARMY
- ▁YU
- EEK
- NK
- BOY
- LENGTH
- TONY
- HEN
- ▁RELEASE
- ▁LOOSE
- ▁COMPLETE
- KYOCHON
- ▁ARCADE
- ▁BRIM
- ▁CORONA
- ▁CRANE
- ▁CUPCAKE
- ▁KITCHENWARE
- ▁LULLABY
- ▁MODER
- ▁MUSKET
- ▁OBEDIEN
- ▁PIKACHU
- ▁PROVERBS
- ▁SALMON
- ▁YUKGAEJANG
- ▁TANNED
- ▁VILLA
- ▁DIRECTIONS
- ▁CLAY
- ▁ADMIR
- ▁DIRECTOR
- ▁DAMAGED
- ▁BURST
- ▁TOPIC
- ▁DOODLED
- ▁COMPAR
- ▁BUBBLE
- ▁HO
- ▁KISSE
- ▁JO
- ▁BLOATED
- ▁CONSONANTS
- ▁DOWNLOAD
- ▁ELBOW
- ▁FUNNIEST
- ▁PORORO
- ▁SLOTS
- ▁VACUUM
- ▁BOTTOM
- ▁MANDELA
- ▁IMSIL
- ▁VIP
- ▁TOMMY
- EATURE
- ▁PINE
- ▁EIGHTTHIRTY
- ▁HIDEANDSEEK
- ▁COLLAPSED
- ▁UNDERSTOOD
- ▁CRUSHED
- ▁TRI
- OF
- ▁DI
- ▁CARNATION
- ORY
- NAILS
- LENT
- ▁PUBLISH
- PLACE
- ▁CLIP
- ILLA
- ▁SUNSHIN
- ▁ACTUAL
- ▁SUCCESS
- COCK
- ▁60
- ▁BENEFITS
- ▁CLAW
- ▁HAUNT
- ▁LIBRARIES
- ▁LOTTERIA
- ▁MERCURY
- ▁MITTEN
- ▁SWAM
- ▁ROTTEN
- ▁SERVANT
- DENTAL
- ▁LEGEND
- ▁ROT
- ▁PRICKED
- ▁230
- ▁TUB
- ▁WINK
- ▁HUNTER
- ▁SCREAMING
- ▁FINALE
- ▁SOAPY
- ▁REDESIGNING
- NNA
- ▁DIAPER
- ▁BANG
- IK
- CHAN
- TIER
- ▁MOR
- ▁METERS
- ▁HUGG
- DAE
- FTER
- CHO
- SHIP
- EITHER
- CTIVE
- ▁KI
- ▁RU
- ▁BRAND
- ▁AMOUNT
- ▁EXPLANATION
- ▁HAIRPIN
- ▁HORRIBLE
- ▁INTERIOR
- ▁LANDSLIDE
- ▁NEVERTHELESS
- ▁PERSIMMON
- ▁POSTPONE
- ▁SCIENTIST
- ▁SLACK
- ▁STORM
- ▁STREAM
- ▁SURPRISING
- ▁URGENT
- ▁ZOMBIE
- ▁STOOL
- ▁LOAD
- NAMBU
- ▁ANNOUNCEMENT
- IKES
- GRAN
- ▁ABC
- ▁COMPLE
- ▁FASCINATING
- ▁REMOVED
- ▁CRAWLING
- ▁INTERRUPTING
- RELLA
- RAGE
- ▁PEELING
- ▁HUMANS
- ▁MON
- ▁BEGIN
- ▁VEGETABLE
- ▁SLEEVE
- GLE
- ▁THA
- ISH
- TRAINER
- '7'
- ROAD
- DRIVER
- ▁PRETEN
- ▁ALLOW
- UZZLE
- ▁DEMONSTRAT
- ▁STIR
- ▁BROC
- ▁CARCASON
- ▁EQUALLY
- ▁EXPERIMENT
- ▁HESITAT
- ▁SPINNING
- ▁MENTOR
- ▁ABBREVIATION
- ▁RASHES
- ▁ASSEMBLING
- ▁DUNG
- MEMOR
- ▁PEACEFUL
- ▁HARDENS
- OSU
- SSUED
- ▁FRECKLE
- TIOUS
- ▁REALIZ
- ▁SQUA
- LIFE
- THINK
- ▁BIK
- ▁KNIT
- ZZA
- ▁ALITTLE
- ▁BAREFOOT
- ▁CONCENTRATE
- ▁DALGONA
- ▁GUIDEBOOK
- ▁KIDZANIA
- ▁PALACE
- ▁ROSHEN
- ▁TEXTBOOK
- ▁TUNAKIMBAP
- OTTEOK
- ▁830
- ▁HOSE
- ITIES
- NIX
- ▁FIFTEENCM
- ▁IMAGE
- ▁CHEESEKIMBAP
- ▁HOTTER
- ▁PATT
- ▁CLIPPE
- ▁FOXES
- EAGLE
- ▁QUE
- NDING
- ▁DETER
- AP
- YEO
- UED
- ▁PAI
- ▁EXCITEDLY
- ▁WAVED
- ▁BUL
- BUT
- ▁METER
- KIMBAP
- HAND
- WATCHING
- ▁CONVERS
- ▁FLICK
- ▁PEDIATRIC
- NAMENT
- REIGN
- ▁BIKINI
- ▁BUCKWHEATCREPE
- ▁JENGA
- ▁LAUNCH
- ▁OPTICIAN
- ▁PIGTAIL
- ▁SIMON
- ▁SUBSCRIBE
- ▁TICKLISH
- NELS
- ▁PINWHEEL
- INATED
- ▁DRUG
- ▁ONESIXCM
- ▁EIGHTH
- ▁SMARTEST
- ▁HUNTING
- ▁PIL
- UMMY
- ITION
- UNNI
- ▁SU
- ▁POWERFULL
- ▁WAFFLE
- DIA
- ▁TICK
- EIGHT
- PICKED
- FIFTY
- WENT
- ▁BOT
- ▁REPRESENT
- OKKI
- ▁COCOA
- ▁CUSHION
- ▁FARTHEST
- ▁PENTAGON
- ▁SLIDING
- ▁SWEAR
- ▁MOLD
- ▁BBOY
- ▁80
- ▁WATERPROOF
- ▁RAIL
- ▁CREATED
- ▁CHIRPING
- ▁SEARCH
- SEOK
- ▁TOAST
- ▁BETRAYE
- JOR
- ▁NI
- ZI
- ▁SLAMM
- ▁GU
- ▁NAG
- ▁SERVED
- UFFY
- ▁INSECT
- ▁ZIPPE
- LP
- YEONG
- ESSION
- IPPED
- ▁CELEBRAT
- ▁CHANG
- '50'
- POST
- ENTI
- ▁DISAPPOINT
- ▁QU
- ▁FOREIGN
- ▁POSSIB
- ▁CONGRATULAT
- ADOW
- ▁TAE
- CAFÉ
- ▁COURIER
- ▁DAEJEON
- ▁DOWNSTAIRS
- ▁EXPER
- ▁PREFERENCE
- ▁LACT
- ▁OCCUR
- ORIENT
- ▁SPACIOUS
- INARY
- ▁KNITTING
- ▁LIBERTY
- VILLE
- RB
- ▁BARKED
- DAN
- ▁TIN
- ATOR
- ▁PHO
- RIED
- ▁JINDA
- OUND
- HOE
- ▁STRETCHE
- ▁SNEEZ
- EVI
- QUALITY
- MOM
- ▁BLIND
- HYEON
- ECTION
- ROKE
- ▁ANCHOVY
- ▁ASHAMED
- ▁COASTER
- ▁CONFUSING
- ▁CYCLIST
- ▁DANDELION
- ▁FIREFLIES
- ▁HYUNG
- ▁KNOWLEDGE
- ▁NARACULA
- ▁SCAB
- ▁VOCABULARY
- ▁CONFIDENT
- ▁RELAT
- ▁FOOLISH
- ▁NINEAM
- ▁ZO
- ▁BOU
- ▁FLATTERED
- ▁BLINDING
- ▁SKATER
- ▁ROLLER
- ▁FIRM
- COTT
- NURI
- ▁WARMER
- ▁LONGEST
- ▁TICKLE
- ▁AMERICAN
- GI
- AGGED
- CHARGE
- TODAY
- ▁CREATE
- UMPING
- JJAEK
- ▁BEGINNER
- ▁CLICKING
- ▁CORRIDORS
- ▁DAZZLING
- ▁DERMATOLOGIST
- ▁DILIGENT
- ▁FEBRUARY
- ▁FISHBOWL
- ▁GARAETTEOK
- ▁GARGLE
- ▁INJURED
- ▁MANTISES
- ▁NAKSEONGDAE
- ▁ROAST
- ▁SNITCH
- ▁SLIMMER
- ▁DISCHARGE
- ▁SOAKED
- ▁SELECTED
- ▁VICE
- ▁INFECT
- ▁CONTAINER
- ▁NEATLY
- ▁STARSHAPED
- LOTTEWORLD
- ▁SUPPLEMENT
- ▁EIGHTTH
- ISTERS
- ▁TICKL
- ▁STRAIGHTEN
- ▁SKINN
- RANGE
- ▁TANGERINE
- ▁STO
- PREPARED
- SPROUT
- TWELVE
- TONIGHT
- ▁RECOGNI
- VAN
- BEEN
- ▁EXPLODE
- ▁CHUBB
- ANGGU
- ▁SAVI
- ▁950
- ▁ADJUST
- ▁CASTANETS
- ▁FAITH
- ▁GONGJU
- ▁GRAIN
- ▁GROSS
- ▁JUPITER
- ▁MAGPIE
- ▁SAIPAN
- ▁SKULL
- ▁SPARROW
- ▁VACCINATED
- ▁VIGOROUSLY
- ▁AUTOMATIC
- ▁NEARBY
- SEVENTEEN
- ▁TWENTI
- ▁NIKE
- ▁SEORA
- DATORS
- ▁PONG
- ▁730
- ▁SCARIER
- ▁TRUNK
- ▁BETRAYER
- ▁CHEESEGIMBAP
- ONGDAE
- ▁SEVERE
- ▁SPOONFUL
- CTATION
- ▁WITCH
- ▁LIMIT
- ▁EATTTEOKBOKKI
- GEOUS
- ▁CRAWLED
- ▁SUC
- AVED
- AGE
- ▁KITTEN
- ▁SKEWER
- IZED
- ▁TEAR
- WAVE
- ▁RACI
- ▁CONTAIN
- ▁TRO
- ▁GUGUDAN
- ▁GEPPET
- ▁PHARMACI
- MULGUK
- PPAK
- SAMJANG
- ▁ACORN
- ▁APPETITE
- ▁BRUNCH
- ▁BUMMER
- ▁DIARRHEA
- ▁FLAP
- ▁GERMS
- ▁GWANSUN
- ▁HOMETOWN
- ▁KILOMETERS
- ▁MARRIAGE
- ▁PRANKS
- ▁RADISH
- '5'
- ′
- 수
- '2'
- ́
- 子
- 예
- 요
- '3'
- É
- '6'
- '9'
- “
- .
- '1'
- 단
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/ko_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_ko_bpe5000_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: contextual_block_transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
block_size: 40
hop_size: 16
look_ahead: 16
init_average: true
ctx_pos_enc: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202304'
distributed: true
```
</details>
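For quick reference, a minimal offline-decoding sketch using ESPnet's `Speech2Text` interface is shown below (not part of the original recipe output). The model tag is a placeholder for this repository's Hugging Face id, the `ctc_weight`/`beam_size` values are illustrative, and a true streaming setup would instead use ESPnet's streaming inference interface for the `contextual_block_transformer` encoder.

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Placeholder tag -- substitute this repository's actual Hugging Face model id.
speech2text = Speech2Text.from_pretrained(
    "<this-repo-id>",
    ctc_weight=0.3,  # matches model_conf.ctc_weight above
    beam_size=10,
)

speech, rate = sf.read("sample.wav")  # 16 kHz mono audio, per frontend_conf.fs
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```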
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
cherrue/RandomCrop_Rescale_epoch_3_learning_rate_5e_5_decay_0_01 | cherrue | 2023-07-06T06:30:06Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-06T05:35:06Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cherrue/pricetag_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cherrue/pricetag_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0546
- Validation Loss: 1.2226
- Train Accuracy: 0.3846
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
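As a usage sketch (not part of the original card): the fine-tuned checkpoint can be loaded with the TensorFlow ViT classes. The image preprocessing is taken from the base model, and whether a preprocessor config was pushed with this repository is an assumption.

```python
from PIL import Image
from transformers import ViTImageProcessor, TFAutoModelForImageClassification

repo = "cherrue/RandomCrop_Rescale_epoch_3_learning_rate_5e_5_decay_0_01"
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")  # base-model preprocessing
model = TFAutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("example.jpg"), return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.numpy().argmax(-1)[0])])
```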
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1251, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3379 | 1.2276 | 0.5128 | 0 |
| 1.1973 | 1.1561 | 0.4615 | 1 |
| 1.0546 | 1.2226 | 0.3846 | 2 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sukritiverma/thumbs-up-tom_cruise | sukritiverma | 2023-07-06T06:14:17Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-05T23:31:34Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - sukritiverma/thumbs-up-tom_cruise
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.




|
saintzeno/a2c-PandaReachDense-v3 | saintzeno | 2023-07-06T06:10:45Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T05:52:59Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the zip filename below is an assumption based on the usual `huggingface_sb3` naming convention; check the repository files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption (typical huggingface_sb3 convention); adjust to the actual file.
checkpoint = load_from_hub(repo_id="saintzeno/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
aroot/eng-guj-simcse_central | aroot | 2023-07-06T05:52:24Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T05:29:33Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_central
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_central
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2829
- Bleu: 2.7255
## Model description
More information needed
## Intended uses & limitations
More information needed
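As a usage sketch (not part of the original card): the checkpoint can be loaded like its mBART-50 base model for English→Gujarati translation. The language codes follow the mBART-50 convention, and it is assumed the tokenizer files were pushed with this repository.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-guj-simcse_central")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-guj-simcse_central")

tokenizer.src_lang = "en_XX"  # English source
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["gu_IN"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```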
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nolanaatama/nkbllcfrmgtvrvcv2275pchsnltrx | nolanaatama | 2023-07-06T05:50:38Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:46:52Z | ---
license: creativeml-openrail-m
---
|
Ryukijano/whisper-small-dv | Ryukijano | 2023-07-06T05:36:17Z | 78 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-05T06:25:50Z | ---
license: mit
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
---
# Whisper Small DV Model

## Model Description
The `whisper-small-dv` model is an advanced Automatic Speech Recognition (ASR) model, trained on the extensive [Mozilla Common Voice 13.0](https://commonvoice.mozilla.org/en/datasets) dataset. This model is capable of transcribing spoken language into written text with high accuracy, making it a valuable tool for a wide range of applications, from transcription services to voice assistants.
## Training
The model was trained using the PyTorch framework and the Transformers library. Training metrics and visualizations can be viewed on TensorBoard.
## Performance
The model's performance was evaluated on a held-out test set. The evaluation metrics and results can be found in the "Eval Results" section.
## Usage
The model can be used for any ASR task. To use the model, you can load it using the Transformers library:
```python
import torchaudio
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the model and processor (Whisper is a sequence-to-sequence model, not a CTC model)
model = WhisperForConditionalGeneration.from_pretrained("Ryukijano/whisper-small-dv")
processor = WhisperProcessor.from_pretrained("Ryukijano/whisper-small-dv")

# Use the model for ASR: read the audio, resample to 16 kHz mono, extract log-Mel features
waveform, sample_rate = torchaudio.load("path_to_audio_file")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16000).mean(dim=0)
inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

# Generate and decode the transcription
predicted_ids = model.generate(inputs.input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```
## License
This model is released under the MIT license.
---
P |
eigenscribe/etzHayim | eigenscribe | 2023-07-06T05:34:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:33:49Z | ---
license: creativeml-openrail-m
---
|
insub/distilbert-base-uncased-finetuned-imdb | insub | 2023-07-06T05:22:05Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T05:17:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
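As a usage sketch (not part of the original card): the checkpoint is a masked language model, so it can be queried through the `fill-mask` pipeline.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="insub/distilbert-base-uncased-finetuned-imdb")
print(unmasker("This movie was a great [MASK]."))
```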
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tuanio/WhisperCTC | tuanio | 2023-07-06T05:06:09Z | 0 | 1 | null | [
"summarization",
"dataset:mozilla-foundation/common_voice_13_0",
"arxiv:1910.09700",
"region:us"
] | summarization | 2023-07-06T04:55:16Z | ---
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
pipeline_tag: summarization
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
```python
class WhisperCTC(nn.Module):
def __init__(
self,
encoder_id: str = "tuanio/whisper-encoder.tiny.en",
dropout: float = 0.1,
vocab_size: int = 47,
):
super().__init__()
self.encoder = WhisperEncoder.from_pretrained(encoder_id)
print("Freezing Whisper Encoder...")
self.encoder._freeze_parameters()
print("Freezed!")
self.lm_head = nn.Sequential(
nn.SiLU(),
nn.Dropout(dropout),
nn.Linear(self.encoder.config.d_model, vocab_size),
)
nn.init.kaiming_uniform_(
self.lm_head[-1].weight, mode="fan_in", nonlinearity="relu"
)
def forward(self, feat: Tensor, attn_mask: Tensor):
enc = self.encoder(
input_features=feat, attention_mask=attn_mask
).last_hidden_state
logits = self.lm_head(enc)
log_probs = nn.functional.log_softmax(logits, dim=-1)
return log_probs
```
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
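A hedged inference sketch (not from the original card): it assumes the `WhisperCTC` class defined above is in scope, reuses the encoder/tokenizer ids listed in the training config, and omits loading the trained Lightning checkpoint weights, so details may need adjusting.

```python
import torch
import torchaudio
from transformers import WhisperFeatureExtractor, AutoTokenizer

# Ids taken from model_cfg in the training config below; treat everything else as an assumption.
feature_extractor = WhisperFeatureExtractor.from_pretrained("tuanio/whisper-encoder.medium.en")
tokenizer = AutoTokenizer.from_pretrained("tuanio/wav2vec2-phoneme-ipa-ctc")
model = WhisperCTC(encoder_id="tuanio/whisper-encoder.medium.en", vocab_size=len(tokenizer)).eval()

waveform, sr = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16000).mean(dim=0)
feats = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    log_probs = model(feats.input_features, attn_mask=None)  # WhisperEncoder ignores the mask

# Greedy CTC decoding: take the argmax path, collapse repeats, drop the blank/pad token.
ids = torch.unique_consecutive(log_probs.argmax(-1)[0])
print(tokenizer.decode(ids[ids != tokenizer.pad_token_id]))
```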
## Training Details
### Training Data
- IndictTTS: https://www.kaggle.com/datasets/tuannguyenvananh/indictts-english
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
```yaml
data_cfg:
dataset:
processor:
feat_extractor_id: ${model_cfg.model.encoder_id}
tokenizer_id: ${model_cfg.tokenizer_id}
path:
base:
indict_tts: ../IndicTTS
cv: ../
train:
- train_data/indict_tts_train.jsonl
# - train_data/cv_train.jsonl
test:
- train_data/indict_tts_test.jsonl
# - train_data/cv_test.jsonl
dev:
- train_data/indict_tts_dev.jsonl
# - train_data/cv_dev.jsonl
dataloader:
batch_size: 46
num_workers: 8
pin_memory: True
model_cfg:
tokenizer_id: tuanio/wav2vec2-phoneme-ipa-ctc
model:
dropout: 0.1
encoder_id: tuanio/whisper-encoder.medium.en
optim:
lr: 1.25e-05
betas: [0.9, 0.998]
weight_decay: 0.01
scheduler:
name: linear
total_steps: -1
warmup_ratio: 0.05
interval: step
frequency: 1
trainer_cfg:
log:
wandb: True
logger_wandb:
project: aped_indian-lish
name: whisper-medium-indict-tts-only-from-epoch1
log_model: all
arguments:
accelerator: gpu
devices: -1
max_epochs: 10
log_every_n_steps: 1
enable_checkpointing: True
accumulate_grad_batches: 2
inference_mode: True
gradient_clip_val: 5.0
check_val_every_n_epoch: 1
val_check_interval: null
experiment_cfg:
train: True
valid: True
test: True
ckpt:
resume_ckpt: True
ckpt_path: ckpt/medium.epoch3.ckpt
```
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s45 | squeeze-ai-lab | 2023-07-06T04:46:32Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-06T03:46:53Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
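As a rough back-of-the-envelope illustration (not from the original card; it ignores embeddings, activations, and sparse-index overhead), the weight-memory saving of 3-bit dense storage plus a small fp16 sparse component can be estimated as:

```python
params = 7e9                            # approximate XGen-7B parameter count
fp16_gb = params * 2 / 1e9              # ~14 GB at 16 bits per weight
dense_gb = params * 3 / 8 / 1e9         # ~2.6 GB at 3 bits per weight
sparse_gb = params * 0.0045 * 2 / 1e9   # ~0.06 GB of outliers kept in fp16 (0.45% sparsity level)
print(f"fp16: {fp16_gb:.1f} GB  vs  3-bit dense + sparse: {dense_gb + sparse_gb:.1f} GB (approx.)")
```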
## Model description
3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
dangvansam/whisper-base-vi | dangvansam | 2023-07-06T04:09:35Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"vi",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-05T10:42:24Z | ---
language:
- vi
pipeline_tag: automatic-speech-recognition
--- |
wizardk/600mix | wizardk | 2023-07-06T04:07:38Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-07-06T03:48:41Z | ---
license: cc-by-nc-sa-4.0
---
|
squeeze-ai-lab/sq-xgen-7b-8k-inst-w4-s45 | squeeze-ai-lab | 2023-07-06T03:58:19Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-06T03:47:10Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit XGen-7B instruction-tuned model (i.e. finetuned model on public domain instructional data) with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst).
* **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-xgen-7b-8k-inst-w3-s45 | squeeze-ai-lab | 2023-07-06T03:56:32Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-06T03:47:03Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit XGen-7B instruction-tuned model (i.e. finetuned model on public domain instructional data) with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst).
* **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
aroot/eng-guj-wsample.43a | aroot | 2023-07-06T03:44:33Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T03:21:38Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-wsample.43a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-wsample.43a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2191
- Bleu: 2.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zhundred/ppo-LunarLander-v2 | zhundred | 2023-07-06T03:38:13Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T03:37:29Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.86 +/- 20.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the zip filename below is an assumption based on the usual `huggingface_sb3` naming convention; check the repository files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption (typical huggingface_sb3 convention); adjust to the actual file.
checkpoint = load_from_hub(repo_id="zhundred/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MWaleed/q-Taxi-v3 | MWaleed | 2023-07-06T03:23:27Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T03:23:24Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# Note: load_from_hub is the pickle-based helper from the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="MWaleed/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BaoKien/deberta-base-finetuned-squad-v2 | BaoKien | 2023-07-06T03:22:36Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-06T01:19:43Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: deberta-base-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad-v2
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.753 | 1.0 | 8238 | 0.7286 |
| 0.5378 | 2.0 | 16476 | 0.7578 |
| 0.3881 | 3.0 | 24714 | 0.9221 |
### Performance
- 'exact': 81.84115219405373
- 'f1': 85.19125695340612
- 'total': 11873
- 'HasAns_exact': 80.24628879892038
- 'HasAns_f1': 86.95610556811602
- 'HasAns_total': 5928
- 'NoAns_exact': 83.43145500420522
- 'NoAns_f1': 83.43145500420522
- 'NoAns_total': 5945
- 'best_exact': 81.84115219405373
- 'best_exact_thresh': 0.9994916319847107
- 'best_f1': 85.19125695340657
- 'best_f1_thresh': 0.9994916319847107
- 'total_time_in_seconds': 294.34524957099984
- 'samples_per_second': 40.33698528277447
- 'latency_in_seconds': 0.024791143735450168
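As a usage sketch (not part of the original card): the model can be queried with the `question-answering` pipeline (pass `handle_impossible_answer=True` to allow SQuAD-v2-style "no answer" predictions).

```python
from transformers import pipeline

qa = pipeline("question-answering", model="BaoKien/deberta-base-finetuned-squad-v2")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This DeBERTa-base checkpoint was fine-tuned on the SQuAD v2 question answering dataset.",
)
print(result["answer"], result["score"])
```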
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AngelaBoadway/DustinBates | AngelaBoadway | 2023-07-06T03:19:17Z | 0 | 1 | transformers | [
"transformers",
"en",
"dataset:AngelaBoadway/DustinBates",
"doi:10.57967/hf/0859",
"endpoints_compatible",
"region:us"
] | null | 2023-07-06T01:00:15Z | ---
language:
- en
datasets:
- AngelaBoadway/DustinBates
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
D U S T I N B A T E S
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Angela Boadway
- **Language(s) (NLP):** English |
squeeze-ai-lab/sq-xgen-7b-8k-inst-w4-s0 | squeeze-ai-lab | 2023-07-06T03:15:32Z | 0 | 1 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:33:19Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit XGen-7B instruction-tuned model (i.e. finetuned model on public domain instructional data) with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst).
* **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w4-s0 | squeeze-ai-lab | 2023-07-06T03:14:48Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:31:51Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s0 | squeeze-ai-lab | 2023-07-06T03:14:31Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:31:15Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
and a sparse part that preserves the sensitive and outlier entries of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
h2oai/h2ogpt-research-oasst1-llama-65b | h2oai | 2023-07-06T03:11:31Z | 1,502 | 9 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"open-source",
"en",
"dataset:h2oai/openassistant_oasst1_h2ogpt_graded",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-13T18:11:13Z | ---
license: other
language:
- en
library_name: transformers
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
datasets:
- h2oai/openassistant_oasst1_h2ogpt_graded
---
# h2oGPT Model Card
## Summary
H2O.ai's `h2ogpt-research-oasst1-llama-65b` is a 65 billion parameter instruction-following large language model (NOT licensed for commercial use).
- Base model: [decapoda-research/llama-65b-hf](https://huggingface.co/decapoda-research/llama-65b-hf)
- Fine-tuning dataset: [h2oai/openassistant_oasst1_h2ogpt_graded](https://huggingface.co/datasets/h2oai/openassistant_oasst1_h2ogpt_graded)
- Data-prep and fine-tuning code: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
- Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b/blob/main/llama-65b-hf.h2oaiopenassistant_oasst1_h2ogpt_graded.1_epochs.113510499324f0f007cbec9d9f1f8091441f2469.3.zip)
## Chatbot
- Run your own chatbot: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
[](https://github.com/h2oai/h2ogpt)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the following libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.1
pip install einops==0.6.1
```
```python
import torch
from transformers import pipeline, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", padding_side="left")
generate_text = pipeline(model="h2oai/h2ogpt-research-oasst1-llama-65b", tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", prompt_type="human_bot")
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b/blob/main/h2oai_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", torch_dtype=torch.bfloat16, device_map="auto")
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer, prompt_type="human_bot")
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 8192, padding_idx=31999)
(layers): ModuleList(
(0-79): 80 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=8192, out_features=8192, bias=False)
(k_proj): Linear(in_features=8192, out_features=8192, bias=False)
(v_proj): Linear(in_features=8192, out_features=8192, bias=False)
(o_proj): Linear(in_features=8192, out_features=8192, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=8192, out_features=22016, bias=False)
(down_proj): Linear(in_features=22016, out_features=8192, bias=False)
(up_proj): Linear(in_features=8192, out_features=22016, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=8192, out_features=32000, bias=False)
)
```
## Model Configuration
```json
LlamaConfig {
"_name_or_path": "h2oai/h2ogpt-research-oasst1-llama-65b",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 0,
"custom_pipelines": {
"text-generation": {
"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
"pt": "AutoModelForCausalLM"
}
},
"eos_token_id": 1,
"hidden_act": "silu",
"hidden_size": 8192,
"initializer_range": 0.02,
"intermediate_size": 22016,
"max_position_embeddings": 2048,
"max_sequence_length": 2048,
"model_type": "llama",
"num_attention_heads": 64,
"num_hidden_layers": 80,
"pad_token_id": -1,
"rms_norm_eps": 1e-05,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.30.1",
"use_cache": true,
"vocab_size": 32000
}
```
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
TBD
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
Bellaaazzzzz/models_fill | Bellaaazzzzz | 2023-07-06T02:41:19Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-06T02:35:57Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Bellaaazzzzz/models_fill
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
Validation result of round 1.

Validation result of round 2.

|
asenella/mmnist_JMVAEconfig_resnet_seed_0_ratio_0_c | asenella | 2023-07-06T02:16:52Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-06T02:16:24Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
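For this specific checkpoint, that call would look like the following, assuming the repo id shown in this card's header:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/mmnist_JMVAEconfig_resnet_seed_0_ratio_0_c")
```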
|