modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
string | string | timestamp[us, tz=UTC] | int64 | int64 | string | sequence | string | timestamp[us, tz=UTC] | string
---|---|---|---|---|---|---|---|---|---
Word2vec/nlpl_220 | Word2vec | 2023-07-06T07:30:44Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Russian_National_Corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T08:01:16Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 249333 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model was trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Bag-of-Words algorithm with a window of 10 and a dimension of 300.
## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_220", filename="model.bin"), binary=True, unicode_errors="ignore")
```
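Once loaded, the vectors can be queried like any Gensim `KeyedVectors` object. A minimal sketch; the query word `день_NOUN` is illustrative only and assumes the NLPL `lemma_POS` vocabulary convention used for lemmatized, POS-tagged corpora:
```python
# Minimal usage sketch; "день_NOUN" assumes the NLPL lemma_POS vocabulary
# convention and is illustrative only.
print(model.most_similar("день_NOUN", topn=5))     # nearest neighbours
print(model.similarity("день_NOUN", "ночь_NOUN"))  # cosine similarity
```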
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/220.zip |
NTQAI/pedestrian_gender_recognition | NTQAI | 2023-07-06T07:29:58Z | 45,879 | 15 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"beit",
"image-classification",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-01-06T04:37:51Z | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: outputs
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9107332624867163
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the [PETA](http://mmlab.ie.cuhk.edu.hk/projects/PETA_files/Pedestrian%20Attribute%20Recognition%20At%20Far%20Distance.pdf) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2170
- Accuracy: 0.9107
## Model description
More information needed
#### How to use
You can use this model with the Transformers *pipeline* for image classification.
```python
from transformers import pipeline
gender_classifier = pipeline(model="NTQAI/pedestrian_gender_recognition")
image_path = "abc.jpg"
results = gender_classifier(image_path)
print(results)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5193 | 1.0 | 2000 | 0.3346 | 0.8533 |
| 0.337 | 2.0 | 4000 | 0.2892 | 0.8778 |
| 0.3771 | 3.0 | 6000 | 0.2493 | 0.8969 |
| 0.3819 | 4.0 | 8000 | 0.2275 | 0.9100 |
| 0.3581 | 5.0 | 10000 | 0.2170 | 0.9107 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]). |
Word2vec/nlpl_206 | Word2vec | 2023-07-06T07:29:52Z | 0 | 0 | null | [
"word2vec",
"pol",
"dataset:Polish_CommonCrawl_Dump_of_December_2019",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T08:09:12Z | ---
language: pol
license: cc-by-4.0
tags:
- word2vec
datasets: Polish_CommonCrawl_Dump_of_December_2019
---
## Information
A word2vec model trained by Krzysztof Wolk ([email protected]) on a vocabulary of size 4885806 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`.
The model was trained with the following properties: no lemmatization and POS tagging, using the fastText Skipgram algorithm with a window of 5 and a dimension of 100.
## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_206", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/206.zip |
NTQAI/pedestrian_age_recognition | NTQAI | 2023-07-06T07:28:59Z | 110,387 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"beit",
"image-classification",
"vision",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-01-09T03:36:33Z | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pedestrian_age_recognition_local
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8073394495412844
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pedestrian_age_recognition_local
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5004
- Accuracy: 0.8073
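The card does not include inference code, but since this is an image-classification checkpoint, the pipeline pattern from the sibling `NTQAI/pedestrian_gender_recognition` card should apply. A minimal sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: load the image-classification pipeline and run it on a local image.
# "pedestrian.jpg" is a placeholder path.
age_classifier = pipeline("image-classification", model="NTQAI/pedestrian_age_recognition")
results = age_classifier("pedestrian.jpg")
print(results)
```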
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8849 | 1.0 | 2008 | 0.7939 | 0.6807 |
| 0.9836 | 2.0 | 4016 | 0.6694 | 0.7336 |
| 0.8128 | 3.0 | 6024 | 0.5768 | 0.7668 |
| 0.7611 | 4.0 | 8032 | 0.5541 | 0.7833 |
| 0.6441 | 5.0 | 10040 | 0.5473 | 0.7773 |
| 0.5696 | 6.0 | 12048 | 0.5187 | 0.7971 |
| 0.6925 | 7.0 | 14056 | 0.5082 | 0.8038 |
| 0.5711 | 8.0 | 16064 | 0.5092 | 0.8098 |
| 0.7741 | 9.0 | 18072 | 0.5026 | 0.8020 |
| 0.5269 | 10.0 | 20080 | 0.5004 | 0.8073 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]). |
Word2vec/nlpl_184 | Word2vec | 2023-07-06T07:28:01Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Russian_News",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:55:10Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_News
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 249318 corresponding to 2550000000 tokens from the dataset `Russian_News`.
The model was trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_184", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/184.zip |
Word2vec/nlpl_183 | Word2vec | 2023-07-06T07:27:39Z | 0 | 0 | null | [
"word2vec",
"rus",
"dataset:Russian_National_Corpus",
"license:cc-by-4.0",
"region:us"
] | null | 2023-07-05T07:54:53Z | ---
language: rus
license: cc-by-4.0
tags:
- word2vec
datasets: Russian_National_Corpus
---
## Information
A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 248118 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`.
The model was trained with the following properties: lemmatization and POS tagging, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300.
## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_183", filename="model.bin"), binary=True, unicode_errors="ignore")
```
## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7
This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/183.zip |
digiplay/Zevinemix_v1.0 | digiplay | 2023-07-06T07:24:33Z | 255 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T04:38:41Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/103015?modelVersionId=110251
Sample images I made:


Original author's demo images:




|
atiiisham988/distilbert-base-uncased-finetuned-imdb | atiiisham988 | 2023-07-06T07:01:14Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T06:57:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
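The card does not include a usage example; since the model is tagged for fill-mask, the standard Transformers pipeline should work. A minimal sketch:
```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the fine-tuned checkpoint.
mask_filler = pipeline("fill-mask", model="atiiisham988/distilbert-base-uncased-finetuned-imdb")
for prediction in mask_filler("This movie was a great [MASK]."):
    print(prediction["token_str"], prediction["score"])
```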
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
afaan00733/my_awesome_model | afaan00733 | 2023-07-06T06:56:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-04T21:18:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6546
- Accuracy: 0.4737
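A minimal inference sketch, assuming the standard text-classification pipeline applies to this checkpoint:
```python
from transformers import pipeline

# Minimal sketch: the model is tagged text-classification, so the standard pipeline applies.
classifier = pipeline("text-classification", model="afaan00733/my_awesome_model")
print(classifier("This is exactly what I was looking for."))
```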
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6732 | 0.4737 |
| No log | 2.0 | 4 | 0.6546 | 0.4737 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rohanbalkondekar/spicy-caiman | rohanbalkondekar | 2023-07-06T06:55:23Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-06T06:48:59Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.1
pip install accelerate==0.20.3
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="BeRohan/spicy-caiman",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"BeRohan/spicy-caiman",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"BeRohan/spicy-caiman",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline yourself from the loaded model and tokenizer, handling the preprocessing steps explicitly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "BeRohan/spicy-caiman" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BeRohan/spicy-caiman --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
Broonion/RLcourse-pb-cartport | Broonion | 2023-07-06T06:53:57Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T06:53:45Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RLcourse-pb-cartport
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
IliyanGochev/whisper-small-bg | IliyanGochev | 2023-07-06T06:50:12Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"bg",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-05T08:04:03Z | ---
language:
- bg
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: whisper-small-bg
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_13_0 bg
type: mozilla-foundation/common_voice_13_0
config: bg
split: test
args: bg
metrics:
- name: Wer
type: wer
value: 44.67291341315287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-bg
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_13_0 bg dataset.
It achieves the following results on the evaluation set:
- Loss: 9.0612
- Wer: 44.6729
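The card omits inference code; as a fine-tuned Whisper checkpoint, the model should work with the automatic-speech-recognition pipeline. A minimal sketch (the audio path is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: transcribe a Bulgarian audio file with the fine-tuned checkpoint.
# "sample_bg.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="IliyanGochev/whisper-small-bg")
print(asr("sample_bg.wav")["text"])
```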
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.9319 | 6.76 | 1000 | 10.0774 | 73.9892 |
| 2.6116 | 13.51 | 2000 | 11.4089 | 67.0484 |
| 0.9607 | 20.27 | 3000 | 11.8266 | 60.9448 |
| 0.3464 | 27.03 | 4000 | 9.9500 | 52.1213 |
| 0.0122 | 33.78 | 5000 | 9.0612 | 44.6729 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JennnDexter/pokemon-lora | JennnDexter | 2023-07-06T06:44:42Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-12T06:24:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - JennnDexter/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.
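A minimal loading sketch, assuming a `diffusers` version that provides `load_lora_weights`:
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load the base model, then attach the LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_lora_weights("JennnDexter/pokemon-lora")
pipe.to("cuda")

# The prompt is only an example in the spirit of the training data.
image = pipe("a cute green pokemon with big eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```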




|
aroot/eng-mya-simcse_central | aroot | 2023-07-06T06:36:12Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T06:14:05Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_central
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_central
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8980
- Bleu: 4.1973
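The card omits inference code; a minimal sketch using the MBart-50 interfaces, assuming the `en_XX` → `my_MM` language codes carried over from the base model:
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Minimal sketch: English -> Burmese translation with the fine-tuned checkpoint.
# The en_XX / my_MM codes are assumed from the mbart-large-50 base model.
model_name = "aroot/eng-mya-simcse_central"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "en_XX"
inputs = tokenizer("How are you today?", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```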
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hchung1017/aihub_012_streaming_transformer | hchung1017 | 2023-07-06T06:35:19Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"ko",
"dataset:aihub_012",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2023-07-06T06:33:08Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: ko
datasets:
- aihub_012
license: cc-by-4.0
---
## ESPnet2 ASR model
### `hchung1017/aihub_012_streaming_transformer`
This model was trained by hchung1017 using the aihub_012 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout f4d7fead71e2a99541a8d3d66d6e00a33d9e82df
pip install -e .
cd egs2/aihub_012/asr1
./run.sh --skip_data_prep false --skip_train true --download_model hchung1017/aihub_012_streaming_transformer
```
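For inference from Python (rather than the recipe script), ESPnet2 exposes a `Speech2Text` interface; a minimal sketch, assuming the checkpoint can be loaded directly from the Hub (the audio path is a placeholder):
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Minimal sketch: download the model from the Hub and decode one utterance.
# "sample_ko.wav" is a placeholder path; 16 kHz mono audio is assumed.
speech2text = Speech2Text.from_pretrained("hchung1017/aihub_012_streaming_transformer")
speech, rate = sf.read("sample_ko.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```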
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Jun 22 19:10:44 KST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202304`
- pytorch version: `pytorch 1.13.1`
- Git hash: `f4d7fead71e2a99541a8d3d66d6e00a33d9e82df`
- Commit date: `Wed May 24 14:58:35 2023 -0400`
## exp/asr_train_asr_streaming_transformer_raw_ko_bpe5000_sp/decode_asr_streaming_asr_model_valid.acc.ave
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|3794053|89.3|9.3|1.3|1.5|12.1|29.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|17636048|94.6|3.1|2.4|1.7|7.2|29.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|797676|4325914|87.8|8.3|3.9|1.5|13.8|29.5|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_streaming_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_streaming_transformer_raw_ko_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 32945
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- cer_ctc
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 35000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_ko_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_ko_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_ko_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_ko_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - /data/dump/aihub_012/raw/train_sp/wav.scp
- speech
- sound
- - /data/dump/aihub_012/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /data/dump/aihub_012/raw/dev/wav.scp
- speech
- sound
- - /data/dump/aihub_012/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0015
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁I
- ▁YOU
- ''''
- S
- ▁WHAT
- ▁A
- ▁IT
- ▁TO
- ▁IS
- ▁THE
- ▁ARE
- ▁CAN
- ▁OKAY
- ▁YES
- ▁DO
- ▁THAT
- ▁SEE
- T
- ▁HE
- ▁HOW
- ▁ME
- ▁HAVE
- ▁MY
- ▁GOOD
- ▁REALLY
- ▁SO
- ▁FOR
- ▁AM
- ▁SURE
- ▁OH
- ▁GO
- ▁WHY
- ▁NO
- ▁YOUR
- ▁RIGHT
- ▁HELP
- ’
- ▁DON
- ▁NOT
- ▁HI
- ▁HERE
- ▁DID
- ▁LIKE
- ▁AND
- ▁TOO
- ▁SHE
- ▁THIS
- ▁HELLO
- M
- ▁KNOW
- ▁WANT
- RE
- ▁NEED
- ▁WILL
- ▁ABOUT
- ▁THERE
- ▁LET
- ▁OF
- ▁IN
- ▁BE
- ▁BUT
- ▁THINK
- ▁SOMETHING
- ▁LOOK
- ▁NOW
- ▁NICE
- ▁THEN
- ▁
- ▁WE
- ▁GREAT
- ▁THANK
- ▁WITH
- ▁TELL
- ▁PROBLEM
- ▁HER
- ▁GOING
- ▁WAS
- ▁DOING
- ▁ASK
- ▁THANKS
- ▁HEY
- ▁BACK
- ▁WRONG
- ▁THEY
- ▁ON
- ▁HIM
- ▁UP
- ▁AT
- LL
- ▁WELL
- ▁GET
- ▁WHERE
- VERY
- ▁SOME
- ▁PEOPLE
- ▁ALL
- ▁MEAN
- ▁PLEASE
- ▁TIME
- ▁WHO
- ▁GOT
- ▁WELCOME
- ▁MAKE
- ▁COME
- ▁MEET
- ▁NEW
- ▁LOT
- ▁MOM
- ▁SAID
- ▁SHOULD
- ▁HAPPY
- ▁HIS
- ▁BUSY
- ▁BYE
- ▁QUESTION
- ▁SAY
- ▁TAKE
- ▁MORE
- ▁SORRY
- ▁IDEA
- ▁OUT
- ▁FINE
- ▁PLAY
- ▁ANY
- ▁AGAIN
- ▁BECAUSE
- ▁FROM
- ▁AN
- ▁WHEN
- ▁TRY
- ▁HAS
- ▁TODAY
- ▁READY
- ▁HOPE
- ▁GIVE
- ▁BIG
- ▁FRIEND
- ▁WRITE
- ▁EAT
- ▁ONE
- ▁BAD
- ▁MUCH
- ▁SOON
- ▁MANY
- ED
- ▁THEM
- ▁ANGRY
- ▁LATER
- ING
- ▁MAYBE
- ▁DAD
- ▁FIND
- ▁DOWN
- ▁WORRY
- ▁SHOW
- ▁COURSE
- ▁DAY
- ▁SOUNDS
- ▁DOES
- ▁STRANGE
- ▁TALK
- ▁FUN
- ▁REMEMBER
- ▁ANYTHING
- ▁BUY
- ▁LETTER
- ▁JUST
- ▁MADE
- ▁READ
- ▁CANNOT
- ▁WANTS
- ▁WOW
- ▁DIDN
- ▁IF
- ▁GLAD
- ▁WAY
- ▁MUST
- ▁SCHOOL
- ▁BOOK
- ▁LOOKING
- ▁TOLD
- ▁NAME
- ▁HEAR
- ▁TOY
- ▁TRUE
- ▁TEACHER
- ▁US
- ▁WORK
- ▁TWO
- ▁SONG
- ▁HARD
- ▁LOVE
- ▁THINGS
- ▁SING
- ▁BETTER
- ▁HOME
- ▁LINKER
- ▁UNDERSTAND
- ▁LOOKS
- ▁KIND
- ▁HOUSE
- LUE
- ▁DRESS
- ▁BY
- ▁BEST
- ▁LONG
- ▁NEWS
- ▁WENT
- ▁HAPPENED
- ▁OLD
- ▁KEEP
- ▁NEXT
- ▁CHECK
- D
- ▁SPECIAL
- ▁USE
- ▁LIKES
- ▁EVERYTHING
- ▁FEEL
- ▁ROBOT
- ▁SAD
- ▁PLEASURE
- ▁JOE
- ▁COOL
- ▁TOMORROW
- ▁LUCK
- ▁DOESN
- ▁BOX
- ▁AROUND
- ▁HOMEWORK
- ▁ALWAYS
- ▁MORGAN
- ▁PUT
- ▁THESE
- ▁GAVE
- ▁HEARD
- ▁WAIT
- ▁PRESENT
- ▁SOMEONE
- ▁PARTY
- ▁BIRTHDAY
- ▁RANDY
- ▁FRIENDS
- ▁MONEY
- ▁DONE
- ▁CAR
- ▁COFFEE
- ▁MUSIC
- ▁BEN
- ▁BEEN
- ▁STILL
- ▁GREEN
- ▁STAR
- ▁PERSON
- ▁WERE
- ▁STORY
- ▁ELSE
- ▁IDEAS
- ▁TOGETHER
- ▁MILK
- ▁WOULD
- ▁SOUND
- ▁THAN
- ▁TALKED
- ▁EVERY
- ▁NEEDS
- ▁SAW
- ▁HAIR
- ▁CHANGE
- ▁WORRIED
- ▁EASY
- ▁FOOD
- ▁DOG
- VE
- ▁CONCERT
- ▁MAKING
- ▁MONSTER
- ▁BOY
- ▁PHOTO
- ▁SCARY
- ▁RED
- ▁BROTHER
- ▁FIRST
- ▁DANCE
- ▁BEFORE
- ▁PRETTY
- ▁DRINK
- ▁WISH
- ▁HARRY
- ▁CALM
- ▁CAT
- ▁WEAR
- ▁BLUE
- ▁MESSAGE
- ▁TRUST
- ▁ONLY
- ▁HAD
- ▁THREE
- ▁AWAY
- ▁MIND
- ▁MAKES
- ▁GRANDMOTHER
- ▁WATCH
- ▁EMMA
- ▁AMY
- ▁TIRED
- ▁CLASS
- ▁MAN
- ▁DAN
- ▁COULD
- ▁BRING
- ▁SMALL
- ▁ANYWAY
- ▁OUR
- ▁ROOM
- ▁AFTER
- ▁BELIEVE
- ▁BOOKS
- ▁TEN
- ▁DEVILMON
- ▁JOB
- ▁OVER
- ▁COMING
- ▁STOP
- ▁FUNNY
- ▁DIANA
- ▁TOYS
- ▁FAST
- ▁MORNING
- ▁NUMBER
- ▁NOTHING
- ▁TOWN
- ▁OPEN
- ▁OTHER
- ▁PHONE
- ▁CARE
- ▁LEAVE
- ▁CONTEST
- ▁WOODY
- ▁THINKING
- Y
- ▁ANOTHER
- A
- ▁ENGLISH
- ▁SICK
- ▁BRAVE
- ▁TROY
- ▁EATING
- ▁SLEEP
- ▁THEIR
- ▁SELL
- ▁DELICIOUS
- ▁OFF
- ▁WATER
- ▁PICTURE
- ▁CAME
- ▁EVERYONE
- ▁PAPER
- ▁PARK
- ▁PAINT
- ▁SHOP
- ▁CREAM
- ▁TV
- ▁BOUGHT
- ▁CAREFUL
- ▁ROBBY
- ▁FOUND
- ▁STONE
- ▁SISTER
- ▁HURRY
- ▁BAG
- ▁WAKE
- ▁SYRUP
- ▁DRAW
- ▁ENERGY
- ▁SHOES
- ▁IMPORTANT
- ▁NEVER
- ▁LISTEN
- ▁WON
- ▁DOOR
- ▁POP
- ▁LAST
- ▁DIFFERENT
- ▁FISH
- ▁SAVE
- ▁HEALTHY
- ▁UNCLE
- ▁NIGHT
- UCH
- ▁PLACE
- ▁DARK
- ▁GUESS
- ▁LATE
- ▁PIE
- N
- ▁PRACTICE
- ▁MONICA
- ▁ANYONE
- ▁READING
- ▁COLOR
- ▁SALLY
- ▁BLACK
- ▁MOVIE
- ▁TROUBLE
- ▁COLD
- ▁STUDY
- ▁LITTLE
- ▁WHITE
- ▁CHEER
- ▁SCARED
- ▁POSTER
- ▁TALKING
- ▁TEACH
- ▁WALK
- ▁CAKE
- ▁INTO
- ▁FIGHT
- ▁ALREADY
- ▁SLEEPY
- ▁STRONG
- ▁OLIVIA
- ▁CALL
- ▁WROTE
- ▁ICE
- ▁OR
- ▁SCOTT
- ▁LIBRARY
- ▁NANCY
- ▁LUMY
- ▁HAT
- ▁YET
- ▁ALEX
- ▁SHORT
- ▁CLOTHES
- ▁YESTERDAY
- ▁FAVORITE
- ▁SWEET
- ▁FIVE
- ▁HOLD
- ▁LUNCH
- ▁PLAYING
- ▁GARY
- ▁HANDS
- ▁LEFT
- ▁ASKED
- ▁CHEESE
- ▁FACE
- ▁BORROW
- ▁SPEAK
- ▁INTERESTING
- ▁MAY
- ▁BEAR
- ▁SIGN
- ▁SHADOW
- ▁FLOWERS
- ▁PINO
- ▁ERIN
- ▁FOREST
- ▁GAME
- ▁MR
- ▁WANTED
- ▁RUN
- ▁SPELL
- ▁PEN
- ▁SHOPPING
- ▁COOK
- ▁DAYS
- ▁BED
- ▁BEAUTIFUL
- ▁MUSEUM
- ▁CLEAN
- ▁REST
- ▁SAME
- ▁DOCTOR
- ▁YOURSELF
- ▁DINNER
- ▁DANGEROUS
- ▁SECRET
- ▁STORE
- ▁TREE
- ▁MIGHT
- ▁MAYOR
- ▁CHARLIE
- ▁PIZZA
- ▁FOUR
- ▁SIR
- ▁SEEN
- ▁TURN
- ▁ENJOY
- ▁CLARA
- ▁ANYTIME
- ▁LIVE
- ▁LOST
- ▁SANDRA
- ▁DURING
- ▁MYSELF
- ▁TALL
- ▁MINE
- ▁CHOOSE
- ▁TOOK
- ▁WAITING
- ▁S
- ▁SUNNY
- ▁SINGING
- ▁ACADEMY
- ▁AHEAD
- ▁HURT
- ▁CLOCK
- ▁PAINTING
- ▁RAN
- ▁ALONE
- ▁USED
- ▁PLAN
- ▁THEATER
- ▁HAND
- ▁WEEK
- ▁CATCH
- ▁SEND
- ▁CUBE
- ▁ERIC
- ▁WOOD
- ▁HOT
- ▁DEVILMONS
- ▁FREE
- ▁STAY
- ▁PROMISE
- ▁RULE
- ▁HUNGRY
- ▁WORKING
- ▁HAPPEN
- ▁VIKI
- ▁FAMILY
- ▁CHICKEN
- ▁FORGET
- ▁YELLOW
- ▁BROWN
- ▁VACATION
- ▁KELLY
- ▁JACK
- ▁SINGER
- ▁HAMMER
- ▁SAYS
- ▁TRAIN
- ▁FIX
- ▁CUTE
- ▁EVEN
- ▁SANTA
- ▁SLEEPING
- ▁BUS
- ▁BARBECUE
- ▁AGREE
- ▁COULDN
- ▁MISS
- E
- ▁GRACE
- ▁TRASH
- ▁BABY
- ▁LUMA
- ▁CHILDREN
- ▁EXCUSE
- ▁DPOP
- ▁OUTSIDE
- ▁ORDER
- ▁MATTER
- ▁RIDE
- ▁SUMMER
- ▁CLOSE
- ▁MOVE
- ▁JUICE
- ▁TOUCH
- ▁CARD
- ▁THOSE
- ▁HAIRSTYLE
- ▁RICH
- ▁BREAK
- ▁ANYMORE
- ▁TRIP
- ▁EYES
- ▁LEARN
- IC
- ▁YOUNGER
- ▁SMELLS
- ▁CHRIS
- ▁ITEMS
- ▁STONES
- ▁CUT
- ▁STUDENT
- ▁CALLED
- ▁SHINE
- ▁ATE
- ▁PERFECT
- ▁BETIA
- ▁MOVING
- LY
- ▁FIRE
- ▁D
- ▁CHRISTMAS
- ▁RUNNING
- ▁LINE
- ▁JACKET
- ▁WHICH
- ▁GIFT
- ▁SMILE
- ▁WEARING
- ▁STELLA
- ▁SEVEN
- ▁ANSWER
- ▁YEAR
- ▁MOST
- ▁WENDY
- RA
- ▁BALL
- ▁THING
- ▁FIFTY
- ▁YOUNG
- ▁FRONT
- ▁LIKED
- ▁WINDOW
- ▁BEING
- ▁RICE
- ▁HOBBY
- ▁BRUCE
- ▁ALVIN
- ▁CHAIR
- ▁ELEVEN
- ▁INTERVIEW
- ▁TRUMPET
- ▁DRAWING
- ▁WHILE
- ▁HAV
- ▁NEWSPAPER
- ▁WRITING
- ▁FRUIT
- ▁BEHIND
- ▁EVENT
- ▁HAVEN
- ▁BELLOW
- ▁YEARS
- ▁DIV
- ▁VICTORIA
- ▁SENT
- ▁STYLE
- ▁LUNA
- ▁AUNT
- ▁DREAM
- ▁PICTURES
- ▁LEO
- ▁QUESTIONS
- ▁PRICE
- ▁APPLE
- ▁SCHEDULE
- ▁TABLE
- ▁PLANT
- ▁BELL
- ▁SUSAN
- ▁SHIRT
- ▁GRANDFATHER
- ▁EXPENSIVE
- ▁GUYS
- ▁THOUGHT
- ▁OSCAR
- ▁TIMES
- ▁ACTUALLY
- ▁CHANCE
- ▁PAY
- ▁WASH
- ▁JUGGLING
- ▁JULIA
- ▁MAKEUP
- ▁PIANO
- ▁GOES
- ▁QUIZ
- ▁OFTEN
- ▁THIRTY
- ▁SMART
- ▁WEEKEND
- ▁CHOCOLATE
- ▁BATHROOM
- ▁CANDY
- ▁SPEECH
- ▁FEELING
- ▁RADIO
- ▁HECTOR
- ▁KNOWS
- ▁GRANDMA
- ▁SEEM
- ER
- ▁START
- ▁PENCIL
- ▁SUNDAY
- ▁WORD
- ▁MOUSE
- ▁PLAYGROUND
- ▁BREAD
- ▁MAGIC
- ▁CD
- ▁BROKEN
- ▁COLIN
- ▁DIRTY
- ▁MOTHER
- ▁DESK
- ▁BORING
- ▁SOUP
- ▁ONCE
- ▁WORKED
- ▁COUNT
- ▁EXCITED
- ▁PARADE
- ▁GUITAR
- ▁PM
- ▁FINISH
- ▁BLOCK
- ▁FISHING
- ▁VOICE
- ▁ROGER
- ▁WORKS
- ▁PLAYER
- ▁GLASSES
- ▁LAB
- ▁SIGH
- ▁LOVES
- ▁MODEL
- ▁EXERCISE
- ▁O
- ▁POINT
- ▁SWIMMING
- ▁MARKET
- ▁NOTE
- ▁SECOND
- ▁LUCKY
- ▁BROKE
- ▁CAVE
- ▁SHALL
- ▁KID
- ▁HANG
- ▁MICHAEL
- ▁DANCING
- ▁COM
- ▁MASK
- TING
- ▁KYLE
- ▁FRIDAY
- ▁MELOD
- ▁DOUGLAS
- ▁ENOUGH
- ▁LEARNED
- ▁ALICE
- ▁NEWSPAPERS
- ▁NEAR
- ▁GIRL
- ▁LAURA
- ▁BANK
- ▁ORANGE
- ▁HEART
- ▁SNACKS
- ▁BANANA
- ▁AFRAID
- ▁NOISE
- ▁AARON
- ▁SIDE
- ▁POSSIBLE
- ▁ISN
- ▁UPSET
- ▁KATHY
- ▁ENTER
- ▁STATUE
- ▁FAVOR
- ▁CAPSULE
- ▁CLUB
- ▁BORED
- ▁STREET
- ▁FAR
- ▁BROUGHT
- ▁HENRY
- ▁BRIAN
- ▁FLOOR
- ▁RECORD
- ▁SUN
- ▁BORN
- ▁GONE
- ▁ELEPHANT
- ▁FATHER
- ▁BEAT
- ▁MISTAKE
- NY
- ▁MEGAN
- ▁JIN
- ▁CARL
- ▁FACTORY
- ▁HORSE
- ▁STANLEY
- ▁WIN
- ▁AFTERNOON
- ▁LIVED
- ▁HIGH
- ▁LEAVING
- ▁MINUTES
- ▁WALL
- ▁SURPRISE
- ▁DAVID
- ▁TWENTY
- ▁BIRD
- ▁NICK
- ▁REASON
- ▁OWN
- ▁STEVE
- ▁LADY
- ▁COMES
- ▁STATION
- ▁DOLL
- ▁JADE
- ▁STAND
- ▁FAMOUS
- ▁PLAYED
- ▁TSHIRT
- ▁HUEY
- ▁SEA
- ▁SIX
- ▁REPORT
- ▁POPULAR
- ▁PICK
- ▁TONY
- ▁TINA
- ▁KIDS
- ▁WEATHER
- ▁TREES
- ▁TIFFANY
- ▁WONDERFUL
- ▁RING
- ▁SOMEWHERE
- ▁LIGHT
- ▁NOSE
- ▁AUDREY
- ▁CAMERA
- ▁GARDEN
- ▁SOCCER
- ▁PIG
- ▁FRESH
- ▁NOBODY
- ▁AMANDA
- ▁SURPRISED
- ▁STOPPED
- ▁CITY
- ▁KOREAN
- ▁HISTORY
- ▁STUDENTS
- ▁COOKING
- L
- ▁LOUD
- ▁LOSE
- ▁PINK
- ▁LIE
- ▁CRAYONS
- ▁HEALTH
- ▁HANDWRITING
- ▁JOIN
- ▁THROW
- ▁INFORMATION
- ▁DIFFICULT
- ▁SOMETIMES
- ▁BIKE
- ▁WOMAN
- ▁FLOWER
- ▁WORDS
- ▁GHOST
- ▁RICKY
- R
- ▁TEETH
- ▁SAYING
- ▁PIECE
- ▁DR
- ▁CHANGED
- ▁SIT
- ▁ARTICLE
- ▁ARM
- ▁BECOME
- ▁MONKEY
- ▁YEAH
- ▁JUDY
- ▁FOLLOW
- ▁ALSO
- ▁GAMES
- ▁BAND
- ▁COMPUTER
- ▁ANDRE
- ▁EATS
- ▁MATH
- ▁EXACTLY
- ▁ART
- ▁JUMP
- ▁FOODS
- ▁PRESENTS
- ▁RABBIT
- ▁SMELL
- ▁HEAVY
- ▁SWIM
- ▁RICHARD
- ▁GRASS
- ▁BOTHER
- ▁PANTS
- ES
- ▁ALMOST
- ▁HELPING
- ▁ZOO
- ▁SHOULDN
- ▁FAN
- ▁EGGS
- ▁ELLA
- ▁RESTAURANT
- ▁CHIPS
- ▁BIGGER
- ▁MONDAY
- ▁CATS
- ▁STUDYING
- ▁TONIGHT
- ▁BRADY
- ▁SERIOUS
- ▁FORGOT
- ▁VISIT
- ▁BUILDING
- ▁SET
- ▁HANDSOME
- ▁CLAUS
- ▁RALPH
- ▁COMPANY
- ▁SEAT
- ▁ANDREW
- ▁WITHOUT
- EN
- ▁MEAT
- ▁BOARD
- ▁CLASSES
- ▁FLY
- ▁BIT
- ▁ANGELA
- ▁POLICE
- ▁BET
- ▁FINISHED
- ▁EITHER
- ▁SKY
- ▁POLIA
- ▁EIGHT
- ▁AMAZING
- ▁INSIDE
- ▁SATURDAY
- ▁DINOSAUR
- ▁DEVERYTHING
- ▁BRUSH
- ▁VIVIEN
- ▁BREAKFAST
- ▁QUICKLY
- ▁HEAD
- ▁CAROL
- ▁EACH
- ▁BANANAS
- ▁JAZZ
- ▁OWEN
- ▁LEAVES
- ▁HELPED
- ▁WINTER
- ▁REAL
- ▁TRUTH
- ▁RIVER
- ▁ROAD
- ▁ANNA
- ▁INTERESTED
- ▁EVERYBODY
- ▁HIMSELF
- ▁TAKES
- ▁LADDER
- ▁BOTH
- ▁CLASSROOM
- ▁STUDIED
- ▁HALL
- MAS
- ▁STARTED
- ▁THO
- ▁REFUND
- ▁EARLY
- ▁MARK
- ▁TRIED
- ▁CRY
- ▁CUP
- ▁DEAL
- ▁LEGS
- ▁PARTNER
- ▁NINE
- ▁MONTH
- ▁CRYSTAL
- ▁MRS
- ▁WHOM
- ▁QUIET
- ▁TICKET
- ▁TRYING
- ▁JELLY
- ▁TEST
- ▁OFFICE
- ▁BICYCLE
- ▁HOSPITAL
- ▁POOL
- ▁DOGS
- ▁LIVES
- ▁NOISY
- ▁TASTE
- ▁FEET
- ▁PASTA
- ▁HANS
- AL
- ▁PAST
- ▁PRIZE
- ▁KEY
- ▁COUPON
- ▁TIMMY
- ▁AREN
- ▁MEMO
- ▁TEACHE
- ▁PRACTICING
- ▁ANIMAL
- ▁MOUTH
- ▁WORLD
- ▁UNDER
- ▁WATCHING
- ▁FELL
- ▁DRIVE
- ▁BEACH
- ▁CLEAR
- ▁JOKES
- ▁GAVIN
- ▁ADD
- CLOCK
- ▁HELPER
- ▁JULIE
- ▁WEIRD
- ▁SINCE
- ▁MILLER
- ▁TIE
- ▁FRUITS
- ▁HOUR
- ▁ANIMALS
- ▁TWICE
- ▁WARM
- ▁LARGE
- ▁UNTI
- ▁JAMES
- ▁DOLLARS
- ▁STORIES
- ▁MEAL
- ▁APPLES
- ▁CRYING
- ▁DIET
- ▁HEADPHONES
- ▁MEMORI
- ▁COMPLIMENT
- ▁TRIANGLE
- ▁DIARY
- ▁TOWER
- ▁EYE
- ▁SALE
- ▁BUILT
- ▁CARROT
- ▁ORDERED
- ▁ITEM
- ▁SLOW
- ▁NAOMI
- ▁TUESDAY
- ▁SENSE
- ▁PARENTS
- ▁GIV
- ▁BUSINESS
- ▁EVER
- ▁TYLER
- ▁FORWARD
- ▁CELL
- ▁SHUT
- ▁COAT
- ▁PRINCE
- ▁HATE
- ▁PUPPET
- ▁FULL
- ▁WOULDN
- ▁TERRIBLE
- ▁CARDS
- ▁MAP
- ▁STAMP
- ▁SNACK
- ▁SNOW
- ▁RUBY
- ▁SLOWLY
- ▁EDDY
- ▁EASILY
- ▁LAZY
- ▁BLOCKS
- ▁EARS
- ▁COLORS
- ▁TTEOKBOKKI
- ▁CAREFULLY
- ▁MARRIED
- ▁VILLAGE
- ▁HEADACHE
- ▁MOUNTAIN
- ▁PETER
- ▁FAT
- ▁MARRY
- WEEN
- ▁RYAN
- ▁DISHES
- ▁JIM
- ▁FIELD
- ▁CINDY
- ▁FEW
- ▁STARS
- ▁UMBRELLA
- ▁GROW
- ▁FROG
- ▁RULER
- ▁BASKETBALL
- ▁PART
- ▁ORLANDO
- ▁CORRECT
- ▁GRANDPA
- ▁ADVICE
- ▁ARMS
- SE
- ▁PHOTOS
- ▁KICKBOARD
- ▁JACOB
- ▁DANGER
- ▁BOOTS
- ▁GIANT
- ▁BATH
- ▁VISITOR
- ▁PROMISED
- ▁SNAKE
- ▁GLASS
- ▁RAISE
- ▁SPICY
- ▁TURNED
- ▁MEETING
- ▁VIOLIN
- ▁MINUTE
- ▁DAISY
- ▁BUTTON
- ▁OTHERS
- ▁DELIVERY
- ▁WASN
- ▁JOGGING
- ▁SOFA
- ▁FINGERS
- ▁NICOLE
- ▁TALLER
- ▁RUNS
- ▁BENJAMIN
- ▁GOLD
- ▁LUCAS
- ▁SNOWMAN
- ▁LOVED
- ▁SANDWICH
- ▁STRAIGHT
- ▁AGAINST
- ▁BALLOONS
- ▁KEPT
- ▁CLOSED
- ▁PENS
- ▁MAX
- ▁LEG
- ▁FILL
- ▁QUIT
- ▁ANYBODY
- ▁JEFF
- ▁ANN
- ▁EVAN
- ▁MISSED
- ▁TAEKWONDO
- ▁JOY
- ▁PUSH
- ▁WOODWARD
- ▁ROSS
- ▁LISA
- ▁PULL
- ▁NECTAR
- ▁VASE
- ▁RABBITS
- ▁BOW
- ▁BUGS
- ▁SAFE
- GETTING
- ▁CASH
- ▁LAMP
- ▁DOLLS
- ▁YUMMY
- ▁MEDICINE
- ▁SPORTS
- ▁ENDS
- ▁BASEBALL
- ▁THROUGH
- ▁CENTER
- ▁FIGHTER
- ERS
- ▁PACKAGE
- ▁WORMS
- ▁SHAPE
- ▁DISAPPOINTED
- ▁PHILLIP
- ▁DINOSAURS
- ▁SALAD
- ▁HAMBURGER
- ▁COOKIES
- ▁PASS
- ▁CHEAP
- ▁STAGE
- ▁COLORED
- ▁TYPE
- ▁EVENING
- ▁CRIED
- ▁SHOWER
- ▁WALLET
- ▁FIFTEEN
- ▁HERO
- ▁USUALLY
- ▁GATE
- ▁TEAM
- ▁PLANE
- ▁DRESSES
- ▁SOLD
- ▁CRAYON
- LE
- ▁HIDE
- ▁BODY
- ▁MEN
- ▁HAIRSTYLES
- ▁BOAT
- ▁WONDER
- ▁RAIN
- ▁FEELS
- ▁NERVOUS
- ▁CHILD
- ▁MIRROR
- ▁BUG
- ▁LONGER
- ▁LOUIS
- ▁AIR
- ▁STOMACHACHE
- ▁ASKING
- ▁OWNER
- ▁KNEW
- ▁BELT
- I
- ▁MAGAZINE
- ▁HOP
- ▁SUGAR
- ▁END
- ▁TAKING
- ▁LIGHTS
- ▁EMPTY
- ▁PUPPY
- ▁DUCK
- ▁SUPERMARKET
- ▁APARTMENT
- ▁ADDRESS
- ▁MACHINE
- ▁JASON
- ▁CARRY
- ▁DRY
- ▁EXCITING
- ▁BOTTLE
- ▁RIDING
- ▁CHARCOAL
- ▁TRAVIS
- ▁UGLY
- ▁CAUGHT
- ▁PROBAB
- ▁PROJECT
- ▁LISTENING
- ▁JUGGLE
- ▁ROPE
- ▁BILL
- ▁HOURS
- ▁MOLLY
- ▁SOPHIE
- ▁WEARS
- ▁LIFE
- ▁CAFE
- ▁HURTS
- ▁RELAX
- ▁TED
- ▁COPY
- ▁COTTON
- ▁ALONG
- ▁OFFER
- ▁DATE
- ▁LI
- ▁YOUTUBE
- ▁JOKE
- ▁BARREL
- ▁DIED
- ▁SINGS
- ▁SEVERAL
- ▁TALENT
- ▁CARTER
- ▁PASSWORD
- ▁CASE
- ▁SCISSORS
- ▁YORK
- ▁FANTASTIC
- ▁CLOUDY
- ▁ROUND
- ▁BUILD
- ▁PRINCESS
- ▁RAINY
- ▁GRAPES
- ▁SKIRT
- ▁LION
- ▁FASTER
- ▁FASHION
- ▁AD
- ▁EXPLAIN
- ▁DOCK
- ▁MATCH
- ▁BOMB
- ▁STADIUM
- ▁WOODS
- ▁FALL
- ▁MAD
- ▁TRUCK
- ▁STEP
- ▁ANSWERS
- ▁KIDDING
- ▁MOON
- ▁BEAN
- ▁PICKED
- ▁LESSON
- ▁KNOWN
- ▁HAPPENING
- ▁BLUEBERRIES
- ▁SANDWICHES
- ▁BUTTER
- ▁BEDROOM
- ▁ABOVE
- ▁LEGO
- ▁HELENA
- ▁FOOTPRINT
- ▁SHIP
- ▁TAP
- ▁HILL
- ▁CHURCH
- ▁GOODBYE
- ▁LEMON
- ▁HUNDRED
- ▁COWARD
- ▁ARRIVED
- ▁WATERMELON
- ▁BOXES
- ▁FINALLY
- ▁MAIN
- ▁KEVIN
- BINGO
- ▁BONES
- ▁SPOKE
- ▁DONUTS
- ▁HENNA
- ▁LETTERS
- ▁PAM
- ▁LESS
- ▁WEDDING
- ▁POCKET
- ▁SHY
- ▁NOWHERE
- ▁MIC
- ▁NAMES
- ▁SONGS
- MED
- ▁DECIDED
- ▁KITCHEN
- ▁SHINING
- ▁LOVELY
- ▁SEASON
- ▁STEAK
- ▁DRUM
- ▁TEDDY
- ▁SHINY
- ▁GIRLS
- ▁AUDITION
- ▁ACTING
- ▁NECK
- ▁ROSA
- ▁SNEAKERS
- ▁SHOE
- ▁QUITE
- ▁HOTEL
- ▁LEATHER
- ▁WIND
- ▁COUSIN
- ▁JANET
- ▁ONIONS
- ▁DEAD
- ▁PROUD
- ▁PET
- ▁HELPFUL
- ▁TOILET
- ▁FORTY
- ▁JAKE
- ▁BUTTERFLY
- ▁KICK
- ▁BIRDS
- ▁ABROAD
- ▁TEA
- ▁STARTS
- ▁MEALS
- ▁AIRSHIPS
- ▁SOFT
- ▁MATT
- ▁BLANKET
- ▁WINDY
- ▁PLAYS
- ▁COVER
- ▁WEIGHT
- ▁PURPLE
- ▁HIDING
- ▁TAGS
- ▁F
- ▁WHATEVER
- ▁AIRSHIP
- ▁LIVING
- ▁MAT
- ▁KINDERGARTEN
- ▁POND
- ▁LAUNDRY
- O
- ▁NOTEBOOK
- ▁HELEN
- ▁SWEATER
- ▁TEACHING
- ▁FAULT
- ▁SQUARE
- ▁HONEST
- ▁LOUDER
- CAME
- ▁3
- ▁DROP
- ▁GUY
- ▁GIRLFRIEND
- ▁RAINING
- ▁SPIDER
- ▁FLYER
- ▁WATCHED
- ▁B
- ▁LOW
- ▁COUSINS
- ▁OLDER
- DY
- ▁ROCK
- ▁MOMENT
- ▁SHEET
- ▁LAUGH
- ▁BLUEBERRY
- ▁NEIGHBORHOOD
- ▁GRADE
- ▁STICKER
- ▁OPENING
- ▁ALRIGHT
- ▁OFFICER
- ▁PI
- ▁WEDNESDAY
- ▁BITE
- ▁CONTINUE
- TIME
- ▁SAIN
- ▁COSTUME
- ▁MOVED
- ▁BOOKCASE
- ▁DENTIST
- ▁STOPS
- ▁SAM
- ▁APRIL
- ▁THIRSTY
- ▁MOOD
- ▁PEA
- ▁ENTRY
- ▁SERVICE
- ▁ABLE
- ▁FRIED
- ▁W
- ▁FLASH
- ▁KATRINA
- ▁REPAIR
- ▁TI
- ▁GIMBAP
- NDA
- ▁ANNIVERSARY
- ▁NAMED
- ▁WRITTEN
- ▁CUSTOMERS
- ▁COLLECT
- ▁BONGOS
- ▁EGG
- ▁BAT
- ▁RIBS
- ▁SAT
- ▁RETURN
- LIGHT
- BACK
- CA
- NESS
- ▁FACES
- ▁CALLING
- ▁HOLIDAY
- ▁HOLE
- ▁MILLION
- ▁DELIVER
- ▁10
- ▁TAXI
- ▁HASN
- ▁MINDS
- ▁DONALD
- ▁MISTAKES
- ▁SPRING
- ▁MENTION
- ▁NEITHER
- ▁TOWEL
- ▁BEANS
- ▁WILLIAM
- ▁BRIGHT
- ▁STOMACH
- ▁CANDIES
- ▁BURGERS
- ▁FEAR
- ▁DECIDE
- ▁FEVER
- ▁FANS
- ▁STUDIO
- ▁LIAR
- ▁BREAKING
- ▁SLEPT
- ▁TAIL
- ▁BURGER
- ▁MOVIES
- ▁SMOKE
- ▁DANIEL
- ▁WAITER
- ▁PENCILS
- ▁CROSS
- ▁KOREA
- ▁GUARD
- ▁LEARNING
- ▁SUBWAY
- ▁CARS
- ▁SKIP
- ▁MIX
- ▁JEANS
- ▁LIST
- ▁POST
- ▁TRAVEL
- ▁BORROWED
- ▁AWESOME
- ▁RECORDER
- ▁FLOUR
- ▁COW
- ▁CAMPING
- ▁DRIVING
- ▁FELT
- ▁WINNER
- ▁CHARACTER
- ▁BALLOON
- ▁RIDDLE
- W
- FUL
- ▁NECKLACE
- ▁GLOVES
- ▁CHANGING
- ▁CRACKED
- ▁DROPPED
- ▁ROBERT
- ▁BAKERY
- ▁GRILL
- ▁INVITED
- ▁LAND
- ▁PORK
- ▁TELEPHONE
- ▁SKI
- ▁GUEST
- ▁AMBER
- ▁SHARP
- ▁KITE
- ▁DELI
- ▁MART
- ANNA
- ▁CIRCLE
- ▁FLYING
- ▁SHAKE
- ▁DANCER
- ▁POLICEMAN
- ▁DESSERT
- ▁SHOCK
- ▁BLOOD
- ▁MENU
- ▁BUMP
- ▁NOVEL
- ▁SKIN
- ▁SHOULDERS
- ▁MICHELLE
- ▁CROSSED
- ▁TICKETS
- ▁DRANK
- ▁OUTFIT
- ▁LAKE
- ▁PAINTER
- ▁ALIEN
- ▁RAINBOW
- ▁WORE
- ▁BAR
- ▁BROTHERS
- ▁DISH
- ▁SIMILAR
- ▁DISPLAY
- ▁GIRAFFE
- ▁FANCY
- ▁THIEF
- ▁HALLWAY
- ▁WAVE
- ▁CARROTS
- PE
- ▁ELDER
- ▁SOMEBODY
- ▁TRAFFIC
- ▁ACTOR
- ▁RUMORS
- ▁CHOSE
- ▁CAUS
- ▁DRESSED
- ▁ROSE
- ▁LYING
- ▁PANDA
- ▁PEAR
- ▁SUGGEST
- ▁DECISION
- ▁NOISES
- ▁TAKEN
- ▁GARLIC
- ▁CHINESE
- ▁ITCHY
- ▁SWORD
- ▁WAITED
- ▁NONE
- ▁SIZE
- ▁ACCEPT
- ▁CAPTAIN
- ▁GRAY
- ▁IDOL
- ▁SMALLER
- ▁USUAL
- ▁THOUSAND
- ▁LONELY
- ▁RETURNED
- ▁JENNY
- ▁PRACTICED
- ▁NEEDED
- ▁PAIN
- ▁RAP
- ▁THIN
- ▁EVERYWHERE
- ▁SUIT
- ▁BUSH
- ▁SON
- ▁COMPLIMENTS
- ▁FAILED
- ▁RUG
- ▁PAID
- ▁MANGO
- ▁BOYFRIEND
- ▁SCARF
- ELA
- ▁CROWD
- ▁ONLINE
- ▁GREW
- ▁SOCKS
- ▁SEAGULLS
- ▁USING
- ▁MELTED
- ▁OIL
- ▁ADULTS
- ▁KATE
- ▁WHISTLING
- ▁PRAY
- ▁POOR
- ▁SAUCE
- ▁PACKED
- ▁HATS
- ▁BUYING
- ▁AGO
- ▁SCIENCE
- ▁TUNNEL
- ▁DRESSING
- ▁MISSING
- ▁FESTIVAL
- ▁THURSDAY
- ▁PAIR
- ▁SITTING
- ▁SUITCASE
- ▁SHAPES
- ▁WILLY
- ▁HUGE
- ▁SHOUTED
- EVER
- ▁FAIR
- ▁TASTES
- ▁CAFETERIA
- ▁BINGO
- ▁BEGINS
- ▁DOLLAR
- ▁GRILLING
- ▁ALIVE
- ▁DINO
- ▁LIFT
- ▁TOP
- ION
- ▁STUFF
- ▁FROZEN
- ▁ACROSS
- ▁SEOUL
- ▁FRIES
- ▁TAUGHT
- ▁VIDEO
- ▁CREDIT
- ▁HAPPENS
- ▁RACE
- ▁TOUR
- ▁SPAGHETTI
- ▁SWING
- ▁INVITATION
- ▁COUNTRYSIDE
- ▁STAIRS
- ▁HIGHER
- ▁RANGER
- BAG
- ▁PULLED
- ▁LIPSTICK
- ▁VALLEY
- ▁NAP
- ▁FUTURE
- ▁SILENT
- ▁SPEAKER
- ▁GIVEN
- ▁JUMPING
- ▁AUTUMN
- ▁HOLDING
- ▁BOB
- ▁PLANNING
- ▁SUPPOSE
- ▁CLUES
- ▁ANSWERED
- ▁STICK
- ▁WASHED
- ▁CURLY
- ▁RUINED
- ▁SMILING
- ▁UNHAPPY
- ▁KIMBAP
- ▁CAUSE
- ▁CHUNKMONS
- ▁REPEAT
- STOOD
- ▁8
- ▁SHEEP
- ▁LOUDLY
- ▁SLIDE
- ▁KING
- ▁LIME
- ▁SKATING
- ▁SERVE
- ▁SAND
- ▁POWER
- ▁MUSICIANS
- ▁RESTROOM
- ▁SOMEDAY
- ▁GYM
- ▁GOD
- ▁COOKIE
- ▁NUMBERS
- ▁WARNING
- ▁CLASSMATE
- ▁COMPLAIN
- ▁LAUGHED
- ▁BEES
- ▁SAFELY
- ▁DESIGNER
- ▁ORANGES
- B
- ▁RETURNS
- ▁SPEAKING
- ▁GINA
- ▁MARTI
- ▁FEELINGS
- MAN
- ▁TULIP
- ▁BAZAAR
- ▁EMAIL
- ▁STRAWBERRY
- ▁PRESS
- ▁SALT
- ▁PHEW
- ▁COWS
- ▁ENTRANCE
- ▁LEAF
- ▁PAN
- ▁SOUR
- ▁DISEASE
- ▁OPENED
- ▁LUGGAGE
- ▁SWIMSUIT
- ▁PASSED
- ▁ALISON
- ▁SHOVELS
- ▁SENTENCES
- ▁GROUND
- ▁STAYING
- ▁SALES
- ▁JAM
- ▁WRAP
- ▁LATELY
- ▁SHRIMP
- ▁TWELVE
- ▁CHEAPER
- ▁CHECKING
- ▁SEAWEED
- ▁LO
- ▁TURTLES
- ▁DNN
- ▁WHE
- ▁ACT
- ▁LIZARD
- ▁SUCCEED
- ▁STRING
- ▁BASKET
- ▁HINT
- ▁VEGETABLES
- ▁FOOL
- ▁SHOT
- ▁ADULT
- ▁GREG
- ▁TASTY
- ▁FARM
- ▁LIPS
- ▁STARFISH
- ▁NAILS
- C
- ▁FR
- ▁TEARS
- ▁SUPERSTAR
- ▁CLEANS
- ▁HEAT
- ▁SILLY
- ▁WIG
- ▁BELLA
- WOKE
- ▁5
- ▁BOYS
- IVA
- ▁IMAGINE
- ▁LAUGHING
- ▁WASHING
- ▁FLAT
- ▁STICKERS
- ▁PRETTIER
- ▁KILL
- ▁FLIGHT
- ▁WOMEN
- ▁MOMMY
- ▁CAMP
- ▁MEMBERS
- ▁CUSTOMER
- ▁E
- ▁SINGERS
- 'ON'
- ▁CONTROL
- ▁TIGER
- ▁ZEBRA
- ▁IMPOSSIBLE
- ▁CONSOLE
- ▁CLUE
- ▁FOLD
- ▁BEE
- ▁ANDY
- ▁SEATS
- ▁POUND
- ▁SANG
- ▁DIAMOND
- ▁BATS
- ▁ARTIST
- ▁BABIES
- ▁GARAGE
- ▁INSTEAD
- ▁OLDFASHION
- ▁GIFTS
- ▁RODE
- BIG
- ▁MOUNTAINS
- ▁THUNDER
- ▁DONKEY
- ▁PIGEON
- ROOM
- ▁WORSE
- ▁HAMBURGERS
- ▁ERASER
- ▁TAMBOURINE
- ▁BREATH
- ▁ANNOYED
- ▁HALLOWEEN
- ▁KNOCK
- ▁STUPID
- ▁BANDAGE
- ▁PINEAPPLE
- OUT
- ▁SALTY
- ▁POTATO
- ▁MILES
- ▁COMMENT
- ▁TREATED
- ▁EAR
- ▁SLEDDING
- ▁VIOLET
- ▁BOTTLES
- ▁BRILLIANT
- ▁AUNTIE
- ▁SPEND
- ▁REACH
- ▁PAYING
- ▁APOLOGIZE
- ▁CORNER
- ▁FORGIVE
- ▁RELIEF
- ▁BEHAVE
- ▁DIE
- ▁PRETTIEST
- ▁H
- ▁HEN
- ▁POUR
- ▁NEEDLE
- ▁WORRIES
- ▁LARGER
- ▁CRAZY
- TYFIVE
- ▁DISCOUNT
- ▁HEADED
- ▁TWENTYFIVE
- ▁SOMETIME
- ▁REPORTER
- ▁FEED
- ▁KIMCHI
- ▁TENNIS
- ▁DOLPHIN
- ▁SUNGLASSES
- ▁THREW
- ▁COUNTRY
- ▁HUSBAND
- ▁JAPAN
- ▁TOMATOES
- ▁OK
- ▁POET
- ▁LUKE
- ▁LEND
- ▁LOWER
- ▁SHOVEL
- ▁AMERICA
- ▁BLOSSOMS
- OH
- K
- ▁SAFETY
- TALK
- ▁ASLEEP
- ▁MINER
- ▁PERIOD
- ▁STORYBOOK
- ▁BOWLS
- ▁DOUBT
- ▁MEMORY
- ▁SKINNY
- ▁EARTHQUAKE
- ▁2
- ▁BALLS
- ▁POTATOES
- ▁TROUSERS
- ▁WAR
- ▁FUR
- ▁RUMOR
- ▁CONGRATULATIONS
- ▁EASYGOING
- ▁NURSE
- ▁FLIES
- ▁GROWING
- ▁SMILES
- ▁CHOICE
- ▁ERASE
- ▁COMFORTABLE
- ▁GUIDE
- ▁PE
- ▁CLEVER
- ▁PEACE
- ▁AFTERSCHOOL
- ▁SOAP
- ▁POPCORN
- ▁SUNBLOCK
- ▁INVITE
- ▁AWAKE
- ▁FEMALE
- ▁HIKING
- ▁FOLLOWED
- ▁BUMPER
- ▁FILLED
- ▁HIPPO
- ▁COMEDIAN
- ▁SILK
- ▁COST
- IES
- ▁AWFUL
- ▁SIBLING
- ▁PIES
- ▁BURNING
- ▁CRASH
- ZIPPED
- ▁SPACE
- ▁LYRICS
- ▁HANDMADE
- ▁PER
- ▁ROUGH
- ▁THROWING
- ▁STATIONERY
- ▁WORM
- ▁PAGE
- ▁CLASSMATES
- ▁EXAM
- ▁FINAL
- ▁BLOW
- ▁CHINA
- U
- TH
- ▁BATTER
- ▁HONEY
- ▁MISTAKEN
- ▁DEPARTMENT
- GREAT
- ▁SHIRTS
- ▁COMPETITION
- ▁YOGURT
- MBER
- ▁DRINKS
- ▁WOLF
- ▁ISLAND
- ▁GROCER
- ▁SHARON
- ▁BREATHE
- ▁ANNOYING
- ▁LIED
- ▁SPA
- ▁KANGAROOS
- ▁ALIKE
- ▁PENGUIN
- ▁BRIGHTCOLORED
- ▁4
- ▁MESSAGES
- ▁INVENTION
- ▁WIPE
- BIRD
- ▁PRECIOUS
- ▁FLEW
- ▁CH
- ▁APART
- ▁MIDNIGHT
- ▁SPEN
- ▁SHELLS
- ▁GIN
- ▁NATURAL
- ▁THIRD
- ▁BADLY
- ▁PLATES
- ▁JOSHUA
- ▁MIDDLE
- ▁SWEAT
- ▁TOES
- ▁TIP
- ▁TEASE
- ▁BOOKSHOP
- ▁COUGHING
- ▁GUN
- ▁WASTE
- UMOR
- AR
- ▁SPREAD
- ▁GOAT
- ▁SPROUTS
- ▁BALLET
- ▁SNAKES
- ▁SCRATCHED
- ▁AMONG
- DANGER
- KGO
- NISH
- ▁FEE
- ▁JANE
- ▁TEMPER
- ▁CROWDED
- ▁BONO
- ▁CHEF
- ▁SAMPLE
- ▁LIONS
- ▁RULES
- ▁DREW
- ▁WORTH
- ▁MAGICIAN
- ▁GLUE
- ▁TOUGH
- ▁TOUCHE
- ▁TUNA
- ▁BAKE
- ▁LAUGHTER
- ▁HALF
- ▁HELMET
- ▁UH
- ▁COPIES
- ▁DIFFERENCE
- ▁FORK
- ▁STARTING
- ▁CRIES
- ▁SPROUT
- SNOW
- ▁SCARE
- ▁DRUMS
- ▁PHANTOPIA
- ▁VOUCHER
- ▁FARMER
- ▁CHANGES
- ▁SPILL
- AN
- ▁COMPLETELY
- ▁PRACTICES
- CHAIR
- ▁MISSE
- ▁RACHEL
- ▁SEEK
- EST
- ▁SISTERS
- ▁BLAME
- ▁PACK
- ▁BOIL
- ▁REQUEST
- ▁SH
- ▁WIRE
- ▁POT
- ▁ONION
- ▁CLOSER
- ▁MICE
- ▁SCRATCH
- ▁DUCKS
- THANK
- ▁RECEIVE
- ▁CABBAGE
- ▁SEEDS
- ▁JEJU
- ▁SUDDENLY
- RAY
- ▁KIWI
- ▁POWDER
- ERRY
- ▁MESSY
- ▁RID
- ▁CHAMPION
- ▁ARGUE
- ▁RECIPE
- ▁MICROPHONE
- ▁SCOLDED
- TRY
- ▁STRONGER
- ▁EXPECT
- ▁WEEKS
- AKER
- ▁JUMPED
- ▁RAINS
- ▁OREPHIA
- ▁PIGS
- LOSING
- ▁PRAYING
- ▁DUE
- ▁SOUTH
- ▁PUNCH
- ▁CREATIVE
- ▁FINISHING
- ▁HARMONI
- ▁CLOWN
- ▁SALON
- ▁SINK
- H
- ▁TOOL
- ▁ALARM
- VISION
- GY
- ▁FAIL
- ▁DRAWER
- ▁HAIRBAND
- ▁X
- ▁ARTICLES
- ▁DEEP
- ▁EARLIER
- ▁EXTRA
- ▁DOWNTOWN
- ▁LEFTHAND
- PTER
- ▁NOODLES
- ▁CONSIDER
- ▁ACCOUNT
- ▁DEER
- ▁SEAN
- RABBITS
- TY
- ▁CREAMS
- ▁LUCY
- ▁BOUN
- ▁HORNS
- EMENT
- ▁NOON
- ▁SMILED
- ▁NINETEEN
- ▁TURNS
- ▁MUFFLER
- ▁ROAR
- ▁HARDLY
- ▁SPELLED
- ▁SPOTS
- ▁SHORTS
- ▁JUMPS
- ▁RECENTLY
- ▁STOLEN
- ▁WITHIN
- ▁ENGLAND
- ▁PENDANT
- ▁MARY
- ▁AMUS
- ▁SERIOUSLY
- ▁FALLS
- ▁SPOONS
- ▁SAVED
- ▁STOLE
- ▁STUCK
- ▁G
- ▁DUMPLINGS
- ▁GERMAN
- ▁PLACES
- ▁OCARINA
- ▁QUEENSTEIN
- ▁BRANDON
- ▁DWARFS
- ▁TOFU
- ▁SPRAY
- PARD
- ▁CROSSING
- ▁PIGEONS
- ▁NOTICE
- CE
- LTY
- ▁BASEMENT
- ▁TABLET
- ▁COUPONS
- ▁PROGRAM
- ▁SOCK
- ▁GUI
- ▁NUT
- ▁OLIVE
- ▁PREFER
- ▁MUSHROOM
- ▁FIGHTING
- ▁DENERGY
- ▁STORAGE
- ▁POLITE
- IST
- ▁KICKBOARDS
- GAGE
- ▁DROWN
- ▁MANAGE
- ▁DRIVER
- P
- ▁WEEKENDS
- ▁SHOULDER
- ▁MUD
- ▁SEVENTY
- ALLY
- ▁POSTCARD
- ▁PIECES
- ▁HICCUPS
- ▁CHARACTERS
- ▁CLEANING
- ▁DIS
- ▁JG
- ▁JOSEPH
- ▁TITLE
- ▁CDS
- ▁BOSTON
- ▁BRACELET
- ▁PERMISSION
- ▁STEW
- ▁RAT
- ▁SKATE
- ▁CHEST
- ▁FOOT
- ▁CLIMB
- ▁AUDIENCE
- ▁DUFAR
- ▁GRANDPARENTS
- ▁FIT
- ▁TOUCHING
- ▁ELEPHANTS
- ▁TSHIRTS
- ▁APPOINTMENT
- ▁FOREVER
- ▁STARVING
- ▁LESSONS
- ▁COUPLE
- ▁TOTO
- ▁DRINKING
- ▁ARRIVE
- ▁GREE
- ▁SPOT
- ▁HELD
- ▁EARTH
- ▁DAUGHTER
- ▁SLICE
- ▁CASTLE
- ▁FEEDING
- ▁COVERED
- ▁FAM
- ▁AGE
- ▁AUSTIN
- ▁DEAR
- ▁NATI
- ▁CELEBRATE
- ▁MEATBALLS
- ▁STRETCH
- ▁SOLVE
- ▁USEFUL
- ▁SCAR
- DDING
- ▁ALLERG
- ▁RINGING
- ▁SAILING
- ▁SNOWING
- ▁LATEST
- ▁LIES
- ▁ACADEMIES
- ▁MUSICIAN
- ▁STA
- ▁FROGS
- ▁STOMP
- ▁KEYBOARD
- ▁FAIRY
- ▁CLAP
- ▁HAM
- ▁TOWARDS
- ▁RESERVATIONS
- ▁SHOUT
- SORRY
- ▁PUPPIES
- ▁WEAK
- ▁ORIGINAL
- ▁RESPECT
- ▁TABLES
- ▁COMPUTERS
- ▁TOWELS
- ▁CRAFTSMEN
- ▁ELE
- ▁REPAIRED
- ▁PRINT
- ▁BLOOM
- ▁WISELY
- ▁SCOLD
- ▁TWINKL
- ▁CANCEL
- ▁KIM
- ▁STAINED
- ▁LAP
- ▁DRI
- ▁SHARK
- ▁KANGAROO
- MENTARY
- THEY
- ▁DALLAS
- ▁SEESAW
- ▁WHISPER
- CAL
- ▁DWARF
- ▁SUNDAYS
- ALK
- ▁DOUBLE
- ▁SHAKING
- ▁PREPAR
- ▁YOYO
- ▁SKILLS
- ▁OCTOPUS
- ▁INSTRUMENTS
- ▁MAIL
- ▁ALIENS
- ▁JESSI
- ▁CHERRY
- ▁INCONVENIENCE
- ▁CERTAIN
- ▁BEEF
- CON
- 'OFF'
- ▁GATHERED
- ▁PRODUCTS
- CONVENIENCE
- ▁RESTAURANTS
- ▁MONKEYS
- ▁FIGURE
- ▁QUICK
- ▁GAIN
- ▁PENALTY
- ▁INLINE
- ▁INTRODUCE
- ▁OVERSLEPT
- ▁POL
- ▁HOWEVER
- ▁GORILLA
- ▁MEMBER
- ▁PLU
- ▁ANGER
- ▁AQUARIUM
- ▁GAS
- ELY
- ▁TIES
- ▁PUNISHED
- ▁CUCUMBERS
- ▁TINY
- ▁RISE
- ▁GHOSTS
- ▁WIFE
- MOND
- ▁RARE
- ▁BARN
- ▁SMELLY
- GAN
- ▁REASONS
- ▁BURNED
- ▁ANNOUNCE
- ▁CAPSULES
- ▁PICNIC
- ▁GLOVE
- FF
- RANCE
- ▁TREAT
- ▁JOG
- ▁BULLS
- ▁JJAKGUNG
- ▁PROVE
- ▁BAGS
- ▁RUDOLPH
- ▁MC
- ▁TRICKS
- RIOR
- ”
- ▁HAPPILY
- ▁REMIND
- ▁DIVER
- BE
- ▁HATES
- ▁SPOON
- ▁SIZES
- ▁THROAT
- ▁UN
- CRAFTS
- ▁BRIDGE
- ▁CONFUSED
- DONALD
- KEEPER
- ▁SIBLINGS
- ▁DENNIS
- ▁EMBARRASSED
- ▁PATRICK
- DWARFS
- ▁PREGNANT
- ▁VOTE
- ▁WHIPPED
- ▁10000
- ▁SUPPORT
- ▁TOOTH
- ▁STANDING
- ▁CLOSET
- ▁NEEDLES
- ▁SWEEP
- ▁RAISED
- ▁PEE
- ▁CONTACT
- ▁JEALOUS
- ▁SURVEY
- BOX
- ▁CROSSWALK
- ▁WALKING
- ▁SOP
- ▁SITE
- ▁OWE
- ▁FOURTEEN
- ▁PLANTING
- ▁CHANNELS
- ▁WIGGL
- ▁OURSELVES
- ▁SCENE
- ▁BAS
- ▁LETTUCE
- ▁NICKNAME
- ▁GRABB
- ▁ELEVATOR
- ▁COP
- ▁FALLING
- ▁DESERVE
- ▁FILM
- ▁SOPHOMORE
- ▁WOUND
- ▁PROTEST
- ▁PEACHES
- ▁CHILL
- ▁COURT
- ▁ROOF
- ▁CHARGE
- ▁FINGER
- ▁HANBOK
- ▁TAPDANCE
- ▁JAPANESE
- ▁MELON
- ▁BATTLE
- ▁LEAS
- ▁PARTS
- BATHING
- ▁CRUNCHY
- ▁PAUL
- ▁WHISTLE
- ▁CAKES
- ▁HEAL
- ▁SHELL
- ▁GUM
- ▁CARPENTER
- ▁HEAVILY
- ▁N
- ▁LEMONS
- ▁HARDER
- ▁ROW
- ▁STEAM
- ▁STUDIES
- ▁LOTTERY
- ▁BITTER
- ▁MOW
- ▁EATEN
- ▁SPORT
- ▁SHORTER
- ▁STEAL
- ▁GRADUATE
- ▁PUZZLE
- ▁CEREMONY
- ▁RAINCOAT
- ▁KISS
- HAP
- WAY
- ▁DEPART
- ▁LANGUAGE
- ▁BITTEN
- ▁BUSAN
- ▁L
- ▁TIGHT
- ▁BELOW
- ▁PERFECTLY
- KE
- ▁NATURE
- ▁MISUNDERST
- ▁CLOUD
- ▁DRAG
- ▁CARTOON
- ▁COCONUT
- ▁GOLF
- ▁THIRTEEN
- ▁DYING
- ▁PETE
- ▁MALL
- ▁BIN
- ICAL
- ▁ALIB
- ▁BREEZE
- ▁FRENCH
- ▁DATING
- ROW
- ▁WATERING
- ARD
- ▁DESERT
- ▁PRAISE
- ▁INTERNET
- ▁STRICT
- ▁MOSQUITOES
- TLE
- ▁SKILL
- ▁BEHAV
- ▁KTX
- ▁LONDON
- ▁TASTING
- ▁VAN
- ▁COUGHED
- ▁NICELY
- ▁HARM
- ▁BOOKSHELF
- ▁CRICKET
- ▁EDGE
- ▁PILLOW
- ▁RECTANGLE
- ▁STRESS
- ▁FOOTBALL
- ▁LAW
- ▁CHOPSTICKS
- WHAT
- ▁TWINS
- ▁AUSTRALIA
- ▁LAMB
- ▁MAYO
- ▁DESIGN
- ▁BLEW
- ▁GLORY
- ▁ROCKCLIMBING
- ▁DUTY
- ▁ENTERTAINMENT
- ▁THEMSELVES
- ▁YOG
- ▁BUCKET
- ▁BIRTH
- ▁FALSE
- ▁PATTERN
- ▁THREAD
- ▁SOLDIER
- ▁BATTERY
- ▁KNEES
- ▁HEADS
- ▁DELIVERED
- ROUTE
- ▁SIMPLE
- ▁WATERFALL
- ▁SWITCH
- ▁EFFORT
- ▁UNUSUAL
- ▁SLIPPED
- ▁REG
- ▁SUITS
- ▁CHANNEL
- ▁MINI
- ▁PLASTIC
- ▁RECOMMEND
- ▁RUBBER
- ▁THANKFUL
- ▁ROLL
- ▁SOLV
- ▁CLAPS
- ▁BUD
- ▁CINEMA
- ▁SHELF
- ▁LOSS
- ▁WOMANS
- ▁CANADA
- ▁EXPRESS
- ▁SHARING
- ▁LOOSEN
- ▁CHOCO
- ▁RUNNY
- ▁REPL
- ▁BOWL
- ▁FULLY
- ▁SOMEHOW
- ▁UNIQUE
- ▁CARES
- ▁NOODLE
- ▁JETLAG
- ▁LAPTOP
- ▁TOOTHPASTE
- ▁JON
- ▁AIRPORT
- ▁JOO
- YER
- ▁CAP
- ▁HOLLY
- ▁JOHNSON
- ▁ZERO
- ▁LEADER
- ▁OX
- ▁SQUEEZE
- PY
- GET
- ▁FIN
- ▁ZIP
- ▁SEPTEMBER
- ▁TEMPERATURE
- THIRTY
- ▁GOODLOOKING
- ▁GUAR
- ANTEE
- ▁LOG
- ▁WILD
- ▁BOOTH
- ▁PEPPERS
- ▁FORGOTTEN
- BALL
- ▁AB
- CALORIE
- ▁POLICY
- ICO
- ▁INCLUDED
- ▁LIGHTEN
- ▁BLAMED
- ▁LONGTIME
- OOD
- ▁JEAN
- ▁DECK
- ▁MANNER
- ALTH
- ▁PERSONALLY
- TRUCK
- PT
- ▁GUT
- ▁CRASHED
- ▁FLO
- ▁REACT
- ▁ABSENT
- KYO
- ▁BLUSH
- ▁DONATE
- DOCK
- ▁COMPLAINING
- ▁DESCRI
- ▁GEORG
- ▁RECOVER
- ▁WALNUT
- ▁LUNG
- ▁BUDDY
- ENSE
- ▁PASSES
- ▁PLUM
- HALF
- ▁SE
- ▁TURTLE
- ▁FRANC
- ▁KOALA
- ▁TURKEY
- ▁CARPET
- ▁ANYWHERE
- ▁R
- ▁SKIING
- ▁FOCUS
- ▁HARV
- ▁JANUARY
- ▁PRESIDENT
- ▁TWENTYONE
- ▁WRESTLE
- ▁CANCER
- ▁CHEATING
- ▁HOMEMADE
- ▁WEEKDAY
- ▁K
- THER
- ▁DREAMS
- ▁APPRECIATE
- ▁BRAIN
- ▁SAUSAGES
- SOMETHING
- GAR
- ▁SMOOTH
- ▁SLIM
- ▁FENCE
- JURY
- LIES
- ▁SPIDERS
- EADLINE
- EVEREST
- ▁SCORES
- ▁JOKING
- ▁REJECT
- ▁STEPMOTHER
- ▁CRIM
- ▁DIGGING
- ▁QUEEN
- ▁MALE
- ▁SNORES
- ▁EXPLAINED
- ▁HOUSEWORK
- ▁BEDTIME
- BEAT
- WORKING
- ▁SMELLING
- ▁GRAPE
- ▁INSTRUCTIONS
- ▁SUNSCREEN
- ▁WORKDAY
- ▁HOLES
- ATER
- UP
- RIDA
- ▁VINE
- ▁HERSELF
- ▁NIGHTMARE
- ▁SNAP
- ▁INSU
- ▁BURNS
- GIV
- ▁MOUNT
- ▁NEGATIVE
- ▁ADVANTAGE
- ▁DIFFICULTIES
- ▁7
- ▁REMAINS
- CHECK
- ▁TRAVELING
- ▁IMAGIN
- G
- ▁BENNY
- ▁JOHN
- ▁ATHLET
- ▁COOPE
- ▁DICTIONARY
- ▁HAPPINESS
- ▁RAPPER
- ▁SLIPPERY
- ▁SUNRISE
- ▁TAPDANCING
- ORABLE
- ▁NOTICING
- ▁WAITLIST
- ▁CUCUMBER
- FTH
- ▁GUESTS
- ▁COLLEGE
- ▁STOCK
- HH
- ▁TALE
- POP
- ▁MEXIC
- ▁FREEZER
- ▁REFUSE
- ▁SWIMMER
- ▁THOUGHTFUL
- DIVING
- WORKED
- ▁COURAGE
- ▁ERRANDS
- ▁LISTENED
- ▁GRUM
- ▁WEB
- ▁TWEL
- GED
- ▁CABIN
- ▁REHEARSAL
- ▁SKETCHBOOK
- ▁DAYCARE
- ▁PARTIES
- OBBY
- ▁SEAL
- WHERE
- ▁ROSES
- INE
- ▁ACCIDENT
- ▁PERSONALITY
- ▁SPECIFIC
- ▁RINGS
- ▁BLOOMED
- ▁AW
- YARD
- ▁ENTERED
- ▁BELLY
- ▁FUNNIER
- ▁NARROWMINDED
- USY
- ▁JOURNAL
- ▁JER
- ▁PRICES
- BREAK
- ▁BILLS
- SOLUT
- ▁11
- ▁REFILL
- ▁BAKED
- ▁ALPHABET
- CONNECTED
- ▁GOATS
- ▁WASHE
- ▁CHOP
- PHLE
- ▁NONSENSE
- ▁WADDL
- ▁PETS
- ▁DECORATE
- LUSH
- ▁FORGETTING
- ▁EMILY
- ▁BICYCLES
- ▁SHOWN
- ▁BUCK
- ▁BAIT
- ▁100
- ▁MOVER
- ▁HEL
- ▁WINNING
- ▁ROCKET
- ▁FANG
- ▁CA
- ▁DEPRESS
- ▁BEAUTY
- ▁DAILY
- ▁ENGINEER
- ▁MUFFIN
- ▁WRITER
- ▁OPINIONS
- ▁TRACKS
- ▁PAUSE
- ▁PUZZLED
- URE
- SEY
- ▁WRAPS
- ▁SOCIAL
- ▁GRADES
- ▁WARMLY
- ▁YOYOS
- ▁CHEW
- ▁BULGOGI
- ▁BARKING
- ▁SENTENCE
- ▁THOUGH
- ▁POO
- ALIAN
- ▁EVE
- ICED
- ▁RAIS
- ▁DISTURB
- ▁ITSELF
- ▁ORIGAMI
- ▁TISSUE
- ▁JOHNNY
- ▁BURN
- ▁COOKS
- ▁CANDLE
- ▁OBVIOUS
- ▁SANDPAPER
- ▁SUPPLIES
- ▁CHEWY
- ATIONS
- ▁FLAVOR
- ▁KIWIS
- ▁MASTER
- ▁YELLING
- ▁CUPS
- ▁BL
- LAINE
- ▁STIMULAT
- ▁TIRES
- ▁PRETEND
- ▁CLEANED
- ▁RUSSIA
- ▁FRECKLES
- ▁FART
- ▁CHEETAH
- ▁RUDE
- ▁TRAINS
- ▁LOTTE
- ▁PAGES
- ▁POSTCARDS
- ▁KEYS
- ME
- ▁BOOKSTORE
- ▁HOST
- ▁SHORTCUT
- ▁SHOOTS
- ▁OPINION
- ▁APRON
- ▁COPIED
- LLOWED
- ▁STICKY
- ▁PREPARE
- ▁HEADQUARTERS
- ▁REPAIRS
- ▁WHALE
- ▁POOP
- ▁RESEMBLE
- ▁SHARE
- ▁LOLL
- ▁EXERCISES
- ▁PROGRAMS
- ▁BLINK
- ▁FLAG
- ▁LAY
- ▁FASTEST
- ▁SNEEZE
- ▁ENDED
- J
- ▁MARKER
- HER
- ▁ASSISTANT
- ▁CURRY
- ▁PURSE
- ▁SLIPPERS
- ▁UNDERSTANDING
- ▁PIT
- ▁INDOOR
- ▁CROWN
- ▁CURIOUS
- ▁SYSTEM
- ▁CABLE
- ▁MOSQUITO
- ▁PHARMACY
- ▁EVERLAND
- ▁WINDOWS
- ▁BOOGER
- ▁TIRING
- ▁PAPERS
- ▁PEANUT
- ▁PARDON
- ▁AH
- ▁FOX
- ▁RESELL
- ▁RESULT
- ▁TWIST
- ▁SLED
- ▁TALLEST
- ▁RIBBONS
- ▁RECEI
- ▁SQUIRREL
- ▁CUTLET
- ▁HEIGHT
- ▁HURTING
- ▁TRAP
- ▁WRAPPER
- ITED
- ▁FRIGHTENED
- ▁PATIENT
- ▁CANCELED
- ▁SHELVE
- ▁NET
- OOPS
- ▁MESS
- ▁MERRY
- ▁PLATE
- ▁COMPLAINT
- ▁SITUATION
- ▁PARIS
- ▁STRAW
- ▁DIVIDE
- ▁GOAL
- ▁SHRIMPS
- X
- SPECIAL
- GOTTEN
- F
- ▁COLLECTED
- ▁AFFORD
- ▁HUNG
- ▁CHAMBER
- ▁AIRPLANE
- ▁CHA
- ▁WALLS
- ▁REGULAR
- ▁EXPERIENCE
- ▁PILOT
- ▁250
- ▁LEMONADE
- ▁FURTHER
- ▁RAC
- IN
- ▁SWALLOW
- ▁CLOSING
- ▁CLASSROOMS
- ACK
- ▁RENT
- ▁ADS
- ▁TENTH
- ▁FRY
- ▁HOTDOG
- ▁ANGEL
- ▁PEACH
- ▁HIDDEN
- ▁GOOSE
- ▁SMALLEST
- ▁ROCKS
- ▁COOKED
- ▁CORN
- ▁SIGNS
- ▁ANXIOUS
- ▁LIGHTNING
- ▁SNOWBALL
- ▁BESIDE
- ▁ANTS
- ▁ALLOWANCE
- ▁COUNTRIES
- ▁POUCH
- ▁SLIP
- ▁POEM
- ▁RAMEN
- ▁ROLLING
- ▁PATIENTS
- ▁SCREEN
- ▁PRESENTATION
- ▁CAST
- ▁FLUTE
- ▁HU
- ▁ZEBRAS
- ▁COMPARE
- ▁WIDE
- ▁FORSYTHIA
- ▁SENIOR
- ▁DONATED
- ▁FACTS
- RD
- ▁FOG
- ▁ROLE
- ▁PEARS
- ▁BUTTONS
- COME
- ▁HAIRCUT
- ONDE
- ▁ENV
- ▁CHASED
- THE
- '4'
- ▁TRACK
- ▁STRANGER
- ASOL
- ▁CHIN
- ▁PUBLI
- ▁DUN
- ▁JUNE
- ▁20
- ▁DOUGHNUT
- ▁DADDY
- PORT
- ▁EMBARRASSING
- ▁UNCOMFORTABLE
- ▁FOREHEAD
- ▁RELATIVES
- ▁DOODLE
- ▁GENTLEMAN
- ▁TAPE
- ▁BANKER
- ▁ACTRESS
- ▁SORT
- ▁REDESIGN
- ▁GRADERS
- ▁KICKING
- ▁LA
- UK
- ▁BARBECUING
- ▁BULLY
- RATE
- ▁JUN
- ▁KOREANS
- ▁CORPORATION
- ▁HEAVIE
- ▁IMPROVE
- ▁OCEAN
- ▁LG
- ▁LAYER
- ▁BRIGHTLY
- ▁CRABS
- ▁PAR
- ▁BLANK
- ▁CALENDAR
- ▁CROCODILE
- ▁SALARY
- ▁CHUSEOK
- ▁CUTEST
- ▁NOR
- ▁MYSTER
- ▁BEND
- ▁INCLUDE
- ▁EXCELLENT
- ▁PAINFUL
- ▁SKEWERS
- ▁CHEERING
- SIZE
- BELT
- RCH
- ▁PLEASANT
- ▁PATH
- ▁QUALITY
- ▁STINGS
- ▁REPAIRING
- ▁DELAY
- ▁RIDES
- ▁ELSA
- ▁SECURITY
- ▁TWENTIETH
- ▁PC
- AH
- ▁NOTES
- RAL
- ▁NORMAL
- ▁DIRECT
- ▁CENT
- ▁APOLOGY
- ▁GARBAGE
- ▁GEE
- ▁WATCHES
- ▁SCISSOR
- ▁CULT
- ▁ECONOMY
- ▁SEASHELL
- ▁HA
- ▁HORSES
- ▁WHEELS
- BYE
- ▁HABIT
- ▁VI
- OOKIE
- ▁BAKING
- ▁CHERISH
- ▁JESUS
- ▁KLEA
- ▁PARTICIPATE
- ▁NICER
- ▁LISTING
- ▁SUPP
- IELD
- ▁CRISPY
- ▁EYESIGHT
- ▁TWITCH
- ▁WORST
- ▁GREETING
- ▁DRYER
- ▁LINES
- ▁DEPRESSED
- RENT
- ▁ROLLS
- LAND
- ▁DOCUMENT
- ▁COCKROACH
- ▁TAX
- ▁LIBER
- ▁FRIGHT
- ▁GARDENVIEW
- ▁JAR
- ▁ONESELF
- ▁PELICAN
- ▁RUSH
- ▁BAKER
- ▁EXPLODED
- ▁CARNATIONS
- ▁BUBBLES
- ▁BREAKS
- ▁EUROPE
- ▁EXCHANGE
- ▁SMASH
- ▁TORONTO
- ▁CEO
- ▁BLEEDING
- ▁IMAGINED
- ▁KIL
- ▁POU
- ▁TAB
- ▁CRUS
- OGRAMS
- ▁ALASKA
- ▁FROWNED
- MAIL
- TWINKL
- ▁SINGLE
- ▁INVENT
- ▁ROD
- ▁EMERGENCY
- PORTER
- ▁COMB
- ▁HUG
- TI
- '...'
- SMITH
- ▁AVOID
- ▁JJAKKUNG
- ▁MATERIALS
- ▁LOSES
- ▁LU
- INA
- FREE
- ▁SERV
- ▁FLU
- ▁REEL
- ▁BACKPACK
- ▁REPRINT
- ▁SIXTEEN
- ▁ZENA
- ROL
- ▁AWARD
- ▁TENK
- ▁NETWORK
- ▁WORKER
- ▁REDUCE
- GUE
- ▁PROTECT
- ▁CONCERN
- ▁CRIMINAL
- ▁FIREFIGHTER
- ▁INCHEON
- ▁SUWON
- ▁VIEWER
- OVER
- ▁ELEVATORS
- OR
- ▁IMPRESSED
- ▁SHAME
- ▁STRAP
- ▁YIELD
- ▁WARNED
- ▁HANDOUT
- ▁LUNCHTIME
- URY
- IED
- AY
- WIFE
- GUN
- ▁ISSUE
- RRIE
- ▁SANDCASTLE
- ▁FIGURES
- ▁LOV
- ▁POKE
- ▁FREESTYLE
- ▁CHAIN
- ▁EVERYDAY
- OK
- ALY
- ▁RATING
- ▁SPIT
- ▁SAIL
- ▁AMBULANCE
- ▁ENORMOUS
- ▁SELFCONT
- ▁MEMORIZED
- ▁GIRAFFES
- ▁SNOWS
- ▁PLANTS
- ▁LEAD
- ▁EXHIBITION
- ▁FOUGHT
- ▁MARBLE
- 'YES'
- ▁PICKE
- ▁WRONGLY
- ▁HURR
- ▁CONVERSATION
- ▁DETAIL
- ▁WORRYING
- ▁SAVING
- ▁TU
- ▁SECRETLY
- AWAY
- ▁GROWS
- ▁CONTRA
- ▁SCRAMBLE
- BES
- ▁PROMISES
- ▁CHAIRS
- ▁GOGGLES
- ▁OTHERWISE
- ▁VICTOR
- ▁THORNS
- ▁WORTHWHILE
- ▁HIPPOS
- ▁TRICK
- ▁OBSERVATORY
- ▁SHAMPOO
- ▁COKE
- ▁DRAMA
- ▁DELAYED
- ▁GUTS
- ▁AZALEA
- ▁WRAPP
- TIE
- HEAD
- ▁BIGGEST
- ▁ENEMIES
- ▁PUMPKIN
- ▁DOCUMENTARY
- ▁ATOPY
- ▁COUGH
- ▁TOUCHED
- ▁AWARDS
- EWER
- VER
- ▁BEARS
- ▁CACTUS
- ▁LOCK
- ▁LIT
- ▁SKETCH
- ZEN
- ▁DRAGG
- ▁SQUEEZED
- ▁SCOT
- SHY
- ▁CALCULAT
- ▁APPEARED
- ▁RAINED
- ▁WINGS
- ▁CLOTH
- ▁DIG
- ▁DONGSENG
- ▁SPONGE
- ▁STUBBORN
- ▁WAIST
- ▁FLE
- ▁TAG
- CH
- ▁CR
- ▁UMBRELLAS
- ▁TOOTHBRUSH
- ▁POCKETS
- ▁PAJAMA
- ▁HALLA
- ▁GATHER
- ▁BOSS
- ▁DETERGENT
- ▁DOCUMENTS
- ▁GENEROUS
- ▁TOTAL
- ▁CURTAIN
- ▁PUDD
- ▁THICK
- NSIBLE
- ▁HOLIDAYS
- ▁TICKLES
- FLAVORED
- ▁COVID
- ▁GIFTWRAP
- ▁BLINKING
- ▁JUNG
- HOK
- LEANING
- ▁IDOLS
- ▁DRO
- ▁FOUNTAIN
- ▁PHYSIC
- ▁PRESCRIPTION
- ▁LATTE
- ▁TONGUE
- ▁NA
- WORLD
- ▁SURGERY
- ADLINE
- ▁STUFFY
- ▁WAFFLES
- ▁15
- ▁LOGO
- ▁SHORTCUTS
- ▁RESPECTED
- ▁INVENTIONS
- ▁ARTISTS
- RAFFI
- ▁FOSSIL
- ▁GOLDCREST
- ▁MALTESE
- UGGING
- ▁BUCKWHEAT
- ▁PROFESS
- ▁SQUID
- ▁CORRECTION
- IT
- LOOKING
- ▁GENIUS
- ▁WHALES
- ▁OPPA
- ▁DONKEYS
- ▁ELECTRIC
- ▁FAKE
- ▁JUNIOR
- ▁MEDAL
- ▁SONGPYEON
- ▁MO
- ▁LOCKED
- ▁MEMORIZE
- ▁DIZZY
- ▁CAMELS
- ▁Y
- ▁CARING
- ▁PERFORMANCE
- ▁ERRAND
- ▁STRIPE
- ▁SIL
- ▁REDESIGNED
- ▁TIPS
- SCRIPT
- ▁BISCUIT
- ▁TORN
- ▁BRUSHE
- ▁STREETS
- ▁RELIEVED
- ▁HOPS
- ESSER
- ▁INSTRUMENT
- ▁ADVANCE
- ▁GESTURE
- ▁MUGWORT
- ▁PROMOT
- ▁PIN
- ▁SHAD
- IONAL
- '72'
- ▁HEAVEN
- ▁SLOPE
- ▁HAIRDR
- YOU
- ▁OWNERS
- ▁PLANS
- ▁SUNFLOWERS
- ▁CHIMNEY
- ▁HIPHOP
- ▁FOURTH
- ▁C
- ▁COUNTS
- ▁BARK
- SCOPE
- ▁ATOPIC
- ▁DEATH
- ▁FORMALLY
- ▁TWIN
- ▁QUIETLY
- ▁TEAS
- ▁MIN
- ▁CE
- ▁DEPENDS
- ▁TRANSFERRED
- ▁HANDY
- ▁CLEARLY
- CHOCO
- ▁HOTDOGS
- ▁FROWN
- ▁RUB
- ▁PERFORM
- ▁ATTRACT
- ▁DUST
- ▁REVIEW
- ▁SIGNBOARD
- ▁ENDURE
- ▁RIDD
- CKED
- ▁CIRCLES
- ▁AIRPLANES
- ▁MI
- GING
- Q
- ▁YURI
- ▁30
- ▁OFFICERS
- ▁ALMONDS
- ▁SOLVED
- ▁WEREN
- ▁ALBUM
- ▁UNDERGROUND
- ▁WRINKLES
- IL
- ▁TALES
- SOKCHO
- ▁GROCERIES
- ▁RECEIV
- ▁BARE
- ▁PEEL
- ▁COCKROACHES
- ▁DEEPLY
- ▁STATIONS
- ▁DANCED
- ▁CHUBBY
- ▁SATURDAYS
- ▁WING
- ▁CRAFTSMAN
- ▁OCCASION
- ▁WINE
- ▁TELE
- ▁BLUETOOTH
- ▁DISAPPEARED
- ▁SUBM
- ▁FARTED
- ▁PREPARED
- LIST
- ▁CONDITION
- ▁PORTRAIT
- '23'
- ▁POINTS
- ▁TAMBOURINES
- ▁TEND
- ▁SELFISH
- ▁SUBJECT
- RUPTE
- ▁LICKING
- ▁WATERMELONS
- ▁DIES
- ▁BLOWING
- ▁SOIL
- NIFE
- ▁BLAND
- ▁RECYCLED
- ▁SIXTY
- ▁LENGTH
- ILING
- ▁SURVIVED
- ▁HABITS
- WANT
- ▁GRAND
- ▁SAVORY
- ▁APPLAUSE
- ▁APPLY
- ▁MEANER
- ▁DISEASES
- ▁FRUSTRATING
- ▁NOTIFICATION
- ▁CHEOMSEONGDAE
- ▁BADGE
- ▁ABOARD
- ▁DISNEYLAND
- ▁LEE
- ▁SHARPEN
- ▁KETTLES
- ▁HERESY
- ▁CRAM
- ▁BRONZE
- ▁HARSH
- ▁EBS
- ▁GREY
- ▁POSE
- ▁PICKLES
- ▁LEN
- ▁TIGERS
- ARY
- ▁CLAR
- ▁EDUCATION
- ▁NEIGH
- ▁ADDITION
- ▁REASONABLE
- ▁DUMPING
- ▁SPACES
- ▁LIGHTER
- ▁SPELLING
- Z
- ▁CATCHING
- ▁LEVEL
- ▁UPSTAIRS
- ▁RINK
- ▁HANDLE
- AVING
- ▁BOWED
- ▁BEAUTIFULLY
- ▁FARTS
- ▁BOLT
- ▁FAMILIAR
- BBLE
- DO
- ▁FILE
- ▁TREATMENT
- ▁PASTOR
- ▁EEK
- ▁BLOOMING
- CIAL
- TRAINED
- ▁APPEAR
- ▁KNEE
- ▁WHEEL
- RIAN
- ▁ATTEND
- ▁CONFESS
- ▁DVD
- ▁WITNESS
- ▁BATMAN
- ID
- ▁BANGS
- ▁YARD
- ▁LOTION
- ▁RECYCLE
- ▁PRI
- ▁BURDEN
- ▁SCRA
- ▁VEGETA
- ▁TOENAILS
- SUALLY
- ▁YAM
- FORD
- ▁FORMAL
- ▁POK
- ▁FROZE
- ▁MULTIPLICATION
- ▁SEJONG
- ▁TRIES
- ▁SUNSHINE
- ▁HERBS
- ▁STRIPES
- ▁CLIMBING
- ▁SKIPP
- FFE
- ▁DAMAGE
- ▁RIDICULOUS
- ▁QUACK
- ▁PINNOCHIO
- SIDE
- ▁STANDARD
- ▁TRADITION
- GIANT
- ▁YELL
- ▁SUPER
- ▁OVERREACT
- ▁PERFUME
- ▁UNDERCOOK
- BEC
- ▁MAPS
- ▁PARTNERS
- ▁SPINACH
- ▁TTEOKGUK
- ▁JAJANGMYEON
- ▁DIRECTLY
- VATE
- STEE
- ▁MOUSES
- ▁SNOWED
- ▁IGNORE
- GIFT
- ▁LOCKER
- ▁SURVIV
- ▁P
- BBLES
- DAIRY
- ▁TOOLS
- STAR
- LING
- ▁BB
- ▁ACCESSORIES
- ▁NINTENDO
- ▁BIBIMBAP
- ▁DERMATITIS
- ▁ANNOUNCED
- ▁LICK
- ▁AZALEAS
- ▁PEPPER
- VAS
- ▁BODIES
- ▁EXPAND
- PED
- FLOWING
- ▁MIXED
- ▁GROUP
- ▁SAUSAGE
- ▁CEREAL
- ▁EASIEST
- ▁OVERSLEEP
- ▁SATISF
- ▁150
- ▁BAY
- ▁DIP
- UN
- AK
- ▁COINS
- ▁SURPRISES
- ▁WAK
- OL
- ▁EVILDOING
- ▁EYEBROWS
- ▁HEADBAND
- ▁KETCHUP
- ▁PROPERLY
- ▁STRAWBERRIES
- ▁UNFORTUNATE
- ITY
- LIKE
- ONG
- ▁WISHES
- ▁CONSTRUCTION
- ▁RESEARCH
- ▁RIPPED
- ▁FOREIGNERS
- ▁SANDALS
- ▁GOLDEN
- ▁PERFORMANCES
- ▁STEALING
- HA
- ▁SPARE
- ▁KPOP
- ▁LEASH
- ▁TIGHTLY
- CM
- ▁COMME
- ▁500
- ▁ANCHOVIES
- ▁BANKBOOK
- ▁COVIDNINETEEN
- ▁DEFINIT
- ▁UPRIGHT
- ▁MISSION
- BAL
- PHONES
- HO
- ▁GENERAL
- ▁OVEN
- ▁MARCH
- V
- HU
- ▁GROWN
- ▁BROADCAST
- ▁GANGWONDO
- ▁REFRESHING
- ▁DICE
- ▁RACK
- ▁PERM
- ▁SUITCASES
- ▁16
- ▁ENVELOPE
- ▁HOOKED
- ▁ROOT
- ▁TEXT
- ▁CAGE
- GO
- ▁MUS
- ▁DOUGHNUTS
- ▁WASTING
- ▁BETIAN
- ▁PRESENTING
- ▁BRUISE
- ▁ALOUD
- ▁AUDITORIUM
- ▁BTS
- PLE
- RAISED
- MOTION
- ▁GENTLE
- ONIA
- ▁EASIER
- ▁FONDUE
- ▁SEASICK
- ▁VR
- ▁DOLPHINS
- ▁MATCHES
- UR
- ACHE
- ▁CICADAS
- ▁LEAN
- ▁REPORTS
- YING
- ▁CLOUDS
- ▁WOLVES
- ▁HEEL
- ▁FRESHMAN
- ▁SCREAMED
- ▁RELATIVE
- ARIN
- ▁BUR
- ▁PASTE
- ▁FRIENDLY
- ABLE
- ▁VISITING
- ▁INVIT
- ▁LOUDSPEAKERS
- ▁NNN
- ▁OINTMENT
- ▁SWAN
- CLES
- ▁GARDENING
- ▁HICCUP
- IM
- '0'
- ND
- BA
- ▁JULY
- ▁SEMESTER
- ▁SUSHI
- ▁UNIVERSE
- ▁TOSUN
- ▁PILLS
- ▁TAN
- ▁NEAT
- ▁FEATHER
- ▁ANNEX
- ▁PENGO
- ▁SICKNESS
- ▁CANDLES
- LO
- ▁SCRUB
- ▁SHOOT
- ▁TH
- ▁CRACK
- PLAIN
- ▁FRIDGE
- ▁ANSWERING
- ▁INDOORS
- ▁APOLOGIZED
- ▁COMEDIANS
- ▁WOR
- ▁SPIN
- ▁DRACULA
- ▁DRAGONFLIES
- ▁EXTINGUISHER
- ▁GRADUATION
- ▁LADIES
- ▁EX
- ▁PLANNED
- ▁50
- ▁MILLIONS
- ▁TANGERINES
- ▁DRAWN
- ▁CLEANER
- ▁DECORATIONS
- ▁SPI
- ▁VARI
- ▁DRAGONFLY
- ▁SCENT
- ▁GAYAGEUM
- ▁CL
- ▁MONTHS
- ▁PAJAMAS
- ▁RESTING
- ISE
- ▁BADGES
- WORK
- KY
- ▁ADORES
- ▁COLA
- ▁MOTOR
- ▁PRODUCE
- ▁THOROUGHLY
- ▁VOWELS
- ▁COMMON
- PING
- ▁SUNFLOWER
- ▁FOLDING
- ▁DECORAT
- '8'
- ▁SCREAM
- ▁CONNECT
- ▁AUGUST
- ▁PURPOSE
- ▁PIAN
- ▁CHIMNEYS
- ▁MONDAYS
- JU
- ▁BEETLE
- ▁PEED
- ▁INTEREST
- ▁BAN
- ▁SNOR
- ▁MA
- ▁SEW
- ▁COIN
- ▁HAN
- ▁ALPHABETS
- ▁TONKATSU
- ▁HOPEFULLY
- ▁ICECREAM
- ▁REGULARLY
- ▁GALBI
- ▁CHAS
- ▁REALIZE
- ▁WORKERS
- ▁BOATS
- ▁INTERRUPT
- ▁SUBTRACT
- ▁ORGANIZING
- ▁HISTORIC
- ▁POTTER
- ATION
- ▁CHARGER
- ▁BAL
- ▁SUNLIGHT
- ▁DYE
- ▁SHOELACES
- ▁EVENLY
- RY
- '30'
- BIKE
- ▁CRAWL
- ▁CHOOS
- ▁ROBBINS
- ▁SHOOK
- ▁SPLASH
- ASKIN
- ▁UNTIE
- YMP
- ▁STING
- IOUS
- ▁PA
- ▁CAROLS
- ▁SUDDEN
- ▁MACKEREL
- ▁NOSEBLEED
- ▁SCREW
- ▁HANOK
- TOMS
- ▁STRA
- DAY
- ▁RIBBON
- MILKY
- BEAN
- ▁TOMATO
- ▁NATIONAL
- ▁SPRITE
- ▁PANIX
- ▁WISE
- ZED
- ▁CHEWING
- ▁FOOTS
- ▁SHAKES
- ADA
- 'NO'
- ▁DIFFERENTLY
- SLEEVE
- ▁930
- ▁GYEONGJU
- ▁RAPUNZEL
- ▁ROMANTIC
- ▁FARTHER
- ▁CAPE
- IER
- ETY
- ▁HARDEST
- ▁TURNING
- ▁3000
- GENEROUS
- ▁BOO
- ▁ATTENTION
- ▁DWARVES
- ▁HAKNYEON
- ▁OUTDOOR
- ▁RESORT
- ▁SWOLLEN
- ▁PINCH
- ▁PURE
- STER
- ▁GRAB
- ▁BIO
- ▁HURRICANE
- ▁JUDGE
- ▁LANE
- ▁OINK
- ▁SPRAINED
- ▁THIEVES
- ▁TRAPPED
- BIL
- ▁RANCH
- ▁TWENTYTH
- ▁ANNE
- OLD
- NIGHT
- ▁HEIGHTS
- ▁BRICK
- ▁GRATEFUL
- ▁VITAMIN
- ▁HAMSTER
- ▁USELESS
- ▁INVENTOR
- ▁ULSAN
- ▁PRETENDING
- ▁PANDAS
- GGING
- UL
- AG
- COMING
- ▁HUNT
- ▁REMOVE
- ▁OCTOBER
- ▁SEPARATE
- ▁YAWN
- ▁PALE
- ▁UM
- ▁FLOATING
- ▁CO
- HAVE
- ▁SNOWY
- ▁SHOELACE
- GRAPHY
- ▁MELT
- ▁FISHBONE
- UG
- ▁CHIL
- ▁POOPED
- ▁YUT
- ▁PILL
- '0000'
- ▁SURVIVE
- ▁EXAMIN
- ▁TRU
- ▁BACKGROUND
- ▁BEGINNING
- ▁MACARONS
- ▁SURFING
- ▁VERANDA
- ▁ASSEMBLE
- ▁HANGUL
- ▁REACTION
- ▁DAUGHTERS
- MENT
- QUET
- RMALLY
- ANG
- ▁LID
- ▁RESERVATION
- SOON
- ▁FLIP
- CAN
- ▁JUICY
- ▁KINGDOM
- ▁SOCIETY
- ▁TADPOLE
- ▁JAMSIL
- ▁WI
- ▁GRADUATED
- ▁PRE
- ▁SCRATCHING
- ▁PO
- ▁APPEARS
- ILY
- FAT
- FOOD
- ▁DISAPPEAR
- ▁FAINT
- ▁FLOAT
- ▁RUBB
- ▁TRANSFER
- ▁COMFORT
- ▁BALLERINA
- ▁DESCRIPTION
- ▁GENTLY
- ▁HAPPIER
- ▁RINGTONE
- ▁ARGUING
- ▁CONDITIONER
- PM
- IET
- CU
- ▁EARTHQUAKES
- ▁CHICK
- ▁TR
- ▁TYPHOON
- ▁BUNS
- ▁RUNNER
- NDC
- ▁WAH
- ▁JELL
- ENDY
- ▁COMMU
- ▁FARMS
- ▁SLEEVES
- ▁BEETLES
- LOW
- ▁MEATBALL
- ALKIE
- ▁MAGNIF
- ▁CONNIE
- ▁NEIGHBOR
- ▁OPERA
- ▁PINOCCHIO
- ▁SHOEMAKER
- ▁CRAFT
- ▁ONESIX
- ▁FLOW
- WD
- HOO
- ▁PRESENTATIONS
- ▁CHIP
- ITE
- ▁ANIMAT
- ▁DUB
- ▁FLOOD
- ▁KAKAO
- ▁RESU
- ▁UNBELIEVABLE
- ▁GRIN
- ▁HEALTHIER
- ▁SIXTH
- ▁CHOSEN
- ▁LOSER
- ▁BLED
- REALLY
- ▁IGNOR
- ▁PRODUCT
- RIST
- ▁DISCOURAGED
- ▁DODGE
- ▁FORECAST
- ▁OWL
- ▁TREASURE
- ▁UNIFORM
- ▁LOCAT
- ▁TUBE
- DON
- ▁FOLDED
- ▁WEIGH
- ▁RUIN
- ▁CRUSH
- ▁PARAD
- ▁OBESE
- ▁ORGANIZE
- ▁PRINCIPAL
- ▁RATTLING
- ▁RESERVE
- ▁RHYM
- ▁SIP
- ▁UNDERWATER
- ▁TAEG
- ▁TRAVELLING
- ▁STACK
- ▁RI
- ▁BUNDLES
- YEAR
- SAME
- AND
- ▁CHEESECAKE
- ▁EPISODE
- ▁FAMILIES
- ▁FIFTH
- ▁RHINITIS
- ▁SAUNA
- NCHES
- ▁EXCE
- TIQUE
- ▁COMBO
- ▁STRINGS
- ▁COLORFUL
- ▁FLOWS
- ▁COOLEST
- ▁OPPAS
- ATING
- ATE
- ▁MELTS
- ▁CHOPSTICK
- ▁BRANCH
- ▁FRUSTRATED
- ▁GREASY
- ▁EXIST
- ▁WAVING
- ▁APP
- ▁SODA
- ▁FALLEN
- ▁PRO
- SHAPED
- NG
- ▁CONNECTED
- ▁12
- ▁BANDAID
- ▁DISTANCE
- ▁DRAIN
- ▁MEASURE
- ▁TEMPLE
- ▁WORKBOOK
- ▁EIGHTAM
- ▁WARN
- ▁BURNT
- BOARD
- ▁DE
- IFF
- RTH
- ▁MUSHROOMS
- ▁POWERFUL
- STICK
- ▁VOUCHERS
- ▁BLEED
- ▁BRAID
- ▁CREPE
- ▁HAWKING
- ▁FLAM
- ▁SCORE
- ▁RELEASED
- ▁TICKLED
- BU
- FISH
- ATIVE
- CLUSI
- ▁CLINIC
- ▁CROOKED
- ▁RELAY
- ▁SCOOTER
- ▁SEBASTIAN
- ▁SUFFER
- ▁TEENAGER
- ▁BATHHOUSE
- ▁WRIST
- ▁BAKERIES
- ▁BRANCHES
- ▁SAMYUKGU
- ▁SCU
- ENDER
- ▁INGREDIENTS
- ▁INVENTED
- ▁BOWING
- SSES
- WAR
- ▁PRESSED
- ▁SQUEEZ
- SIGNED
- WON
- ▁70
- ▁APPROACH
- ▁CHAPPED
- ▁DUMB
- ▁FREEZING
- ▁MAGNIFIER
- ENTIAL
- IE
- ▁CLOSELY
- ▁DIAPERS
- OUS
- ▁DIRT
- ▁CENTIMETER
- ▁FLOWERPOT
- ▁FOAM
- ▁POLITIC
- ▁PORRIDGE
- ▁PEDIATRICIAN
- ▁FIREWORKS
- ▁TROUBLEMAKER
- ▁PILLAR
- ▁EVACUATE
- ▁SILLA
- EUK
- ANDING
- ▁FAINTED
- ERMAN
- ▁SEAGULL
- ▁CHICKS
- ▁SWEATING
- INGO
- PAPER
- ▁AGREED
- ▁CLAPP
- VA
- ▁STRENGTH
- SOONGSIL
- ‘
- ▁CONVENIENT
- ▁DECEMBER
- ▁FORTUNATELY
- ▁FURNITURE
- ▁HAGWON
- ▁LOUNGE
- ▁MOKDONG
- ▁PALM
- ▁SPRINKLE
- ▁STIRFR
- RUNK
- ▁ANKLE
- ▁SELF
- ▁SEVENTH
- LESS
- ▁DIVING
- ADE
- ▁RANG
- SHINY
- WITH
- ▁BRAVELY
- ▁BADMINTON
- ▁BULGUKSA
- ▁KARAOKE
- ▁ADMIT
- ▁GINGER
- ▁LAID
- ▁SNOWBOARD
- ▁HOPPING
- ▁UDO
- ▁BULGING
- ▁CARP
- ▁FACT
- ▁GROUPS
- ▁ENTERING
- ▁RIP
- ▁MAR
- LOCK
- ▁JE
- ▁ADMISSION
- ▁CHRYSANTHEMUM
- ▁DIARIES
- ▁DISPOSABLE
- ▁LOACH
- ▁PARROT
- ▁SCULPTURE
- ▁TERRIF
- ▁VOLUME
- ▁REPRESENTATIVE
- ▁MEOW
- ▁CHEEK
- ▁JEJUDO
- ▁HARMFUL
- ▁BRUISED
- ▁MINERAL
- AINT
- ▁EDIT
- WARDS
- HY
- ▁VIEW
- ▁EXACT
- ROUGHT
- OCKPAPERSCISSORS
- ▁CHESTNUT
- ▁HAWAII
- ▁PIMPLES
- ▁REMOTE
- ▁SOLUTION
- ▁COMPETE
- ▁SOFTLY
- ▁BUNDLE
- ▁LIP
- ▁GRADER
- WOO
- RIS
- STORY
- DAYS
- COLORED
- FOR
- ▁COLLAPSE
- ▁STEPP
- ▁BRILL
- RSELVES
- ▁ACCORDING
- ▁BACON
- ▁BAEK
- ▁BUTTERFLIES
- ▁COSMOS
- ▁CYCLING
- ▁DISTRICT
- ▁ESTATE
- ▁HUMID
- ▁MERMAID
- ▁PAPRIKA
- ▁PHONICS
- ▁BELONG
- ▁YUKJANG
- ▁ANIMATION
- ▁FLIPP
- ▁DUMPLING
- ▁BLOSSOM
- UNG
- ▁EXPLORE
- ▁INSECTS
- ▁JI
- HEART
- GHTS
- ▁ASTRONAUT
- ▁BELLHAMMER
- ▁LICENSE
- ▁NEPTUNE
- ▁OPPOS
- ▁REFRIGERATOR
- ▁STONEBUSH
- ▁1000
- ▁APPLI
- ▁SUBTRACTION
- ▁HOOD
- ▁WIDER
- ▁BROOM
- ▁UNIVERSITY
- ▁PRINCESSES
- ▁MINT
- ▁PARENT
- ▁PEEING
- ▁ADORE
- DONG
- ▁SP
- ANCE
- ▁EXPLOR
- TTEOKBOKKI
- WHEEL
- ▁ABANDONED
- ▁CALLUSES
- ▁COSMETICS
- ▁LADYBUG
- ▁MARIA
- ▁PRONUNCIATION
- ▁BOUQUET
- ▁SOGGY
- ▁LEFTOVERS
- ▁MIKE
- ▁TANK
- ▁SPAC
- ▁FRAME
- MADE
- IVAL
- ▁YE
- ▁GATHERING
- IAN
- ▁KITTENS
- IBLE
- ▁ABBREVIAT
- ▁CHAPAGETTI
- ▁ENGINES
- ▁EQUIPMENT
- ▁INTERSECTION
- ▁SANITIZER
- ▁DOKDO
- ▁GENERATOR
- ▁MEDIUM
- ▁BALANCE
- ▁CHART
- ▁TELEVISION
- ▁JAJANG
- ▁LOLLY
- ▁PHOTOGRAPH
- ORD
- ▁KKA
- ▁SOLES
- ▁BALM
- ▁DECORATION
- ▁THORN
- ▁ARMY
- ▁YU
- EEK
- NK
- BOY
- LENGTH
- TONY
- HEN
- ▁RELEASE
- ▁LOOSE
- ▁COMPLETE
- KYOCHON
- ▁ARCADE
- ▁BRIM
- ▁CORONA
- ▁CRANE
- ▁CUPCAKE
- ▁KITCHENWARE
- ▁LULLABY
- ▁MODER
- ▁MUSKET
- ▁OBEDIEN
- ▁PIKACHU
- ▁PROVERBS
- ▁SALMON
- ▁YUKGAEJANG
- ▁TANNED
- ▁VILLA
- ▁DIRECTIONS
- ▁CLAY
- ▁ADMIR
- ▁DIRECTOR
- ▁DAMAGED
- ▁BURST
- ▁TOPIC
- ▁DOODLED
- ▁COMPAR
- ▁BUBBLE
- ▁HO
- ▁KISSE
- ▁JO
- ▁BLOATED
- ▁CONSONANTS
- ▁DOWNLOAD
- ▁ELBOW
- ▁FUNNIEST
- ▁PORORO
- ▁SLOTS
- ▁VACUUM
- ▁BOTTOM
- ▁MANDELA
- ▁IMSIL
- ▁VIP
- ▁TOMMY
- EATURE
- ▁PINE
- ▁EIGHTTHIRTY
- ▁HIDEANDSEEK
- ▁COLLAPSED
- ▁UNDERSTOOD
- ▁CRUSHED
- ▁TRI
- OF
- ▁DI
- ▁CARNATION
- ORY
- NAILS
- LENT
- ▁PUBLISH
- PLACE
- ▁CLIP
- ILLA
- ▁SUNSHIN
- ▁ACTUAL
- ▁SUCCESS
- COCK
- ▁60
- ▁BENEFITS
- ▁CLAW
- ▁HAUNT
- ▁LIBRARIES
- ▁LOTTERIA
- ▁MERCURY
- ▁MITTEN
- ▁SWAM
- ▁ROTTEN
- ▁SERVANT
- DENTAL
- ▁LEGEND
- ▁ROT
- ▁PRICKED
- ▁230
- ▁TUB
- ▁WINK
- ▁HUNTER
- ▁SCREAMING
- ▁FINALE
- ▁SOAPY
- ▁REDESIGNING
- NNA
- ▁DIAPER
- ▁BANG
- IK
- CHAN
- TIER
- ▁MOR
- ▁METERS
- ▁HUGG
- DAE
- FTER
- CHO
- SHIP
- EITHER
- CTIVE
- ▁KI
- ▁RU
- ▁BRAND
- ▁AMOUNT
- ▁EXPLANATION
- ▁HAIRPIN
- ▁HORRIBLE
- ▁INTERIOR
- ▁LANDSLIDE
- ▁NEVERTHELESS
- ▁PERSIMMON
- ▁POSTPONE
- ▁SCIENTIST
- ▁SLACK
- ▁STORM
- ▁STREAM
- ▁SURPRISING
- ▁URGENT
- ▁ZOMBIE
- ▁STOOL
- ▁LOAD
- NAMBU
- ▁ANNOUNCEMENT
- IKES
- GRAN
- ▁ABC
- ▁COMPLE
- ▁FASCINATING
- ▁REMOVED
- ▁CRAWLING
- ▁INTERRUPTING
- RELLA
- RAGE
- ▁PEELING
- ▁HUMANS
- ▁MON
- ▁BEGIN
- ▁VEGETABLE
- ▁SLEEVE
- GLE
- ▁THA
- ISH
- TRAINER
- '7'
- ROAD
- DRIVER
- ▁PRETEN
- ▁ALLOW
- UZZLE
- ▁DEMONSTRAT
- ▁STIR
- ▁BROC
- ▁CARCASON
- ▁EQUALLY
- ▁EXPERIMENT
- ▁HESITAT
- ▁SPINNING
- ▁MENTOR
- ▁ABBREVIATION
- ▁RASHES
- ▁ASSEMBLING
- ▁DUNG
- MEMOR
- ▁PEACEFUL
- ▁HARDENS
- OSU
- SSUED
- ▁FRECKLE
- TIOUS
- ▁REALIZ
- ▁SQUA
- LIFE
- THINK
- ▁BIK
- ▁KNIT
- ZZA
- ▁ALITTLE
- ▁BAREFOOT
- ▁CONCENTRATE
- ▁DALGONA
- ▁GUIDEBOOK
- ▁KIDZANIA
- ▁PALACE
- ▁ROSHEN
- ▁TEXTBOOK
- ▁TUNAKIMBAP
- OTTEOK
- ▁830
- ▁HOSE
- ITIES
- NIX
- ▁FIFTEENCM
- ▁IMAGE
- ▁CHEESEKIMBAP
- ▁HOTTER
- ▁PATT
- ▁CLIPPE
- ▁FOXES
- EAGLE
- ▁QUE
- NDING
- ▁DETER
- AP
- YEO
- UED
- ▁PAI
- ▁EXCITEDLY
- ▁WAVED
- ▁BUL
- BUT
- ▁METER
- KIMBAP
- HAND
- WATCHING
- ▁CONVERS
- ▁FLICK
- ▁PEDIATRIC
- NAMENT
- REIGN
- ▁BIKINI
- ▁BUCKWHEATCREPE
- ▁JENGA
- ▁LAUNCH
- ▁OPTICIAN
- ▁PIGTAIL
- ▁SIMON
- ▁SUBSCRIBE
- ▁TICKLISH
- NELS
- ▁PINWHEEL
- INATED
- ▁DRUG
- ▁ONESIXCM
- ▁EIGHTH
- ▁SMARTEST
- ▁HUNTING
- ▁PIL
- UMMY
- ITION
- UNNI
- ▁SU
- ▁POWERFULL
- ▁WAFFLE
- DIA
- ▁TICK
- EIGHT
- PICKED
- FIFTY
- WENT
- ▁BOT
- ▁REPRESENT
- OKKI
- ▁COCOA
- ▁CUSHION
- ▁FARTHEST
- ▁PENTAGON
- ▁SLIDING
- ▁SWEAR
- ▁MOLD
- ▁BBOY
- ▁80
- ▁WATERPROOF
- ▁RAIL
- ▁CREATED
- ▁CHIRPING
- ▁SEARCH
- SEOK
- ▁TOAST
- ▁BETRAYE
- JOR
- ▁NI
- ZI
- ▁SLAMM
- ▁GU
- ▁NAG
- ▁SERVED
- UFFY
- ▁INSECT
- ▁ZIPPE
- LP
- YEONG
- ESSION
- IPPED
- ▁CELEBRAT
- ▁CHANG
- '50'
- POST
- ENTI
- ▁DISAPPOINT
- ▁QU
- ▁FOREIGN
- ▁POSSIB
- ▁CONGRATULAT
- ADOW
- ▁TAE
- CAFÉ
- ▁COURIER
- ▁DAEJEON
- ▁DOWNSTAIRS
- ▁EXPER
- ▁PREFERENCE
- ▁LACT
- ▁OCCUR
- ORIENT
- ▁SPACIOUS
- INARY
- ▁KNITTING
- ▁LIBERTY
- VILLE
- RB
- ▁BARKED
- DAN
- ▁TIN
- ATOR
- ▁PHO
- RIED
- ▁JINDA
- OUND
- HOE
- ▁STRETCHE
- ▁SNEEZ
- EVI
- QUALITY
- MOM
- ▁BLIND
- HYEON
- ECTION
- ROKE
- ▁ANCHOVY
- ▁ASHAMED
- ▁COASTER
- ▁CONFUSING
- ▁CYCLIST
- ▁DANDELION
- ▁FIREFLIES
- ▁HYUNG
- ▁KNOWLEDGE
- ▁NARACULA
- ▁SCAB
- ▁VOCABULARY
- ▁CONFIDENT
- ▁RELAT
- ▁FOOLISH
- ▁NINEAM
- ▁ZO
- ▁BOU
- ▁FLATTERED
- ▁BLINDING
- ▁SKATER
- ▁ROLLER
- ▁FIRM
- COTT
- NURI
- ▁WARMER
- ▁LONGEST
- ▁TICKLE
- ▁AMERICAN
- GI
- AGGED
- CHARGE
- TODAY
- ▁CREATE
- UMPING
- JJAEK
- ▁BEGINNER
- ▁CLICKING
- ▁CORRIDORS
- ▁DAZZLING
- ▁DERMATOLOGIST
- ▁DILIGENT
- ▁FEBRUARY
- ▁FISHBOWL
- ▁GARAETTEOK
- ▁GARGLE
- ▁INJURED
- ▁MANTISES
- ▁NAKSEONGDAE
- ▁ROAST
- ▁SNITCH
- ▁SLIMMER
- ▁DISCHARGE
- ▁SOAKED
- ▁SELECTED
- ▁VICE
- ▁INFECT
- ▁CONTAINER
- ▁NEATLY
- ▁STARSHAPED
- LOTTEWORLD
- ▁SUPPLEMENT
- ▁EIGHTTH
- ISTERS
- ▁TICKL
- ▁STRAIGHTEN
- ▁SKINN
- RANGE
- ▁TANGERINE
- ▁STO
- PREPARED
- SPROUT
- TWELVE
- TONIGHT
- ▁RECOGNI
- VAN
- BEEN
- ▁EXPLODE
- ▁CHUBB
- ANGGU
- ▁SAVI
- ▁950
- ▁ADJUST
- ▁CASTANETS
- ▁FAITH
- ▁GONGJU
- ▁GRAIN
- ▁GROSS
- ▁JUPITER
- ▁MAGPIE
- ▁SAIPAN
- ▁SKULL
- ▁SPARROW
- ▁VACCINATED
- ▁VIGOROUSLY
- ▁AUTOMATIC
- ▁NEARBY
- SEVENTEEN
- ▁TWENTI
- ▁NIKE
- ▁SEORA
- DATORS
- ▁PONG
- ▁730
- ▁SCARIER
- ▁TRUNK
- ▁BETRAYER
- ▁CHEESEGIMBAP
- ONGDAE
- ▁SEVERE
- ▁SPOONFUL
- CTATION
- ▁WITCH
- ▁LIMIT
- ▁EATTTEOKBOKKI
- GEOUS
- ▁CRAWLED
- ▁SUC
- AVED
- AGE
- ▁KITTEN
- ▁SKEWER
- IZED
- ▁TEAR
- WAVE
- ▁RACI
- ▁CONTAIN
- ▁TRO
- ▁GUGUDAN
- ▁GEPPET
- ▁PHARMACI
- MULGUK
- PPAK
- SAMJANG
- ▁ACORN
- ▁APPETITE
- ▁BRUNCH
- ▁BUMMER
- ▁DIARRHEA
- ▁FLAP
- ▁GERMS
- ▁GWANSUN
- ▁HOMETOWN
- ▁KILOMETERS
- ▁MARRIAGE
- ▁PRANKS
- ▁RADISH
- '5'
- ′
- 수
- '2'
- ́
- 子
- 예
- 요
- '3'
- É
- '6'
- '9'
- “
- .
- '1'
- 단
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/ko_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_ko_bpe5000_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: contextual_block_transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
block_size: 40
hop_size: 16
look_ahead: 16
init_average: true
ctx_pos_enc: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202304'
distributed: true
```
</details>
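The configuration above describes a contextual-block (streaming) Transformer encoder with a Transformer decoder and a CTC weight of 0.3. A minimal offline decoding sketch for this kind of ESPnet2 model is shown below; it assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed, and the model tag is a placeholder for this repository's id.

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Placeholder tag: substitute the actual repo id of this model.
speech2text = Speech2Text.from_pretrained(
    "espnet/<this-model>",
    ctc_weight=0.3,   # matches model_conf above
    beam_size=10,
)

# 16 kHz mono audio, as specified by frontend_conf (fs: 16k)
speech, rate = soundfile.read("sample.wav")
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```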
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
cherrue/RandomCrop_Rescale_epoch_3_learning_rate_5e_5_decay_0_01 | cherrue | 2023-07-06T06:30:06Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-06T05:35:06Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cherrue/pricetag_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cherrue/pricetag_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0546
- Validation Loss: 1.2226
- Train Accuracy: 0.3846
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent optimizer construction is sketched after this list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1251, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
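For reference, the serialized optimizer above roughly corresponds to the `create_optimizer` helper from Transformers; the call below is a sketch of an equivalent setup, not the exact training script.

```python
from transformers import create_optimizer

# Reproduces the AdamWeightDecay + PolynomialDecay configuration listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=1251,   # decay_steps in the config above
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```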
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3379 | 1.2276 | 0.5128 | 0 |
| 1.1973 | 1.1561 | 0.4615 | 1 |
| 1.0546 | 1.2226 | 0.3846 | 2 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Waterhorse/chessgpt-base-v1 | Waterhorse | 2023-07-06T06:19:40Z | 83 | 6 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:Waterhorse/chess_data",
"arxiv:2306.09200",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-02T22:03:14Z | ---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
---
# Chessgpt-Base-3B-v1
Chessgpt-Base-v1 is the base model of Chessgpt.
- Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
- Chat Version: [chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)
Also, we are actively working on the development of the next-generation model, ChessGPT-V2. We welcome any contribution, especially chess-related datasets. For related matters, please contact [email protected].
## Model Details
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter language model pretrained on chess-related data.
## GPU Inference
This requires a GPU with 8GB memory.
```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
MIN_TRANSFORMERS_VERSION = '4.25.1'
# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-base-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-base-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
# Conversation between two
prompt = "Q: 1.e4 c5, what is the name of this opening?A:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
```
# Uses
Excluded uses are described below.
### Direct Use
`chessgpt-base-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling.
#### Out-of-Scope Use
`chessgpt-base-v1` is a language model trained on chess-related data and may not perform well for use cases beyond the chess domain.
#### Bias, Risks, and Limitations
Just as with any language model, chessgpt-base-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.
# Evaluation
Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.
# Citation Information
```bash
@article{feng2023chessgpt,
title={ChessGPT: Bridging Policy Learning and Language Modeling},
author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
journal={arXiv preprint arXiv:2306.09200},
year={2023}
}
``` |
saintzeno/a2c-PandaReachDense-v3 | saintzeno | 2023-07-06T06:10:45Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T05:52:59Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Ryukijano/whisper-small-dv | Ryukijano | 2023-07-06T05:36:17Z | 78 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-05T06:25:50Z | ---
license: mit
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
---
# Whisper Small DV Model

## Model Description
The `whisper-small-dv` model is an advanced Automatic Speech Recognition (ASR) model, trained on the extensive [Mozilla Common Voice 13.0](https://commonvoice.mozilla.org/en/datasets) dataset. This model is capable of transcribing spoken language into written text with high accuracy, making it a valuable tool for a wide range of applications, from transcription services to voice assistants.
## Training
The model was trained using the PyTorch framework and the Transformers library. Training metrics and visualizations can be viewed on TensorBoard.
## Performance
The model's performance was evaluated on a held-out test set. The evaluation metrics and results can be found in the "Eval Results" section.
## Usage
The model can be used for any ASR task. To use the model, you can load it using the Transformers library:
```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the model (this checkpoint is a Whisper model, so the Whisper classes are used)
processor = WhisperProcessor.from_pretrained("Ryukijano/whisper-small-dv")
model = WhisperForConditionalGeneration.from_pretrained("Ryukijano/whisper-small-dv")

# Use the model for ASR: Whisper expects a 16 kHz waveform
audio_array, _ = librosa.load("path_to_audio_file", sr=16000)
inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```
## License
This model is released under the MIT license.
|
eigenscribe/etzHayim | eigenscribe | 2023-07-06T05:34:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T05:33:49Z | ---
license: creativeml-openrail-m
---
|
insub/distilbert-base-uncased-finetuned-imdb | insub | 2023-07-06T05:22:05Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T05:17:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
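A minimal masked-language-modeling sketch with the standard `fill-mask` pipeline (the example sentence is only illustrative):

```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="insub/distilbert-base-uncased-finetuned-imdb")
for pred in mask_filler("This movie was absolutely [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```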
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_random | aroot | 2023-07-06T05:13:07Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T04:53:15Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_random
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1475
- Bleu: 31.8135
## Model description
More information needed
## Intended uses & limitations
More information needed
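A minimal translation sketch, assuming the checkpoint keeps the standard mBART-50 tokenizer and language codes (`en_XX` source, `fr_XX` target):

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("aroot/eng-fra-simcse_random")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/eng-fra-simcse_random")

tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # decode into French
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```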
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nhung/layoutxlm-de-durch | nhung | 2023-07-06T05:02:06Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-06T02:20:52Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-de-durch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-de-durch
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.25e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 15000
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.10.0+cu111
- Datasets 2.11.0
- Tokenizers 0.13.3
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s45 | squeeze-ai-lab | 2023-07-06T04:46:32Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-06T03:46:53Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
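To make the idea concrete, the snippet below is an illustrative sketch (not the SqueezeLLM implementation) of splitting a weight matrix into a heavily quantizable dense part and a small sparse part that keeps the largest-magnitude outliers in full precision, roughly matching the 0.45% sparsity level of this checkpoint.

```python
import torch

def dense_sparse_split(w: torch.Tensor, outlier_frac: float = 0.0045):
    # Keep the top `outlier_frac` fraction of weights (by magnitude) in a sparse,
    # full-precision tensor; the remaining dense part is what gets low-bit quantization.
    k = max(1, int(outlier_frac * w.numel()))
    threshold = w.abs().flatten().topk(k).values.min()
    outlier_mask = w.abs() >= threshold
    sparse_part = (w * outlier_mask).to_sparse()   # stored in fp16/fp32
    dense_part = w * (~outlier_mask)               # quantized to 3 bits elsewhere
    return dense_part, sparse_part

w = torch.randn(4096, 4096)
dense, sparse = dense_sparse_split(w)
print(f"sparse fraction: {sparse._nnz() / w.numel():.4%}")
```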
## Model description
3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.45%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
mazeinmouse/a2c-AntBulletEnv-v0 | mazeinmouse | 2023-07-06T04:34:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T04:33:37Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1651.08 +/- 126.30
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
headflame02/AchaxV4 | headflame02 | 2023-07-06T04:30:16Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T04:29:24Z | ---
license: creativeml-openrail-m
---
|
aroot/eng-mya-wsample.32a | aroot | 2023-07-06T04:23:10Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T04:01:01Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-wsample.32a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-wsample.32a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8284
- Bleu: 4.7194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lovelyxs/PPO-LunarLander-v2 | lovelyxs | 2023-07-06T04:11:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T03:54:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.53 +/- 16.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
digiplay/CoffeeMix_v1 | digiplay | 2023-07-06T03:55:09Z | 307 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T02:17:13Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/40630?modelVersionId=45847
Sample image I made:

Original Author's DEMO images:


 |
aroot/eng-guj-wsample.43a | aroot | 2023-07-06T03:44:33Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T03:21:38Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-wsample.43a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-wsample.43a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2191
- Bleu: 2.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mrizalf7/t5-small-finetuned-xsum | mrizalf7 | 2023-07-06T03:42:24Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-06T00:32:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-guj-wsample.32a | aroot | 2023-07-06T03:39:41Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T03:21:39Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-wsample.32a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-wsample.32a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2257
- Bleu: 3.1070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Sandrro/text_to_subfunction_v6 | Sandrro | 2023-07-06T03:24:24Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-05T20:05:18Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: text_to_subfunction_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_to_subfunction_v6
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2720
- F1: 0.4415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5055 | 1.0 | 4365 | 3.4067 | 0.1639 |
| 2.5598 | 2.0 | 8730 | 2.6935 | 0.2833 |
| 2.1499 | 3.0 | 13095 | 2.3594 | 0.3420 |
| 1.6575 | 4.0 | 17460 | 2.2243 | 0.3921 |
| 1.2463 | 5.0 | 21825 | 2.1722 | 0.4105 |
| 0.9624 | 6.0 | 26190 | 2.1955 | 0.4341 |
| 0.7407 | 7.0 | 30555 | 2.2434 | 0.4449 |
| 0.5608 | 8.0 | 34920 | 2.3604 | 0.4329 |
| 0.4233 | 9.0 | 39285 | 2.4747 | 0.4361 |
| 0.2433 | 10.0 | 43650 | 2.5562 | 0.4404 |
| 0.2154 | 11.0 | 48015 | 2.6678 | 0.4374 |
| 0.1811 | 12.0 | 52380 | 2.8158 | 0.4341 |
| 0.1374 | 13.0 | 56745 | 2.9037 | 0.4425 |
| 0.1406 | 14.0 | 61110 | 3.0182 | 0.4366 |
| 0.1135 | 15.0 | 65475 | 3.0941 | 0.4440 |
| 0.0992 | 16.0 | 69840 | 3.1516 | 0.4437 |
| 0.1159 | 17.0 | 74205 | 3.2001 | 0.4418 |
| 0.0809 | 18.0 | 78570 | 3.2489 | 0.4373 |
| 0.1035 | 19.0 | 82935 | 3.2650 | 0.4407 |
| 0.0558 | 20.0 | 87300 | 3.2720 | 0.4415 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0.dev20230414+cu117
- Datasets 2.9.0
- Tokenizers 0.13.3
|
BaoKien/deberta-base-finetuned-squad-v2 | BaoKien | 2023-07-06T03:22:36Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-06T01:19:43Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: deberta-base-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad-v2
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
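A minimal extractive-QA sketch using the standard `question-answering` pipeline (the question/context pair is only illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="BaoKien/deberta-base-finetuned-squad-v2")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD v2 dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```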
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.753 | 1.0 | 8238 | 0.7286 |
| 0.5378 | 2.0 | 16476 | 0.7578 |
| 0.3881 | 3.0 | 24714 | 0.9221 |
### Performance
- 'exact': 81.84115219405373
- 'f1': 85.19125695340612
- 'total': 11873
- 'HasAns_exact': 80.24628879892038
- 'HasAns_f1': 86.95610556811602
- 'HasAns_total': 5928
- 'NoAns_exact': 83.43145500420522
- 'NoAns_f1': 83.43145500420522
- 'NoAns_total': 5945
- 'best_exact': 81.84115219405373
- 'best_exact_thresh': 0.9994916319847107
- 'best_f1': 85.19125695340657
- 'best_f1_thresh': 0.9994916319847107
- 'total_time_in_seconds': 294.34524957099984
- 'samples_per_second': 40.33698528277447
- 'latency_in_seconds': 0.024791143735450168
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
squeeze-ai-lab/sq-xgen-7b-8k-inst-w3-s0 | squeeze-ai-lab | 2023-07-06T03:15:42Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:32:13Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit XGen-7B instruction-tuned model (i.e. finetuned model on public domain instructional data) with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst).
* **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-xgen-7b-8k-inst-w4-s0 | squeeze-ai-lab | 2023-07-06T03:15:32Z | 0 | 1 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:33:19Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit XGen-7B instruction-tuned model (i.e. finetuned model on public domain instructional data) with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-inst).
* **Base Model:** [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w4-s0 | squeeze-ai-lab | 2023-07-06T03:14:48Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:31:51Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-xgen-7b-8k-base-w3-s0 | squeeze-ai-lab | 2023-07-06T03:14:31Z | 0 | 0 | null | [
"arxiv:2306.07629",
"region:us"
] | null | 2023-07-05T23:31:15Z | **SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach,
we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit XGen-7B Base model with 8K sequence length quantized using SqueezeLLM.
More details on the quantization method can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
More detailed model descriptions can be found in the [link](https://huggingface.co/Salesforce/xgen-7b-8k-base).
* **Base Model:** [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base) (by Salesforce AI Research)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0% (dense-only)
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
h2oai/h2ogpt-research-oasst1-llama-65b | h2oai | 2023-07-06T03:11:31Z | 1,502 | 9 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"open-source",
"en",
"dataset:h2oai/openassistant_oasst1_h2ogpt_graded",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-13T18:11:13Z | ---
license: other
language:
- en
library_name: transformers
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
datasets:
- h2oai/openassistant_oasst1_h2ogpt_graded
---
# h2oGPT Model Card
## Summary
H2O.ai's `h2ogpt-research-oasst1-llama-65b` is a 65 billion parameter instruction-following large language model (NOT licensed for commercial use).
- Base model: [decapoda-research/llama-65b-hf](https://huggingface.co/decapoda-research/llama-65b-hf)
- Fine-tuning dataset: [h2oai/openassistant_oasst1_h2ogpt_graded](https://huggingface.co/datasets/h2oai/openassistant_oasst1_h2ogpt_graded)
- Data-prep and fine-tuning code: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
- Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b/blob/main/llama-65b-hf.h2oaiopenassistant_oasst1_h2ogpt_graded.1_epochs.113510499324f0f007cbec9d9f1f8091441f2469.3.zip)
## Chatbot
- Run your own chatbot: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
[](https://github.com/h2oai/h2ogpt)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the following libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.1
pip install einops==0.6.1
```
```python
import torch
from transformers import pipeline, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", padding_side="left")
generate_text = pipeline(model="h2oai/h2ogpt-research-oasst1-llama-65b", tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", prompt_type="human_bot")
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b/blob/main/h2oai_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", torch_dtype=torch.bfloat16, device_map="auto")
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer, prompt_type="human_bot")
res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 8192, padding_idx=31999)
(layers): ModuleList(
(0-79): 80 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=8192, out_features=8192, bias=False)
(k_proj): Linear(in_features=8192, out_features=8192, bias=False)
(v_proj): Linear(in_features=8192, out_features=8192, bias=False)
(o_proj): Linear(in_features=8192, out_features=8192, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=8192, out_features=22016, bias=False)
(down_proj): Linear(in_features=22016, out_features=8192, bias=False)
(up_proj): Linear(in_features=8192, out_features=22016, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=8192, out_features=32000, bias=False)
)
```
## Model Configuration
```json
LlamaConfig {
"_name_or_path": "h2oai/h2ogpt-research-oasst1-llama-65b",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 0,
"custom_pipelines": {
"text-generation": {
"impl": "h2oai_pipeline.H2OTextGenerationPipeline",
"pt": "AutoModelForCausalLM"
}
},
"eos_token_id": 1,
"hidden_act": "silu",
"hidden_size": 8192,
"initializer_range": 0.02,
"intermediate_size": 22016,
"max_position_embeddings": 2048,
"max_sequence_length": 2048,
"model_type": "llama",
"num_attention_heads": 64,
"num_hidden_layers": 80,
"pad_token_id": -1,
"rms_norm_eps": 1e-05,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.30.1",
"use_cache": true,
"vocab_size": 32000
}
```
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
TBD
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
aroot/eng-fra-wsample.32a | aroot | 2023-07-06T03:04:47Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T02:44:41Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-wsample.32a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-wsample.32a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1237
- Bleu: 32.9191
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bellaaazzzzz/models_fill | Bellaaazzzzz | 2023-07-06T02:41:19Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-06T02:35:57Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-Bellaaazzzzz/models_fill
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
Validation result of round 1.

Validation result of round 2.

|
csikasote/wav2vec2-large-mms-1b-bem-colab | csikasote | 2023-07-06T02:40:47Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-05T22:46:43Z | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-bem-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-bem-colab
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1638
- Wer: 0.3223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2095 | 1.03 | 200 | 0.2066 | 0.3850 |
| 0.4028 | 2.06 | 400 | 0.1861 | 0.3539 |
| 0.3751 | 3.09 | 600 | 0.1781 | 0.3417 |
| 0.3631 | 4.12 | 800 | 0.1739 | 0.3392 |
| 0.3481 | 5.15 | 1000 | 0.1688 | 0.3340 |
| 0.3391 | 6.19 | 1200 | 0.1690 | 0.3319 |
| 0.3301 | 7.22 | 1400 | 0.1654 | 0.3285 |
| 0.3237 | 8.25 | 1600 | 0.1667 | 0.3262 |
| 0.3186 | 9.28 | 1800 | 0.1638 | 0.3223 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
saintzeno/a2c-AntBulletEnv-v0 | saintzeno | 2023-07-06T02:12:44Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T01:49:03Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1201.73 +/- 71.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
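A minimal loading sketch (assumed, not from the original card; the checkpoint filename below is a guess):

```python
# Minimal sketch (assumed): load the trained A2C agent from the Hub.
# The filename is an assumption and may differ in the actual repository.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="saintzeno/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```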
|
Huggingfly/Reinforce-Cartpole-v1 | Huggingfly | 2023-07-06T01:38:51Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T01:38:41Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
geovani2006/openchatgpt | geovani2006 | 2023-07-06T01:14:01Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-06T01:14:01Z | ---
license: bigscience-openrail-m
---
|
anujsahani01/finetuned_Mbart_mr_en | anujsahani01 | 2023-07-06T01:08:06Z | 120 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-05T17:34:56Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuned_Mbart_mr_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_Mbart_mr_en
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dmatekenya/whisper-small_finetuned_sw_chich | dmatekenya | 2023-07-06T00:54:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-05T20:02:12Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small_finetuned_sw_chich
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small_finetuned_sw_chich
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7430
- Wer: 80.1992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0324 | 4.39 | 500 | 1.5624 | 84.6754 |
| 0.0151 | 8.77 | 1000 | 1.6639 | 82.4073 |
| 0.0099 | 13.16 | 1500 | 1.7377 | 78.8912 |
| 0.0081 | 17.54 | 2000 | 1.7430 | 80.1992 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jeffreykthomas/bloom-7b-fine-tuned-stanford | jeffreykthomas | 2023-07-06T00:51:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"question-answering",
"license:bigscience-openrail-m",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-30T21:42:30Z | ---
license: bigscience-openrail-m
---
|
chaudha7/DiaryFlow | chaudha7 | 2023-07-06T00:49:49Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-06T00:43:32Z |
### Model Description
This is a fine-tuned Bloom-7b model. It started as a demo project meant to lighten the seriousness and rapid pace surrounding "LLM" use cases.
The model has been trained on a custom ChatGPT-created dataset (https://huggingface.co/datasets/chaudha7/Diary-Entry-To-Rap) which turns diary entries into short rap songs with a motivational ending. Just something fun for the users.
### Developed by: Aashay Chaudhari
## Training procedure
The following `bits and bytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
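As a minimal sketch (assumed, not part of the original card), the settings listed above correspond roughly to the following `transformers` quantization config:

```python
# Minimal sketch (assumed): the 8-bit quantization settings listed above,
# expressed as a transformers BitsAndBytesConfig.
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```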
### Framework versions
### Results
### Input:
Dear Diary,
Oh boy, today was such an incredible day! I woke up with a burst of excitement in my heart because it was my birthday! Double digits, baby! Mom and Dad surprised
me with a brand new bike, just like the one I've been dreaming about for ages. We went for a ride in the park, and I felt like I was flying!
My friends came over for a super fun party, and we played games, danced, and laughed until our tummies hurt. Grandma baked the most delicious chocolate cake,
and I blew out all the candles with one big breath. The best part? I got some awesome presents, like a shiny diary (just like you, Diary!), art supplies, and
a book about adventures. I can't wait to read it tonight!
Tonight, as I snuggle under my cozy blanket, I can't help but feel so grateful for the love and joy that surrounds me. Being 10 feels amazing, Diary.
I can't wait to see what other incredible adventures await me this year!
Until tomorrow,
Emily
### Output:
(Verse)
Today's my birthday,
feeling so alive,
Double digits, yo,
I'm ready to strive.
Got a brand-new bike,
flying through the air,
Riding in the park,
without a single care.
Friends came over,
we had a blast,
Playing games, dancing,
memories that'll last.
Grandma's chocolate cake,
candles all aglow,
Blowing them out,
a wish ready to grow.
Shiny diary, art supplies, and books,
Unleashing my creativity, no limits, no hooks.
Grateful for love and joy, can't help but smile,
Ten years old, embracing life's every mile.
(Motivational Ending)
No matter the age, dreams light up the way,
Believe in yourself, make the most of today.
Adventures await, with each step you take,
Keep shining, keep growing, there's no limit to what you'll make.
|
CalamitousVisibility/UK_Energy_Industry_Complaints_Identifier_ver1 | CalamitousVisibility | 2023-07-06T00:28:38Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-05T22:24:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: UK_Energy_Industry_Complaints_Identifier_ver1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UK_Energy_Industry_Complaints_Identifier_ver1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a balanced dataset consisting of 17,620
publicly available customer reviews of various domestic energy suppliers in the United Kingdom.
It achieves the following results on the evaluation set:
- Loss: 0.3369
- Accuracy: 0.9561
- F1: [0.95594347 0.95621041]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
Lucas-lab/distilbert-base-uncased-finetuned-cola | Lucas-lab | 2023-07-06T00:13:07Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-02T20:28:15Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Lucas-lab/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Lucas-lab/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1848
- Validation Loss: 0.5885
- Train Matthews Correlation: 0.5019
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5153 | 0.4879 | 0.4331 | 0 |
| 0.3121 | 0.5405 | 0.4874 | 1 |
| 0.1848 | 0.5885 | 0.5019 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
eluzhnica/mpt-7b-instruct-peft-compatible | eluzhnica | 2023-07-05T23:35:23Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-05T23:14:18Z | ---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-7B-Instruct
This is MPT-7B-Instruct, but with added support for finetuning with PEFT (tested with QLoRA). It is not finetuned further; the weights are the same as the original MPT-7B-Instruct.
I have not traced through the whole Hugging Face stack to confirm this works correctly, but it does finetune with QLoRA and the outputs are reasonable.
Inspired by the implementations at https://huggingface.co/cekal/mpt-7b-peft-compatible/commits/main
and https://huggingface.co/mosaicml/mpt-7b/discussions/42.
The original description from the MosaicML team follows below:
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
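As a minimal sketch (assuming the `pipe` text-generation pipeline constructed earlier in this card), generating a response for the formatted prompt might look like:

```python
# Minimal sketch (assumed): run the formatted instruction prompt through the
# text-generation pipeline `pipe` defined earlier in this card.
with torch.autocast('cuda', dtype=torch.bfloat16):
    response = pipe(fmt_ex,
                    max_new_tokens=100,
                    do_sample=True,
                    use_cache=True)
print(response[0]['generated_text'])
```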
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
ahmedALM1221/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat-50 | ahmedALM1221 | 2023-07-05T23:21:55Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-04T18:45:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat-50
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 0.9753340184994861
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat-50
This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0909
- Accuracy: 0.9753
## Model description
More information needed
## Intended uses & limitations
More information needed
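A minimal inference sketch (assumed, not part of the original card):

```python
# Minimal sketch (assumed): classify an image with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ahmedALM1221/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-eurosat-50",
)
print(classifier("example.jpg"))  # "example.jpg" is a placeholder path
```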
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0236 | 1.0 | 122 | 1.9878 | 0.1305 |
| 1.88 | 2.0 | 244 | 1.7957 | 0.2867 |
| 1.5421 | 3.0 | 366 | 1.3813 | 0.5149 |
| 0.9489 | 4.0 | 488 | 0.9015 | 0.7030 |
| 0.8734 | 5.0 | 610 | 0.6616 | 0.7667 |
| 0.6562 | 6.0 | 732 | 0.5095 | 0.8140 |
| 0.5788 | 7.0 | 854 | 0.4036 | 0.8520 |
| 0.6737 | 8.0 | 976 | 0.3157 | 0.8921 |
| 0.4687 | 9.0 | 1098 | 0.2146 | 0.9281 |
| 0.3775 | 10.0 | 1220 | 0.2020 | 0.9353 |
| 0.3226 | 11.0 | 1342 | 0.1549 | 0.9558 |
| 0.2452 | 12.0 | 1464 | 0.0909 | 0.9753 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/eng-mya-simcse.near2.4440 | hopkins | 2023-07-05T22:49:46Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-05T22:28:28Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse.near2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse.near2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8502
- Bleu: 4.8797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/eng-mya-simcse.dev2.4440 | hopkins | 2023-07-05T22:46:19Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-05T22:24:42Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse.dev2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse.dev2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8287
- Bleu: 4.8012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
spacemanidol/flan-t5-large-xsum | spacemanidol | 2023-07-05T22:16:56Z | 106 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-02-17T21:56:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: large
results:
- task:
name: Summarization
type: summarization
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 39.7125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# large
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 39.7125
- Rouge2: 16.3418
- Rougel: 31.7246
- Rougelsum: 31.7214
- Gen Len: 26.7397
## Model description
More information needed
## Intended uses & limitations
More information needed
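A minimal summarization sketch (assumed, not part of the original card):

```python
# Minimal sketch (assumed): summarize an article with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="spacemanidol/flan-t5-large-xsum")
article = "Replace this with the article you want to summarize."  # placeholder text
print(summarizer(article, max_length=60, min_length=10))
```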
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
asenella/mmnist_MMVAEPlusconfig_resnet_seed_0_ratio_0_c | asenella | 2023-07-05T22:07:37Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-07-05T22:07:20Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
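Applied to this repository, the call would presumably be (a sketch using the repo id from the card header):

```python
# Minimal sketch (assumed): load this specific checkpoint by its repo id.
from multivae.models import AutoModel

model = AutoModel.load_from_hf_hub(
    hf_hub_path="asenella/mmnist_MMVAEPlusconfig_resnet_seed_0_ratio_0_c"
)
```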
|
hopkins/eng-guj-simcse.dev2.4440 | hopkins | 2023-07-05T22:04:11Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-05T21:47:27Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse.dev2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse.dev2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2375
- Bleu: 2.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nekoboost/ppo-LunarLander-v2 | nekoboost | 2023-07-05T21:58:49Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-05T21:58:34Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -202.98 +/- 120.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
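A minimal loading sketch (assumed, not from the original card; the checkpoint filename below is a guess):

```python
# Minimal sketch (assumed): load the trained PPO agent from the Hub.
# The filename is an assumption and may differ in the actual repository.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="nekoboost/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```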
|
josero23/irrut | josero23 | 2023-07-05T21:55:44Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-05T21:42:44Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### irrut Dreambooth model trained by josero23 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
newconew/speecht5_finetuned_voxpopuli_nl | newconew | 2023-07-05T21:55:25Z | 80 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-05T19:33:24Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5194 | 4.3 | 1000 | 0.4806 |
| 0.494 | 8.61 | 2000 | 0.4670 |
| 0.4929 | 12.91 | 3000 | 0.4642 |
| 0.4914 | 17.21 | 4000 | 0.4612 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Deigant/t5-base-daily-dialog-finetuned-1 | Deigant | 2023-07-05T21:51:36Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-05T20:43:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-daily-dialog-finetuned-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-daily-dialog-finetuned-1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6520
- eval_runtime: 29.7693
- eval_samples_per_second: 271.051
- eval_steps_per_second: 16.964
- epoch: 0.83
- step: 4500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/eng-fra-simcse.dev2.4440 | hopkins | 2023-07-05T21:32:34Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-05T21:12:42Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse.dev2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse.dev2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1146
- Bleu: 33.6862
## Model description
More information needed
## Intended uses & limitations
More information needed
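A minimal translation sketch (assumed, not part of the original card; it presumes the repository ships the MBart-50 tokenizer and that the standard `en_XX`/`fr_XX` language codes apply):

```python
# Minimal sketch (assumed): English-to-French translation with this fine-tuned MBart-50 model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "hopkins/eng-fra-simcse.dev2.4440"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

tokenizer.src_lang = "en_XX"  # assumed source language code
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # assumed target language code
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```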
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jeffboudier/vision-transformers-spain-or-italy-fan | jeffboudier | 2023-07-05T21:29:05Z | 296 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vision-transformers--spain-or-italy-fan
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5666666626930237
---
# vision-transformers--spain-or-italy-fan
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### italy soccer fan

#### spain soccer fan
 |
cleandata/whisper-small-dv | cleandata | 2023-07-05T21:27:43Z | 79 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-05T20:25:03Z | ---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - local
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.245470668011267
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1680
- Wer Ortho: 62.1074
- Wer: 13.2455
## Model description
More information needed
## Intended uses & limitations
More information needed
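A minimal transcription sketch (assumed, not part of the original card):

```python
# Minimal sketch (assumed): transcribe a Dhivehi audio file with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="cleandata/whisper-small-dv")
print(asr("audio.wav"))  # "audio.wav" is a placeholder path
```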
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1233 | 1.63 | 500 | 0.1680 | 62.1074 | 13.2455 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kejolong/kunoichi | kejolong | 2023-07-05T21:23:28Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-05T21:21:08Z | ---
license: creativeml-openrail-m
---
|
KevinQuijano/model | KevinQuijano | 2023-07-05T21:12:27Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-05T14:32:19Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - KevinQuijano/model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
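A minimal inference sketch, added for illustration (not from the original card); it assumes the repository loads with the standard `StableDiffusionPipeline` and that a CUDA GPU is available:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth weights; fp16 is an assumption to keep memory usage modest.
pipe = StableDiffusionPipeline.from_pretrained("KevinQuijano/model", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Use the instance prompt the model was trained on ("a photo of sks dog").
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks-dog.png")
```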
|
NasimB/gpt2-concat-aochiles-14k | NasimB | 2023-07-05T20:51:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-05T18:35:59Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochiles-14k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochiles-14k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7211 | 0.29 | 500 | 5.6349 |
| 5.3799 | 0.59 | 1000 | 5.1983 |
| 5.0235 | 0.88 | 1500 | 4.9554 |
| 4.7478 | 1.18 | 2000 | 4.8045 |
| 4.5979 | 1.47 | 2500 | 4.6854 |
| 4.4961 | 1.76 | 3000 | 4.5843 |
| 4.3569 | 2.06 | 3500 | 4.5164 |
| 4.1739 | 2.35 | 4000 | 4.4680 |
| 4.149 | 2.65 | 4500 | 4.4129 |
| 4.1093 | 2.94 | 5000 | 4.3581 |
| 3.8978 | 3.24 | 5500 | 4.3622 |
| 3.8629 | 3.53 | 6000 | 4.3327 |
| 3.8463 | 3.82 | 6500 | 4.3044 |
| 3.726 | 4.12 | 7000 | 4.3127 |
| 3.5714 | 4.41 | 7500 | 4.3116 |
| 3.5846 | 4.71 | 8000 | 4.2872 |
| 3.5668 | 5.0 | 8500 | 4.2693 |
| 3.3167 | 5.29 | 9000 | 4.3073 |
| 3.3274 | 5.59 | 9500 | 4.3060 |
| 3.3202 | 5.88 | 10000 | 4.3010 |
| 3.2207 | 6.18 | 10500 | 4.3137 |
| 3.1707 | 6.47 | 11000 | 4.3147 |
| 3.1663 | 6.76 | 11500 | 4.3166 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
choward/csv | choward | 2023-07-05T20:46:13Z | 0 | 0 | null | [
"text-generation",
"region:us"
] | text-generation | 2023-07-05T20:42:22Z | ---
pipeline_tag: text-generation
--- |
Gaborandi/Clinical-Longformer-MLM-pubmed | Gaborandi | 2023-07-05T20:42:18Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"longformer",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-22T02:00:45Z | ---
tags:
- generated_from_trainer
model-index:
- name: Clinical-Longformer-MLM-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clinical-Longformer-MLM-pubmed
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3126
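A hedged usage sketch (not part of the original card): since the base model is a Longformer, the fill-mask pipeline with the `<mask>` token should apply; the example sentence is illustrative only.
```python
from transformers import pipeline

# Load the domain-adapted checkpoint for masked-token prediction.
fill_mask = pipeline("fill-mask", model="Gaborandi/Clinical-Longformer-MLM-pubmed")

# Illustrative clinical sentence; <mask> is the Longformer mask token.
print(fill_mask("The patient was started on <mask> for hypertension."))
```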
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 471 | 1.3858 |
| No log | 2.0 | 942 | 1.3160 |
| No log | 3.0 | 1413 | 1.2951 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
Gaborandi/Bert_news_classifier | Gaborandi | 2023-07-05T20:42:02Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-10T06:40:52Z | - Fine-tuned BERT model on news data
- Training data: https://www.kaggle.com/datasets/sadikaljarif/fake-news-detection-dataset-english
- The model achieved an AUC score of 0.9990 after just 3 epochs
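A minimal usage sketch, added for illustration (the label names returned depend on how the checkpoint was exported):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for sequence classification.
classifier = pipeline("text-classification", model="Gaborandi/Bert_news_classifier")

# Illustrative headline; output labels depend on the exported config.
print(classifier("Breaking: scientists confirm the moon is made of cheese."))
```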
|
Gaborandi/MedBERT-breastcancer | Gaborandi | 2023-07-05T20:41:38Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-31T18:51:41Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: MedBERT-breastcancer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedBERT-breastcancer
This model is a fine-tuned version of [Charangan/MedBERT](https://huggingface.co/Charangan/MedBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 12263 | 1.0881 |
| No log | 2.0 | 24526 | 1.0259 |
| No log | 3.0 | 36789 | 0.9937 |
| No log | 4.0 | 49052 | 0.9831 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.13.2
|
egarciamartin/poca-SoccerTwos | egarciamartin | 2023-07-05T20:40:50Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-07-05T20:40:07Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: egarciamartin/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vinson099/DatasetModel | vinson099 | 2023-07-05T20:34:01Z | 191 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-05T18:00:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: DatasetModel
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: flower_photos
split: train[:500]
args: flower_photos
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DatasetModel
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6457
- Accuracy: 1.0
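A short usage sketch, added for illustration; `flower.jpg` is a placeholder path for a local image:
```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint with the image-classification pipeline.
classifier = pipeline("image-classification", model="vinson099/DatasetModel")

# "flower.jpg" is a placeholder; any local image path or PIL.Image works.
print(classifier("flower.jpg"))
```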
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 1.2651 | 0.99 |
| 1.3875 | 1.92 | 12 | 0.7931 | 1.0 |
| 1.3875 | 2.88 | 18 | 0.6457 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SaffalPoosh/falcon_7B_instruct_safetensors | SaffalPoosh | 2023-07-05T20:27:23Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"RefinedWebModel",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-05T20:13:30Z | Converted using oobabooga script to safetensors to test the TGI LLM inference engine |
durdana/alpaca7B-lora | durdana | 2023-07-05T20:25:35Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-05T20:25:31Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
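A hedged loading sketch (not part of the original card): the base model is not stated, so the LLaMA-7B id below is purely an assumption used to illustrate how a PEFT adapter is attached.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "huggyllama/llama-7b"  # assumption -- replace with the actual base model
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "durdana/alpaca7B-lora")
```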
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted | jordyvl | 2023-07-05T20:02:58Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-05T17:53:13Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_weighted
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0783
- Accuracy: 0.71
- Exit 0 Accuracy: 0.115
- Exit 1 Accuracy: 0.1575
- Exit 2 Accuracy: 0.185
- Exit 3 Accuracy: 0.0875
- Exit 4 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7602 | 0.1125 | 0.0925 | 0.0675 | 0.0875 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7309 | 0.115 | 0.1175 | 0.0675 | 0.1075 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6967 | 0.1325 | 0.095 | 0.06 | 0.1175 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6631 | 0.17 | 0.085 | 0.0575 | 0.1275 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6242 | 0.205 | 0.085 | 0.0575 | 0.1225 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5736 | 0.2175 | 0.0875 | 0.0825 | 0.12 | 0.0625 | 0.0625 |
| No log | 6.72 | 14 | 2.5410 | 0.215 | 0.09 | 0.08 | 0.12 | 0.0625 | 0.0625 |
| No log | 7.72 | 16 | 2.5229 | 0.2325 | 0.1 | 0.0925 | 0.13 | 0.0625 | 0.0625 |
| No log | 8.72 | 18 | 2.4841 | 0.2525 | 0.1 | 0.1 | 0.1325 | 0.0625 | 0.0625 |
| No log | 9.72 | 20 | 2.4382 | 0.29 | 0.1 | 0.1025 | 0.1325 | 0.0625 | 0.0625 |
| No log | 10.72 | 22 | 2.3823 | 0.3 | 0.1 | 0.1275 | 0.1325 | 0.0625 | 0.0625 |
| No log | 11.72 | 24 | 2.3389 | 0.3275 | 0.1 | 0.1175 | 0.1225 | 0.0625 | 0.0625 |
| No log | 12.72 | 26 | 2.3002 | 0.35 | 0.0975 | 0.12 | 0.1225 | 0.0625 | 0.0625 |
| No log | 13.72 | 28 | 2.2421 | 0.36 | 0.0975 | 0.125 | 0.1275 | 0.0625 | 0.0625 |
| No log | 14.72 | 30 | 2.2026 | 0.3575 | 0.1025 | 0.13 | 0.125 | 0.0625 | 0.0625 |
| No log | 15.72 | 32 | 2.1712 | 0.375 | 0.105 | 0.1375 | 0.125 | 0.0625 | 0.0625 |
| No log | 16.72 | 34 | 2.0999 | 0.4075 | 0.1 | 0.145 | 0.125 | 0.0625 | 0.0625 |
| No log | 17.72 | 36 | 2.0414 | 0.4225 | 0.1025 | 0.145 | 0.1275 | 0.0625 | 0.0625 |
| No log | 18.72 | 38 | 1.9981 | 0.4375 | 0.0975 | 0.1425 | 0.13 | 0.0625 | 0.0625 |
| No log | 19.72 | 40 | 1.9369 | 0.4575 | 0.1025 | 0.14 | 0.1425 | 0.0625 | 0.0625 |
| No log | 20.72 | 42 | 1.8903 | 0.4975 | 0.1025 | 0.14 | 0.145 | 0.0625 | 0.0625 |
| No log | 21.72 | 44 | 1.8242 | 0.525 | 0.1025 | 0.1425 | 0.15 | 0.0625 | 0.0625 |
| No log | 22.72 | 46 | 1.7520 | 0.5325 | 0.11 | 0.1475 | 0.1475 | 0.0625 | 0.0625 |
| No log | 23.72 | 48 | 1.7203 | 0.5525 | 0.1125 | 0.1475 | 0.1525 | 0.0625 | 0.0625 |
| No log | 24.72 | 50 | 1.6753 | 0.565 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 |
| No log | 25.72 | 52 | 1.6245 | 0.575 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 |
| No log | 26.72 | 54 | 1.5832 | 0.61 | 0.11 | 0.15 | 0.1525 | 0.0625 | 0.0625 |
| No log | 27.72 | 56 | 1.5404 | 0.61 | 0.11 | 0.1475 | 0.155 | 0.0625 | 0.0625 |
| No log | 28.72 | 58 | 1.4958 | 0.6125 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 |
| No log | 29.72 | 60 | 1.4613 | 0.6325 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 |
| No log | 30.72 | 62 | 1.4479 | 0.63 | 0.11 | 0.1525 | 0.16 | 0.0625 | 0.0625 |
| No log | 31.72 | 64 | 1.4101 | 0.64 | 0.1125 | 0.1525 | 0.165 | 0.0625 | 0.0625 |
| No log | 32.72 | 66 | 1.3699 | 0.655 | 0.1125 | 0.1525 | 0.1675 | 0.0625 | 0.0625 |
| No log | 33.72 | 68 | 1.3427 | 0.6725 | 0.115 | 0.1525 | 0.165 | 0.0625 | 0.0625 |
| No log | 34.72 | 70 | 1.3161 | 0.6825 | 0.115 | 0.1525 | 0.1625 | 0.0625 | 0.0625 |
| No log | 35.72 | 72 | 1.2896 | 0.7025 | 0.115 | 0.1525 | 0.1675 | 0.0625 | 0.0625 |
| No log | 36.72 | 74 | 1.2720 | 0.705 | 0.11 | 0.1525 | 0.185 | 0.0625 | 0.0625 |
| No log | 37.72 | 76 | 1.2471 | 0.71 | 0.11 | 0.1525 | 0.1775 | 0.0625 | 0.0625 |
| No log | 38.72 | 78 | 1.2307 | 0.71 | 0.11 | 0.155 | 0.1775 | 0.0625 | 0.0625 |
| No log | 39.72 | 80 | 1.2174 | 0.7175 | 0.1125 | 0.155 | 0.1825 | 0.0625 | 0.0625 |
| No log | 40.72 | 82 | 1.1991 | 0.705 | 0.1125 | 0.1525 | 0.1775 | 0.0625 | 0.0625 |
| No log | 41.72 | 84 | 1.1867 | 0.71 | 0.1175 | 0.1525 | 0.18 | 0.065 | 0.0625 |
| No log | 42.72 | 86 | 1.1764 | 0.7025 | 0.115 | 0.1525 | 0.18 | 0.0675 | 0.0625 |
| No log | 43.72 | 88 | 1.1601 | 0.715 | 0.115 | 0.1525 | 0.1825 | 0.0725 | 0.0625 |
| No log | 44.72 | 90 | 1.1410 | 0.7175 | 0.115 | 0.1525 | 0.18 | 0.075 | 0.0625 |
| No log | 45.72 | 92 | 1.1408 | 0.71 | 0.115 | 0.155 | 0.1825 | 0.075 | 0.0625 |
| No log | 46.72 | 94 | 1.1443 | 0.7075 | 0.115 | 0.155 | 0.1825 | 0.0775 | 0.0625 |
| No log | 47.72 | 96 | 1.1364 | 0.705 | 0.115 | 0.155 | 0.1775 | 0.0825 | 0.0625 |
| No log | 48.72 | 98 | 1.1251 | 0.71 | 0.115 | 0.155 | 0.175 | 0.085 | 0.0625 |
| No log | 49.72 | 100 | 1.1113 | 0.7175 | 0.115 | 0.155 | 0.1775 | 0.085 | 0.0625 |
| No log | 50.72 | 102 | 1.1040 | 0.7175 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 |
| No log | 51.72 | 104 | 1.0972 | 0.715 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 |
| No log | 52.72 | 106 | 1.0938 | 0.7175 | 0.115 | 0.1575 | 0.1825 | 0.0875 | 0.0625 |
| No log | 53.72 | 108 | 1.0931 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 |
| No log | 54.72 | 110 | 1.0887 | 0.7075 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 |
| No log | 55.72 | 112 | 1.0865 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 |
| No log | 56.72 | 114 | 1.0828 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 |
| No log | 57.72 | 116 | 1.0801 | 0.7075 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 |
| No log | 58.72 | 118 | 1.0786 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 |
| No log | 59.72 | 120 | 1.0783 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pszemraj/gpt2-medium-vaguely-human-dialogue | pszemraj | 2023-07-05T19:57:49Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"gpt",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
widget:
- text: |+
Do you like my new haircut?
person beta:
example_title: haircut
- text: |+
I love to learn new things.. are you willing to teach me something?
person beta:
example_title: teaching
- text: |+
What's your favorite animal? Mine is the dog?
person beta:
example_title: favorite
- text: |+
how much does it cost?
person beta:
example_title: money
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 3
do_sample: true
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pszemraj/gpt2-medium-vaguely-human-dialogue
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a parsed version of Wizard of Wikipedia. Because the batch size was so large, it learned a general understanding of which words make sense together but does not respond specifically to any prompt - sort of like an alien learning to imitate human words to convince others that it is human.
It achieves the following results on the evaluation set:
- Loss: 4.3281
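A generation sketch, added for illustration; the prompt layout follows the widget examples and the sampling settings mirror the inference parameters declared in the card metadata:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/gpt2-medium-vaguely-human-dialogue")

# Prompt format taken from the widget examples: a line of dialogue, then "person beta:".
prompt = "Do you like my new haircut?\nperson beta:\n"
result = generator(
    prompt,
    max_length=64,
    do_sample=True,
    top_p=0.85,
    top_k=10,
    no_repeat_ngram_size=3,
    repetition_penalty=2.1,
)
print(result[0]["generated_text"])
```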
## Model description
- A decent example of what happens when your batch size is too large and the global optimum does not reflect specific prompts / use cases.
## Intended uses & limitations
- there are no intended uses
## Training and evaluation data
- a parsed version of the wizard of Wikipedia dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 34.991 | 1.0 | 837 | 14.8359 |
| 12.2881 | 2.0 | 1674 | 9.375 |
| 8.5071 | 3.0 | 2511 | 7.2148 |
| 7.6031 | 4.0 | 3348 | 6.1758 |
| 6.4808 | 5.0 | 4185 | 5.5820 |
| 5.8562 | 6.0 | 5022 | 5.0977 |
| 5.6094 | 7.0 | 5859 | 4.8203 |
| 5.2591 | 8.0 | 6696 | 4.5977 |
| 5.0031 | 9.0 | 7533 | 4.4219 |
| 4.8837 | 10.0 | 8370 | 4.3281 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0 |
AWolters/ByT5_DutchSpellingNormalization | AWolters | 2023-07-05T19:53:42Z | 62 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"text2text generation",
"spelling normalization",
"19th-century Dutch",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-01T16:11:47Z | ---
language:
- nl
tags:
- text2text generation
- spelling normalization
- 19th-century Dutch
license: apache-2.0
---
# 19th Century Dutch Spelling Normalization
This repository contains a pretrained and finetuned model of the original __google/ByT5-small__.
This model has been pretrained and finetuned for the task of 19th-century Dutch spelling normalization.
We first pretrained the model with 2 million sentences from Dutch historical novels.
Afterward, we finetuned the model with a 10k dataset consisting of 19th-century Dutch sentences;
these sentences were automatically annotated by a rule-based system built for 19th-century Dutch spelling normalization (van Cranenburgh and van Noord, 2022).
The finetuned model is only available in TensorFlow format, but it can be converted to PyTorch.
The pretrained-only weights are available in PyTorch format in the directory __Pretrained_ByT5__; note that these weights have to be finetuned before use.
The train and validation sets used for finetuning are available in the main repository.
For further information about the model, please see the [GitHub](https://github.com/Awolters123/Master-Thesis) repository.
## How to use:
```
from transformers import AutoTokenizer, TFT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained('AWolters/ByT5_DutchSpellingNormalization')
model = TFT5ForConditionalGeneration.from_pretrained('AWolters/ByT5_DutchSpellingNormalization')
text = 'De menschen waren aan het werk.'
tokenized = tokenizer(text, return_tensors='tf')
prediction = model.generate(input_ids=tokenized['input_ids'],
attention_mask=tokenized['attention_mask'],
max_new_tokens=100)
print(tokenizer.decode(prediction[0], skip_special_tokens=True))
```
## Setup:
The model has been finetuned with the following (hyper)parameters values:
_Learning rate_: 5e-5
_Batch size_: 32
_Optimizer_: AdamW
_Epochs_: 30, with earlystopping
To further finetune the model, use the __T5Trainer.py__ script. |
khushpreet/eyedisease | khushpreet | 2023-07-05T19:51:05Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"medical",
"image-classification",
"arxiv:1910.09700",
"region:us"
] | image-classification | 2023-07-05T19:48:02Z | ---
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
tags:
- medical
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sebasvaron/my_awesome_model | sebasvaron | 2023-07-05T19:50:19Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-05T19:45:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/wsample.43a | aroot | 2023-07-05T19:38:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-05T18:34:22Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: wsample.43a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsample.43a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8306
- Bleu: 4.7146
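A hedged usage sketch (not in the original card): the language pair is not stated, so the source and target codes below are placeholders from the mBART-50 code scheme.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("aroot/wsample.43a")
tokenizer = MBart50TokenizerFast.from_pretrained("aroot/wsample.43a")

tokenizer.src_lang = "en_XX"  # placeholder source language code
encoded = tokenizer("Hello world", return_tensors="pt")

# forced_bos_token_id selects the target language; "fr_XX" is a placeholder.
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```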
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
aroot/wsample.32a | aroot | 2023-07-05T19:38:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-05T18:34:12Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: wsample.32a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsample.32a
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8284
- Bleu: 4.7412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
joydragon/Reinforce-Pixelcopter-PLE-v0 | joydragon | 2023-07-05T19:14:01Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-05T18:30:19Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.40 +/- 19.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
wizofavalon/distilgpt2-finetuned-wikitext2 | wizofavalon | 2023-07-05T19:09:13Z | 62 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-05T18:56:22Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: wizofavalon/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wizofavalon/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8588
- Validation Loss: 3.6766
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8588 | 3.6766 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
maubers/emily_yeppers | maubers | 2023-07-05T19:08:47Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-05T17:28:43Z | ## Overview
This repository contains Emily Yeppers, a GPT-Neo-based bot who likes to talk about very inappropriate things and how vital they are to the existence of our species (the truth, technically). The bot streams new content from specified subreddits and responds when certain target phrases are detected in comments and submissions, or when it is mentioned or directly replied to.
She is designed to function as a Reddit bot. See the Github page for more information. She WILL generate inappropriate content, as she was trained on comments posted in inappropriate subreddits.
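A minimal generation sketch, added for illustration and independent of the Reddit wrapper described below; note the content warning above.
```python
from transformers import pipeline

# Plain text generation with the underlying GPT-Neo checkpoint.
generator = pipeline("text-generation", model="maubers/emily_yeppers")
print(generator("Hello, Reddit!", max_length=50, do_sample=True)[0]["generated_text"])
```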
## Setup and Installation (for Reddit)
See https://github.com/maubers/emily_yeppers
|
jlpan/santacoder-finetuned-the-stack-bash | jlpan | 2023-07-05T19:03:38Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"custom_code",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-27T21:51:19Z | ---
license: bigcode-openrail-m
tags:
- generated_from_trainer
model-index:
- name: santacoder-finetuned-the-stack-bash
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-bash
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3654
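A hedged usage sketch (not part of the original card): SantaCoder checkpoints use a custom architecture, so `trust_remote_code=True` is required; the Bash prompt is illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jlpan/santacoder-finetuned-the-stack-bash"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

# Illustrative Bash completion prompt.
inputs = tokenizer("#!/bin/bash\n# print all files larger than 1 MB\n", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```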
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6322 | 0.25 | 500 | 1.5498 |
| 3.9791 | 0.5 | 1000 | 1.4721 |
| 0.3946 | 0.75 | 1500 | 1.3971 |
| 1.5232 | 1.0 | 2000 | 1.3654 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
konverner/due_eshop_21_multilabel | konverner | 2023-07-05T18:59:21Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-07-04T22:21:59Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# konverner/due_eshop_21_multilabel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("konverner/due_eshop_21_multilabel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
bk6000/dqn-SpaceInvadersNoFrameskip-v4 | bk6000 | 2023-07-05T18:50:44Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-05T18:09:35Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 337.50 +/- 114.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bk6000 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bk6000 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bk6000
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
alesthehuman/poca-SoccerTwos | alesthehuman | 2023-07-05T18:14:32Z | 24 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-07-05T18:13:38Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: alesthehuman/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Tubido/Taxi-v3-001 | Tubido | 2023-07-05T18:06:11Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-05T18:06:09Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-001
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Tubido/Taxi-v3-001", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2.2.2 | PraveenJesu | 2023-07-05T18:01:17Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-05T18:01:14Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
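A hedged loading sketch (not part of the original card): the base checkpoint below is inferred from the repository name and is an assumption.
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-medium"  # assumed from the repo name; not stated in the card
base = WhisperForConditionalGeneration.from_pretrained(base_id)
processor = WhisperProcessor.from_pretrained(base_id)

# Attach the LoRA adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base, "PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2.2.2")
```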
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_ent_g75 | jordyvl | 2023-07-05T17:52:32Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-05T15:38:04Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_ent_g75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-05_ent_g75
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2759
- Accuracy: 0.6825
- Exit 0 Accuracy: 0.11
- Exit 1 Accuracy: 0.155
- Exit 2 Accuracy: 0.345
- Exit 3 Accuracy: 0.425
- Exit 4 Accuracy: 0.5225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7601 | 0.1075 | 0.0825 | 0.0675 | 0.1025 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7328 | 0.1125 | 0.07 | 0.065 | 0.1225 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6968 | 0.13 | 0.075 | 0.06 | 0.1325 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6594 | 0.18 | 0.075 | 0.06 | 0.1175 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6206 | 0.1925 | 0.085 | 0.0575 | 0.11 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5710 | 0.2125 | 0.09 | 0.08 | 0.115 | 0.0625 | 0.0625 |
| No log | 6.72 | 14 | 2.5334 | 0.2275 | 0.095 | 0.08 | 0.12 | 0.0575 | 0.0625 |
| No log | 7.72 | 16 | 2.5094 | 0.245 | 0.095 | 0.095 | 0.135 | 0.0725 | 0.0625 |
| No log | 8.72 | 18 | 2.4631 | 0.2825 | 0.095 | 0.0975 | 0.17 | 0.0925 | 0.065 |
| No log | 9.72 | 20 | 2.4152 | 0.3025 | 0.1 | 0.1275 | 0.205 | 0.1075 | 0.0625 |
| No log | 10.72 | 22 | 2.3737 | 0.325 | 0.1075 | 0.1225 | 0.24 | 0.12 | 0.065 |
| No log | 11.72 | 24 | 2.3302 | 0.3175 | 0.1125 | 0.1175 | 0.2375 | 0.1475 | 0.0675 |
| No log | 12.72 | 26 | 2.2746 | 0.34 | 0.1125 | 0.125 | 0.255 | 0.155 | 0.095 |
| No log | 13.72 | 28 | 2.2527 | 0.35 | 0.1125 | 0.125 | 0.2625 | 0.175 | 0.095 |
| No log | 14.72 | 30 | 2.2101 | 0.3425 | 0.1075 | 0.13 | 0.27 | 0.2125 | 0.095 |
| No log | 15.72 | 32 | 2.1811 | 0.355 | 0.1075 | 0.14 | 0.29 | 0.24 | 0.095 |
| No log | 16.72 | 34 | 2.1368 | 0.38 | 0.105 | 0.145 | 0.305 | 0.245 | 0.0925 |
| No log | 17.72 | 36 | 2.0855 | 0.395 | 0.1075 | 0.145 | 0.3175 | 0.2475 | 0.095 |
| No log | 18.72 | 38 | 2.0559 | 0.4 | 0.1125 | 0.145 | 0.305 | 0.255 | 0.1025 |
| No log | 19.72 | 40 | 2.0277 | 0.41 | 0.115 | 0.145 | 0.295 | 0.28 | 0.105 |
| No log | 20.72 | 42 | 1.9746 | 0.445 | 0.12 | 0.145 | 0.28 | 0.2875 | 0.1025 |
| No log | 21.72 | 44 | 1.9346 | 0.4525 | 0.12 | 0.145 | 0.265 | 0.25 | 0.1025 |
| No log | 22.72 | 46 | 1.8926 | 0.4925 | 0.12 | 0.145 | 0.255 | 0.25 | 0.11 |
| No log | 23.72 | 48 | 1.8581 | 0.5025 | 0.115 | 0.1475 | 0.26 | 0.3 | 0.1075 |
| No log | 24.72 | 50 | 1.8403 | 0.4975 | 0.11 | 0.1475 | 0.2725 | 0.3325 | 0.11 |
| No log | 25.72 | 52 | 1.8162 | 0.5 | 0.1125 | 0.1475 | 0.2875 | 0.3575 | 0.1125 |
| No log | 26.72 | 54 | 1.7562 | 0.5475 | 0.115 | 0.1475 | 0.295 | 0.3575 | 0.115 |
| No log | 27.72 | 56 | 1.7205 | 0.5725 | 0.1175 | 0.15 | 0.295 | 0.37 | 0.115 |
| No log | 28.72 | 58 | 1.7041 | 0.555 | 0.1175 | 0.15 | 0.295 | 0.36 | 0.11 |
| No log | 29.72 | 60 | 1.7018 | 0.5525 | 0.12 | 0.15 | 0.305 | 0.36 | 0.1125 |
| No log | 30.72 | 62 | 1.6532 | 0.58 | 0.12 | 0.15 | 0.3025 | 0.3725 | 0.1125 |
| No log | 31.72 | 64 | 1.6218 | 0.58 | 0.12 | 0.15 | 0.3125 | 0.3725 | 0.1175 |
| No log | 32.72 | 66 | 1.5888 | 0.59 | 0.115 | 0.1475 | 0.32 | 0.38 | 0.1325 |
| No log | 33.72 | 68 | 1.5778 | 0.6 | 0.115 | 0.1475 | 0.315 | 0.3875 | 0.1425 |
| No log | 34.72 | 70 | 1.5500 | 0.59 | 0.1225 | 0.15 | 0.315 | 0.3875 | 0.155 |
| No log | 35.72 | 72 | 1.5216 | 0.61 | 0.13 | 0.15 | 0.31 | 0.3875 | 0.17 |
| No log | 36.72 | 74 | 1.5024 | 0.6175 | 0.1275 | 0.15 | 0.3075 | 0.4125 | 0.1675 |
| No log | 37.72 | 76 | 1.4787 | 0.615 | 0.12 | 0.1525 | 0.32 | 0.4025 | 0.165 |
| No log | 38.72 | 78 | 1.4635 | 0.6175 | 0.1175 | 0.1525 | 0.325 | 0.4125 | 0.1625 |
| No log | 39.72 | 80 | 1.4455 | 0.6225 | 0.12 | 0.155 | 0.3225 | 0.4225 | 0.165 |
| No log | 40.72 | 82 | 1.4304 | 0.625 | 0.12 | 0.155 | 0.33 | 0.425 | 0.1675 |
| No log | 41.72 | 84 | 1.4170 | 0.6425 | 0.1175 | 0.155 | 0.3325 | 0.425 | 0.21 |
| No log | 42.72 | 86 | 1.4052 | 0.64 | 0.1175 | 0.155 | 0.335 | 0.4275 | 0.245 |
| No log | 43.72 | 88 | 1.3965 | 0.6425 | 0.1125 | 0.155 | 0.34 | 0.4125 | 0.2775 |
| No log | 44.72 | 90 | 1.3766 | 0.645 | 0.1125 | 0.155 | 0.3425 | 0.4075 | 0.315 |
| No log | 45.72 | 92 | 1.3611 | 0.6575 | 0.11 | 0.155 | 0.345 | 0.41 | 0.33 |
| No log | 46.72 | 94 | 1.3513 | 0.6575 | 0.11 | 0.155 | 0.3425 | 0.4175 | 0.34 |
| No log | 47.72 | 96 | 1.3520 | 0.665 | 0.11 | 0.155 | 0.3425 | 0.4275 | 0.36 |
| No log | 48.72 | 98 | 1.3373 | 0.67 | 0.11 | 0.155 | 0.3425 | 0.425 | 0.3875 |
| No log | 49.72 | 100 | 1.3213 | 0.6775 | 0.11 | 0.155 | 0.3425 | 0.4175 | 0.405 |
| No log | 50.72 | 102 | 1.3124 | 0.6825 | 0.11 | 0.155 | 0.3425 | 0.41 | 0.445 |
| No log | 51.72 | 104 | 1.3080 | 0.68 | 0.1075 | 0.155 | 0.34 | 0.41 | 0.455 |
| No log | 52.72 | 106 | 1.3037 | 0.675 | 0.105 | 0.1575 | 0.3425 | 0.4175 | 0.4775 |
| No log | 53.72 | 108 | 1.2987 | 0.6825 | 0.11 | 0.1575 | 0.345 | 0.425 | 0.4875 |
| No log | 54.72 | 110 | 1.2943 | 0.6775 | 0.1075 | 0.1575 | 0.3475 | 0.425 | 0.5025 |
| No log | 55.72 | 112 | 1.2889 | 0.68 | 0.1075 | 0.1575 | 0.3475 | 0.425 | 0.51 |
| No log | 56.72 | 114 | 1.2829 | 0.68 | 0.1075 | 0.16 | 0.3475 | 0.4225 | 0.5175 |
| No log | 57.72 | 116 | 1.2793 | 0.68 | 0.1075 | 0.155 | 0.3475 | 0.4225 | 0.5225 |
| No log | 58.72 | 118 | 1.2769 | 0.68 | 0.11 | 0.155 | 0.345 | 0.4225 | 0.5225 |
| No log | 59.72 | 120 | 1.2759 | 0.6825 | 0.11 | 0.155 | 0.345 | 0.425 | 0.5225 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jorgeortizfuentes/nominal-groups-recognition-bert-base-spanish-wwm-cased | jorgeortizfuentes | 2023-07-05T17:44:00Z | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:jorgeortizfuentes/spanish_nominal_groups_conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-05T06:42:53Z | ---
language:
- es
tags:
- generated_from_trainer
datasets:
- jorgeortizfuentes/spanish_nominal_groups_conll2003
model-index:
- name: nominal-groups-recognition-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the jorgeortizfuentes/spanish_nominal_groups_conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3568
- Ng Precision: 0.7280
- Ng Recall: 0.7767
- Ng F1: 0.7516
- Ng Number: 3198
- Overall Precision: 0.7280
- Overall Recall: 0.7767
- Overall F1: 0.7516
- Overall Accuracy: 0.8992
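A minimal usage sketch, added for illustration: the checkpoint can be loaded with the token-classification pipeline, grouping subword pieces into nominal-group spans.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jorgeortizfuentes/nominal-groups-recognition-bert-base-spanish-wwm-cased",
    aggregation_strategy="simple",  # merge subword pieces into span-level predictions
)
print(ner("La casa de mi abuela está en el centro de la ciudad."))
```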
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ng Precision | Ng Recall | Ng F1 | Ng Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:---------:|:------:|:---------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3955 | 1.0 | 228 | 0.2778 | 0.7129 | 0.7492 | 0.7306 | 3198 | 0.7129 | 0.7492 | 0.7306 | 0.8924 |
| 0.2186 | 2.0 | 456 | 0.2763 | 0.7318 | 0.7711 | 0.7509 | 3198 | 0.7318 | 0.7711 | 0.7509 | 0.8990 |
| 0.1586 | 3.0 | 684 | 0.2960 | 0.7274 | 0.7733 | 0.7496 | 3198 | 0.7274 | 0.7733 | 0.7496 | 0.8992 |
| 0.119 | 4.0 | 912 | 0.3330 | 0.7283 | 0.7727 | 0.7498 | 3198 | 0.7283 | 0.7727 | 0.7498 | 0.8982 |
| 0.0943 | 5.0 | 1140 | 0.3568 | 0.7280 | 0.7767 | 0.7516 | 3198 | 0.7280 | 0.7767 | 0.7516 | 0.8992 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-cl-rarity-11-135k-mod-datasets-rarity1-root3 | NasimB | 2023-07-05T17:36:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-05T15:21:37Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cl-rarity-11-135k-mod-datasets-rarity1-root3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cl-rarity-11-135k-mod-datasets-rarity1-root3
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7842
## Model description
More information needed
## Intended uses & limitations
More information needed
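A minimal generation sketch (the prompt and sampling parameters are illustrative assumptions):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-cl-rarity-11-135k-mod-datasets-rarity1-root3",
)

# Sample a short continuation from the fine-tuned GPT-2.
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```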
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.6744 | 0.05 | 500 | 5.8476 |
| 5.4179 | 0.11 | 1000 | 5.4438 |
| 5.0636 | 0.16 | 1500 | 5.2289 |
| 4.8263 | 0.21 | 2000 | 5.0980 |
| 4.6688 | 0.27 | 2500 | 5.0253 |
| 4.5377 | 0.32 | 3000 | 4.9573 |
| 4.427 | 0.37 | 3500 | 4.9024 |
| 4.3275 | 0.43 | 4000 | 4.8730 |
| 4.2279 | 0.48 | 4500 | 4.8377 |
| 4.1432 | 0.53 | 5000 | 4.8105 |
| 4.0517 | 0.59 | 5500 | 4.7917 |
| 3.9751 | 0.64 | 6000 | 4.7647 |
| 3.893 | 0.69 | 6500 | 4.7603 |
| 3.8238 | 0.74 | 7000 | 4.7474 |
| 3.771 | 0.8 | 7500 | 4.7374 |
| 3.7292 | 0.85 | 8000 | 4.7341 |
| 3.6984 | 0.9 | 8500 | 4.7283 |
| 3.6812 | 0.96 | 9000 | 4.7263 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Tubido/q-FrozenLake-v1-4x4-noSlippery | Tubido | 2023-07-05T17:28:01Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-05T17:27:59Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the helper from the Hugging Face Deep RL course that downloads
# and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="Tubido/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
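A short follow-up sketch of acting greedily with the loaded Q-table (the `"qtable"` key and the Gymnasium-style `reset`/`step` API are assumptions based on the standard course template):

```python
import numpy as np

# Roll out one episode, always taking the highest-value action from the Q-table.
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```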
|
MysticShadow427/roberta-meld | MysticShadow427 | 2023-07-05T17:21:14Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2023-07-05T17:19:52Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-06 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
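For reference, the optimizer configuration above corresponds roughly to the following Keras setup (a sketch, assuming the standard `tf.keras` Adam optimizer):

```python
import tensorflow as tf

# Reconstruction of the serialized optimizer settings listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-5,  # stored as 9.999999747378752e-06 in float32
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```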
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
ddoc/def | ddoc | 2023-07-05T17:04:15Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-05T17:03:23Z |
# Deforum Stable Diffusion — official extension for AUTOMATIC1111's webui
<p align="left">
<a href="https://github.com/deforum-art/sd-webui-deforum/commits"><img alt="Last Commit" src="https://img.shields.io/github/last-commit/deforum-art/deforum-for-automatic1111-webui"></a>
<a href="https://github.com/deforum-art/sd-webui-deforum/issues"><img alt="GitHub issues" src="https://img.shields.io/github/issues/deforum-art/deforum-for-automatic1111-webui"></a>
<a href="https://github.com/deforum-art/sd-webui-deforum/stargazers"><img alt="GitHub stars" src="https://img.shields.io/github/stars/deforum-art/deforum-for-automatic1111-webui"></a>
<a href="https://github.com/deforum-art/sd-webui-deforum/network"><img alt="GitHub forks" src="https://img.shields.io/github/forks/deforum-art/deforum-for-automatic1111-webui"></a>
</p>
## Need help? See our [FAQ](https://github.com/deforum-art/sd-webui-deforum/wiki/FAQ-&-Troubleshooting)
## Getting Started
1. Install [AUTOMATIC1111's webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui/).
2. There are two ways: either clone the repo into the `extensions` directory via the git command line launched within the `stable-diffusion-webui` folder
```sh
git clone https://github.com/deforum-art/sd-webui-deforum extensions/deforum
```
Or download this repository, locate the `extensions` folder within your WebUI installation, create a folder named `deforum` and put the contents of the downloaded directory inside of it. Then restart WebUI.
3. Open the webui, find the Deforum tab at the top of the page.
4. Enter the animation settings. Refer to [this general guide](https://docs.google.com/document/d/1pEobUknMFMkn8F5TMsv8qRzamXX_75BShMMXV8IFslI/edit) and [this guide to math keyframing functions in Deforum](https://docs.google.com/document/d/1pfW1PwbDIuW0cv-dnuyYj1UzPqe23BlSLTJsqazffXM/edit?usp=sharing). However, **in this version prompt weights less than zero don't work just like in the original Deforum!** Split the positive and the negative prompt in the json section using the --neg argument like this "apple:\`where(cos(t)>=0, cos(t), 0)\`, snow --neg strawberry:\`where(cos(t)<0, -cos(t), 0)\`"
5. To view animation frames as they're being made, without waiting for the completion of an animation, go to the 'Settings' tab and set the value of this toolbar **above zero**. Warning: it may slow down the generation process.

6. Run the script and see if you got it working or even got something. **In 3D mode a large delay is expected at first** as the script loads the depth models. In the end, using the default settings the whole thing should consume 6.4 GB of VRAM at 3D mode peaks, and no more than 3.8 GB of VRAM in 3D mode if you launch the webui with the '--lowvram' command line argument.
7. After the generation process is completed, click the button with the self-describing name to show the video or gif result right in the GUI!
8. Join our Discord where you can post generated stuff, ask questions and more: https://discord.gg/deforum. <br>
* There's also the 'Issues' tab in the repo, for well... reporting issues ;)
9. Profit!
## Known issues
* This port is not fully backward-compatible with the notebook and the local version both due to the changes in how AUTOMATIC1111's webui handles Stable Diffusion models and the changes in this script to get it to work in the new environment. *Expect* that you may not get exactly the same result or that the thing may break down because of the older settings.
## Screenshots
Amazing raw Deforum animation by [Pxl.Pshr](https://www.instagram.com/pxl.pshr):
* Turn Audio ON!
(Audio credits: SKRILLEX, FRED AGAIN & FLOWDAN - RUMBLE (PHACE'S DNB FLIP))
https://user-images.githubusercontent.com/121192995/224450647-39529b28-be04-4871-bb7a-faf7afda2ef2.mp4
Setting file of that video: [here](https://github.com/deforum-art/sd-webui-deforum/files/11353167/PxlPshrWinningAnimationSettings.txt).
<br>
Main extension tab:

Keyframes tab:

|
oknashar/distilbert-base-uncased-finetuned-emotion | oknashar | 2023-07-05T16:42:28Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-05T15:20:24Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the mteb/emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1803
- Accuracy: 0.94
- F1: 0.9400
## Model description
More information needed
## Intended uses & limitations
More information needed
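A minimal usage sketch with the `transformers` pipeline (the example sentence is an illustrative assumption):

```python
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="oknashar/distilbert-base-uncased-finetuned-emotion",
)

print(emotion_classifier("I am so happy this finally works!"))
```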
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5017 | 1.0 | 250 | 0.2116 | 0.9295 | 0.9305 |
| 0.1763 | 2.0 | 500 | 0.1617 | 0.936 | 0.9369 |
| 0.1267 | 3.0 | 750 | 0.1492 | 0.9385 | 0.9386 |
| 0.0979 | 4.0 | 1000 | 0.1495 | 0.9395 | 0.9392 |
| 0.0787 | 5.0 | 1250 | 0.1602 | 0.935 | 0.9349 |
| 0.067 | 6.0 | 1500 | 0.1588 | 0.9405 | 0.9401 |
| 0.0557 | 7.0 | 1750 | 0.1675 | 0.9415 | 0.9413 |
| 0.0452 | 8.0 | 2000 | 0.1764 | 0.937 | 0.9365 |
| 0.0375 | 9.0 | 2250 | 0.1765 | 0.9405 | 0.9406 |
| 0.0337 | 10.0 | 2500 | 0.1803 | 0.94 | 0.9400 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3 |
adisrini11/AIE-Assessment | adisrini11 | 2023-07-05T16:39:39Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-18T21:32:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: AIE-Assessment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: test
args: emotion
metrics:
- name: Accuracy
type: accuracy
value: 0.800844475721323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AIE-Assessment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5687
- Accuracy: 0.8008
## Model description
More information needed
## Intended uses & limitations
More information needed
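A minimal usage sketch loading the model directly (the example tweet is an illustrative assumption; the label names depend on how the model config was saved):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("adisrini11/AIE-Assessment")
model = AutoModelForSequenceClassification.from_pretrained("adisrini11/AIE-Assessment")

inputs = tokenizer("I can't believe we won the match!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(predicted_id, predicted_id))
```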
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 204 | 0.6383 | 0.7910 |
| No log | 2.0 | 408 | 0.5687 | 0.8008 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|