modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
sequence
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
susnato/distilhubert-finetuned-gtzan
susnato
2023-07-06T12:40:52Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-01T16:32:36Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.87 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5542 - Accuracy: 0.87 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0241 | 1.0 | 112 | 1.9155 | 0.4 | | 1.5443 | 2.0 | 225 | 1.2937 | 0.65 | | 1.1134 | 3.0 | 337 | 0.9665 | 0.71 | | 0.7215 | 4.0 | 450 | 0.8201 | 0.74 | | 0.4679 | 5.0 | 562 | 0.7616 | 0.75 | | 0.3626 | 6.0 | 675 | 0.5217 | 0.85 | | 0.1775 | 7.0 | 787 | 0.6748 | 0.81 | | 0.1642 | 8.0 | 900 | 0.5287 | 0.86 | | 0.0772 | 9.0 | 1012 | 0.5632 | 0.84 | | 0.0478 | 10.0 | 1125 | 0.5576 | 0.85 | | 0.0662 | 11.0 | 1237 | 0.5455 | 0.88 | | 0.0446 | 11.95 | 1344 | 0.5542 | 0.87 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
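The card above documents training but not inference. A minimal inference sketch for this audio-classification checkpoint, assuming the `transformers` pipeline API and a placeholder audio file (ffmpeg or a raw waveform is needed for decoding):

```python
from transformers import pipeline

# Load the fine-tuned DistilHuBERT genre classifier from the Hub
classifier = pipeline(
    "audio-classification",
    model="susnato/distilhubert-finetuned-gtzan",
)

# "song.wav" is a placeholder path; the pipeline resamples the audio
# to the model's expected sampling rate before classification
predictions = classifier("song.wav")
print(predictions)  # list of {"score": ..., "label": genre} dicts
```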
IslemTouati/french_model
IslemTouati
2023-07-06T12:29:54Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "camembert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-05-02T19:43:39Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # IslemTouati/french_model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("IslemTouati/french_model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
oliverguhr/spelling-correction-german-base
oliverguhr
2023-07-06T12:27:25Z
872
15
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-05-24T07:11:52Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-base-spelling-de results: [] widget: - text: "correct: ein dransformer isd ein mthode mit der ein compuder eine volge von zeichn übersetz" example_title: "1" - text: "correct: Dresten ist di Landeshaubtstadt des Freistaats Saksens und die zweid größte stadt des landel" example_title: "2" --- ## Model description This is a proof-of-concept spelling correction model for German. The model should fix your typos and punctuation. If you would like to participate in the development or run your own experiments, have a look at [the GitHub repo](https://github.com/oliverguhr/spelling). Model Input: > ein dransformer isd ein mthode mit der ein compuder eine volge von zeichn in eine andrere folge von zeichen übersetzn kann dies kan zb genutzt werdne um text von einer spracge in eine andrere zu übersetzen Model Output: > Ein Transformer ist eine Methode, mit der ein Computer eine Folge von Zeichen in eine andere Folge von Zeichen übersetzen kann dies kann z.B. genutzt werden, um Texte von einer Sprache in eine andere zu übersetzen. ## Intended uses & limitations This is a work in progress; be aware that the model can produce artefacts. You can test the model using the pipeline interface: ```python from transformers import pipeline fix_spelling = pipeline("text2text-generation",model="oliverguhr/spelling-correction-german-base") print(fix_spelling("correct: das idst ein neuZr test",max_length=256)) ```
Shahzebbb/ppo-LunarLander-v2
Shahzebbb
2023-07-06T12:23:42Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T12:23:28Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.61 +/- 16.28 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
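The usage section of this card is left as a TODO. A sketch of loading and rolling out the agent with `huggingface_sb3` and `stable-baselines3`, assuming the checkpoint is stored as `ppo-LunarLander-v2.zip` (the filename is a guess based on the usual SB3 upload convention) and that `gymnasium[box2d]` is installed:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint; the filename is an assumption, check the repo files
checkpoint = load_from_hub(
    repo_id="Shahzebbb/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```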
iulusoy/de_test_pipeline
iulusoy
2023-07-06T12:18:19Z
3
0
spacy
[ "spacy", "de", "license:mit", "region:us" ]
null
2023-03-24T09:52:13Z
--- tags: - spacy language: - de license: mit model-index: - name: de_test_pipeline results: [] --- A test pipeline for ModelManager testing purposes | Feature | Description | | --- | --- | | **Name** | `de_test_pipeline` | | **Version** | `0.1.0` | | **spaCy** | `>=3.5.3,<3.6.0` | | **Default Pipeline** | `tok2vec`, `spancat` | | **Components** | `tok2vec`, `spancat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | `MIT` | | **Author** | [Liam Keegan](https://github.com/ssciwr/moralization) | ### Label Scheme <details> <summary>View label scheme (24 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`spancat`** | `Moralisierung explizit`, `Care`, `Institution`, `Forderer:in`, `Keine Moralisierung`, `Darstellung`, `Individuum`, `Neutral`, `Fairness`, `soziale Gruppe`, `Cheating`, `Appell`, `explizit`, `Moralisierung`, `Adresassat:in`, `Own Group`, `Moralisierung interpretativ`, `Benefizient:in`, `Other Group`, `Menschen`, `Oppression`, `Liberty`, `Harm`, `Kein Bezug` | </details> ### Accuracy | Type | Score | | --- | --- | | `SPANS_SC_F` | 0.04 | | `SPANS_SC_P` | 0.02 | | `SPANS_SC_R` | 36.36 | | `TOK2VEC_LOSS` | 50495.68 | | `SPANCAT_LOSS` | 1149148.34 |
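The card lists the pipeline components but no loading example. One way to use a Hub-hosted spaCy pipeline is to install its packaged wheel and load it by name; the wheel URL below follows the usual `spacy-huggingface-hub` naming and is an assumption, as is the default `sc` spans key for the `spancat` component:

```python
# Assumed install step (check the repo's file listing for the exact wheel name):
#   pip install "https://huggingface.co/iulusoy/de_test_pipeline/resolve/main/de_test_pipeline-any-py3-none-any.whl"
import spacy

nlp = spacy.load("de_test_pipeline")
doc = nlp("Das ist ein kurzer deutscher Beispielsatz.")

# spancat predictions are stored in doc.spans under the configured key ("sc" by default)
for key, spans in doc.spans.items():
    for span in spans:
        print(key, span.text, span.label_)
```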
Vtmpas/q-Taxi-v3-qeq
Vtmpas
2023-07-06T12:13:16Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T12:13:14Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-qeq results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Vtmpas/q-Taxi-v3-qeq", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
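The `load_from_hub` helper in the snippet above comes from the Deep RL course notebook and is not a packaged library function. A self-contained equivalent using `huggingface_hub`, assuming the pickle holds a dict with `qtable` and `env_id` keys as in the course template:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download the pickled Q-table from the Hub
path = hf_hub_download(repo_id="Vtmpas/q-Taxi-v3-qeq", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed dict with "qtable" and "env_id" entries

# Greedy rollout of the tabular policy
env = gym.make(model["env_id"])
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```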
Vtmpas/q-FrozenLake-v1-4x4-noSlippery
Vtmpas
2023-07-06T12:10:55Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T12:10:53Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Vtmpas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Srikanthr2/whisper-medium-sanskasr-37000-V1
Srikanthr2
2023-07-06T12:10:45Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "sa", "dataset:addy88/sanskrit-asr-84", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-21T06:37:43Z
--- language: - sa license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - addy88/sanskrit-asr-84 model-index: - name: whisper-medium-sanskasr-37000-V1-upload results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-sanskasr-37000-V1-upload This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the addy88/sanskrit-asr-84 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
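A minimal transcription sketch for this Whisper fine-tune, assuming the `transformers` ASR pipeline and a placeholder audio file:

```python
from transformers import pipeline

# Load the Whisper checkpoint fine-tuned on Sanskrit speech
asr = pipeline(
    "automatic-speech-recognition",
    model="Srikanthr2/whisper-medium-sanskasr-37000-V1",
)

# "recording.wav" is a placeholder; chunking handles clips longer than 30 seconds
result = asr("recording.wav", chunk_length_s=30)
print(result["text"])
```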
maxkskhor/ppo-Pyramids
maxkskhor
2023-07-06T11:59:01Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-06T11:58:54Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: maxkskhor/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Bugsys0302/undressav05
Bugsys0302
2023-07-06T11:56:51Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T11:52:15Z
--- license: creativeml-openrail-m ---
anujsahani01/finetuned_AI4Bharat_en_mr
anujsahani01
2023-07-06T11:55:30Z
108
0
transformers
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-06T01:54:41Z
--- license: mit tags: - generated_from_trainer model-index: - name: finetuned_AI4Bharat_en_mr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_AI4Bharat_en_mr This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 8000 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
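The card itself gives no usage snippet. Since the repo is tagged `mbart`/`text2text-generation`, a hedged English-to-Marathi generation sketch with the generic auto classes could look like the following; depending on how the tokenizer was set up, explicit source/target language tokens may additionally be required:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "anujsahani01/finetuned_AI4Bharat_en_mr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative English input; the fine-tune targets Marathi output
inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```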
HasinMDG/UVC-Deberta-baseline
HasinMDG
2023-07-06T11:55:28Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "deberta-v2", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-06T11:55:11Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/UVC-Deberta-baseline This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/UVC-Deberta-baseline") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
HeshamMamdouh/arabart-v2-sum-fine-tuned
HeshamMamdouh
2023-07-06T11:48:04Z
61
0
transformers
[ "transformers", "tf", "mbart", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-06T11:47:33Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: arabart-v2-sum-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # arabart-v2-sum-fine-tuned This model is a fine-tuned version of [abdalrahmanshahrour/AraBART-summ](https://huggingface.co/abdalrahmanshahrour/AraBART-summ) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0702 - Validation Loss: 2.6760 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.9033 | 2.5054 | 0 | | 2.8269 | 2.5495 | 1 | | 2.7380 | 2.5277 | 2 | | 2.6357 | 2.5637 | 3 | | 2.6693 | 2.5888 | 4 | | 2.6560 | 2.5696 | 5 | | 2.5431 | 2.5702 | 6 | | 2.6064 | 2.5789 | 7 | | 2.5252 | 2.5570 | 8 | | 2.4553 | 2.6162 | 9 | | 2.4159 | 2.5925 | 10 | | 2.4123 | 2.6065 | 11 | | 2.3180 | 2.6371 | 12 | | 2.2778 | 2.6259 | 13 | | 2.2570 | 2.6121 | 14 | | 2.1909 | 2.6489 | 15 | | 2.1705 | 2.6390 | 16 | | 2.1008 | 2.6628 | 17 | | 2.0684 | 2.6929 | 18 | | 2.0702 | 2.6760 | 19 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.11.0 - Datasets 2.13.1 - Tokenizers 0.12.1
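The repo ships TensorFlow weights (`tf` tag). A hedged inference sketch using the text2text-generation pipeline with the TF backend; the Arabic input is only a placeholder:

```python
from transformers import pipeline

# framework="tf" selects the TensorFlow weights published in the repo
summarizer = pipeline(
    "text2text-generation",
    model="HeshamMamdouh/arabart-v2-sum-fine-tuned",
    framework="tf",
)

# Placeholder input: "put the Arabic text to be summarized here"
print(summarizer("ضع هنا النص العربي المراد تلخيصه", max_length=128))
```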
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent
jordyvl
2023-07-06T11:45:36Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T09:34:33Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_ent This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3493 - Accuracy: 0.645 - Exit 0 Accuracy: 0.1125 - Exit 1 Accuracy: 0.155 - Exit 2 Accuracy: 0.3775 - Exit 3 Accuracy: 0.5225 - Exit 4 Accuracy: 0.5875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 288 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.72 | 2 | 2.7604 | 0.1075 | 0.09 | 0.0675 | 0.1075 | 0.0625 | 0.0625 | | No log | 1.72 | 4 | 2.7329 | 0.1125 | 0.0725 | 0.065 | 0.13 | 0.0625 | 0.0625 | | No log | 2.72 | 6 | 2.6989 | 0.1325 | 0.08 | 0.06 | 0.1375 | 0.0625 | 0.0625 | | No log | 3.72 | 8 | 2.6608 | 0.17 | 0.08 | 0.0575 | 0.1375 | 0.0625 | 0.0625 | | No log | 4.72 | 10 | 2.6201 | 0.19 | 0.09 | 0.0525 | 0.1175 | 0.0625 | 0.0625 | | No log | 5.72 | 12 | 2.5813 | 0.2175 | 0.095 | 0.0825 | 0.1125 | 0.0675 | 0.0625 | | No log | 6.72 | 14 | 2.5503 | 0.215 | 0.0925 | 0.08 | 0.12 | 0.0825 | 0.0625 | | No log | 7.72 | 16 | 2.5289 | 0.23 | 0.09 | 0.0925 | 0.15 | 0.1025 | 0.0625 | | No log | 8.72 | 18 | 2.5344 | 0.245 | 0.0975 | 0.1 | 0.165 | 0.105 | 0.0675 | | No log | 9.72 | 20 | 2.5533 | 0.265 | 0.1 | 0.0975 | 0.185 | 0.09 | 0.1025 | | No log | 10.72 | 22 | 2.4567 | 0.29 | 0.0975 | 0.13 | 0.2 | 0.1 | 0.095 | | No log | 11.72 | 24 | 2.3982 | 0.3 | 0.1 | 0.12 | 0.205 | 0.1125 | 0.09 | | No log | 12.72 | 26 | 2.3722 | 0.3075 | 0.1025 | 0.1175 | 0.195 | 0.13 | 0.0825 | | No log | 13.72 | 28 | 2.3546 | 0.31 | 0.105 | 0.1225 | 0.1825 | 0.1425 | 0.085 | | No log | 14.72 | 30 | 2.3287 | 0.315 | 0.11 | 0.125 | 0.195 | 0.1775 | 0.095 | | No log | 15.72 | 32 | 2.2970 | 0.32 | 0.1075 | 0.13 | 0.2175 | 0.2275 | 0.1 | | No log | 16.72 | 34 | 2.2763 | 0.325 | 0.1075 | 0.14 | 0.225 | 0.2375 | 0.1075 | | No log | 17.72 | 36 | 2.3456 | 0.3075 | 0.105 | 0.14 | 0.2375 | 0.18 | 0.1275 | | No log | 18.72 | 38 | 2.3160 | 0.325 | 0.115 | 0.14 | 0.24 | 0.175 | 0.16 | | No log | 19.72 | 40 | 2.2257 | 0.33 | 0.1225 | 0.14 | 0.245 | 0.225 | 0.17 | | No log | 20.72 | 42 | 2.1769 | 0.355 | 0.125 | 0.1425 | 0.26 | 0.2725 | 0.135 | | No log | 21.72 | 44 | 2.1449 | 0.355 | 0.125 | 0.14 | 0.2725 | 0.3125 | 0.1175 | | No log | 22.72 | 46 | 2.1200 | 0.3675 | 0.125 | 0.1425 | 0.27 | 0.3125 | 0.115 | | No log | 23.72 | 48 | 2.0995 | 0.3725 | 0.1225 | 
0.1425 | 0.2625 | 0.31 | 0.115 | | No log | 24.72 | 50 | 2.0769 | 0.3825 | 0.12 | 0.1425 | 0.2725 | 0.3375 | 0.1125 | | No log | 25.72 | 52 | 2.0473 | 0.3975 | 0.115 | 0.14 | 0.285 | 0.335 | 0.1325 | | No log | 26.72 | 54 | 2.0094 | 0.4075 | 0.115 | 0.14 | 0.2925 | 0.3075 | 0.1525 | | No log | 27.72 | 56 | 1.9660 | 0.435 | 0.1175 | 0.14 | 0.29 | 0.2725 | 0.21 | | No log | 28.72 | 58 | 1.9271 | 0.46 | 0.11 | 0.1425 | 0.3025 | 0.27 | 0.235 | | No log | 29.72 | 60 | 1.8910 | 0.4825 | 0.11 | 0.145 | 0.305 | 0.27 | 0.2525 | | No log | 30.72 | 62 | 1.8619 | 0.475 | 0.11 | 0.1475 | 0.3 | 0.2875 | 0.27 | | No log | 31.72 | 64 | 1.8215 | 0.5025 | 0.11 | 0.15 | 0.3025 | 0.305 | 0.325 | | No log | 32.72 | 66 | 1.7845 | 0.52 | 0.1125 | 0.15 | 0.3175 | 0.3225 | 0.3625 | | No log | 33.72 | 68 | 1.7509 | 0.5375 | 0.1125 | 0.15 | 0.325 | 0.3525 | 0.3975 | | No log | 34.72 | 70 | 1.7237 | 0.545 | 0.1075 | 0.15 | 0.3325 | 0.365 | 0.4275 | | No log | 35.72 | 72 | 1.6970 | 0.555 | 0.11 | 0.15 | 0.3275 | 0.4 | 0.4475 | | No log | 36.72 | 74 | 1.6512 | 0.57 | 0.1075 | 0.15 | 0.3225 | 0.4125 | 0.465 | | No log | 37.72 | 76 | 1.6212 | 0.5875 | 0.11 | 0.1525 | 0.3375 | 0.42 | 0.4775 | | No log | 38.72 | 78 | 1.5995 | 0.595 | 0.1125 | 0.15 | 0.34 | 0.4275 | 0.4975 | | No log | 39.72 | 80 | 1.5713 | 0.5925 | 0.115 | 0.15 | 0.35 | 0.4375 | 0.525 | | No log | 40.72 | 82 | 1.5551 | 0.5875 | 0.115 | 0.15 | 0.3525 | 0.4375 | 0.5325 | | No log | 41.72 | 84 | 1.5276 | 0.59 | 0.115 | 0.15 | 0.35 | 0.4575 | 0.5425 | | No log | 42.72 | 86 | 1.5050 | 0.5925 | 0.115 | 0.15 | 0.355 | 0.46 | 0.5425 | | No log | 43.72 | 88 | 1.4871 | 0.595 | 0.1125 | 0.1525 | 0.3625 | 0.47 | 0.5625 | | No log | 44.72 | 90 | 1.4712 | 0.5975 | 0.1125 | 0.1525 | 0.3675 | 0.4775 | 0.5525 | | No log | 45.72 | 92 | 1.4615 | 0.5975 | 0.1125 | 0.155 | 0.365 | 0.4825 | 0.555 | | No log | 46.72 | 94 | 1.4449 | 0.6075 | 0.1125 | 0.155 | 0.3625 | 0.4875 | 0.5575 | | No log | 47.72 | 96 | 1.4273 | 0.6175 | 0.1125 | 0.155 | 0.365 | 0.5025 | 0.565 | | No log | 48.72 | 98 | 1.4127 | 0.6225 | 0.1125 | 0.155 | 0.365 | 0.505 | 0.5725 | | No log | 49.72 | 100 | 1.4005 | 0.63 | 0.1125 | 0.155 | 0.3675 | 0.5125 | 0.575 | | No log | 50.72 | 102 | 1.3925 | 0.625 | 0.1125 | 0.155 | 0.37 | 0.5125 | 0.5725 | | No log | 51.72 | 104 | 1.3847 | 0.6325 | 0.1125 | 0.155 | 0.38 | 0.5175 | 0.57 | | No log | 52.72 | 106 | 1.3772 | 0.64 | 0.1125 | 0.155 | 0.38 | 0.515 | 0.57 | | No log | 53.72 | 108 | 1.3679 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.52 | 0.5825 | | No log | 54.72 | 110 | 1.3595 | 0.6475 | 0.1125 | 0.155 | 0.3775 | 0.525 | 0.5825 | | No log | 55.72 | 112 | 1.3544 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.58 | | No log | 56.72 | 114 | 1.3515 | 0.6425 | 0.1125 | 0.155 | 0.375 | 0.52 | 0.5875 | | No log | 57.72 | 116 | 1.3500 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.52 | 0.5925 | | No log | 58.72 | 118 | 1.3495 | 0.6425 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.59 | | No log | 59.72 | 120 | 1.3493 | 0.645 | 0.1125 | 0.155 | 0.3775 | 0.5225 | 0.5875 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
Tidzo/falcon-7b-esg-instruct
Tidzo
2023-07-06T11:40:34Z
36
0
peft
[ "peft", "climate", "text-generation", "en", "dataset:tiiuae/falcon-refinedweb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2023-06-22T04:42:23Z
--- library_name: peft license: apache-2.0 tags: - climate language: - en datasets: - tiiuae/falcon-refinedweb pipeline_tag: text-generation --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0.dev0
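This card describes a PEFT adapter plus the 4-bit quantization settings used in training, but no loading code. A sketch that mirrors those settings with `peft`, `transformers`, and `bitsandbytes` (a CUDA GPU is assumed, and `trust_remote_code` reflects how Falcon checkpoints were loaded at the time):

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

adapter_id = "Tidzo/falcon-7b-esg-instruct"
peft_config = PeftConfig.from_pretrained(adapter_id)

# Mirror the 4-bit NF4 settings listed in the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)

# Attach the PEFT adapter weights on top of the quantized base model
model = PeftModel.from_pretrained(base_model, adapter_id)
```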
Ashish9947/open_llama_7b_tech_support
Ashish9947
2023-07-06T11:40:31Z
3
1
peft
[ "peft", "region:us" ]
null
2023-07-06T11:37:10Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
vineetsharma/speecht5_finetuned_voxpopuli_nl
vineetsharma
2023-07-06T11:38:52Z
75
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-06T08:55:10Z
--- license: mit tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5219 | 4.3 | 1000 | 0.4787 | | 0.5047 | 8.61 | 2000 | 0.4660 | | 0.4922 | 12.91 | 3000 | 0.4621 | | 0.4898 | 17.21 | 4000 | 0.4609 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
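A hedged synthesis sketch for this SpeechT5 fine-tune, following the standard SpeechT5 inference recipe: the processor is assumed to be stored in the fine-tuned repo (otherwise fall back to `microsoft/speecht5_tts`), and the CMU ARCTIC x-vector index is an arbitrary choice of speaker embedding:

```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "vineetsharma/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions on a speaker x-vector; index 7306 is just an example speaker
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Goedemorgen, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
# `speech` is a 16 kHz mono waveform tensor; save it with soundfile if needed
```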
Binaryy/llama_travel_test
Binaryy
2023-07-06T11:38:27Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-06T11:37:12Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
Binaryy/blender-bot-distill-finetuned
Binaryy
2023-07-06T11:36:26Z
109
0
transformers
[ "transformers", "pytorch", "safetensors", "blenderbot", "text2text-generation", "code", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-17T16:39:36Z
--- license: apache-2.0 language: - en pipeline_tag: conversational tags: - code ---
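This card contains only front matter. A hedged single-turn chat sketch using the generic seq2seq classes (the travel-related prompt is illustrative; response quality depends on the fine-tune):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Binaryy/blender-bot-distill-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# One user turn in, one bot turn out
inputs = tokenizer("I want to plan a trip to Paris.", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```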
Bugsys0302/undbob
Bugsys0302
2023-07-06T11:36:05Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T11:26:03Z
--- license: creativeml-openrail-m ---
NasimB/gpt2-concat-aochildes-len-16plus3k
NasimB
2023-07-06T11:23:19Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T09:25:04Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-aochildes-len-16plus3k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-aochildes-len-16plus3k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.724 | 0.29 | 500 | 5.6363 | | 5.3775 | 0.59 | 1000 | 5.2004 | | 5.0346 | 0.88 | 1500 | 4.9510 | | 4.7464 | 1.18 | 2000 | 4.8047 | | 4.5856 | 1.47 | 2500 | 4.6783 | | 4.4827 | 1.77 | 3000 | 4.5731 | | 4.3449 | 2.06 | 3500 | 4.5046 | | 4.1625 | 2.36 | 4000 | 4.4513 | | 4.1272 | 2.65 | 4500 | 4.3964 | | 4.0896 | 2.95 | 5000 | 4.3426 | | 3.8678 | 3.24 | 5500 | 4.3447 | | 3.8287 | 3.54 | 6000 | 4.3129 | | 3.8096 | 3.83 | 6500 | 4.2830 | | 3.6796 | 4.12 | 7000 | 4.2909 | | 3.5376 | 4.42 | 7500 | 4.2842 | | 3.5279 | 4.71 | 8000 | 4.2744 | | 3.511 | 5.01 | 8500 | 4.2679 | | 3.3374 | 5.3 | 9000 | 4.2774 | | 3.3374 | 5.6 | 9500 | 4.2775 | | 3.3392 | 5.89 | 10000 | 4.2771 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
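A short sampling sketch for this GPT-2 fine-tune, assuming the `transformers` text-generation pipeline; the prompt is only an illustration:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-aochildes-len-16plus3k",
)

# Sample a short continuation from the fine-tuned model
output = generator("Once upon a time", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```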
m-aliabbas1/dqn-SpaceInvadersNoFrameskip-v4
m-aliabbas1
2023-07-06T11:17:35Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T11:16:52Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 807.50 +/- 374.85 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga m-aliabbas1 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga m-aliabbas1 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga m-aliabbas1 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
ddoc/pta2
ddoc
2023-07-06T11:16:48Z
0
0
null
[ "region:us" ]
null
2023-07-06T11:16:15Z
# stable-diffusion-webui-prompt-travel Travel between prompts in the latent space to make pseudo-animation, extension script for AUTOMATIC1111/stable-diffusion-webui. ---- <p align="left"> <a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/commits"><img alt="Last Commit" src="https://img.shields.io/github/last-commit/Kahsolt/stable-diffusion-webui-prompt-travel"></a> <a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/issues"><img alt="GitHub issues" src="https://img.shields.io/github/issues/Kahsolt/stable-diffusion-webui-prompt-travel"></a> <a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/stargazers"><img alt="GitHub stars" src="https://img.shields.io/github/stars/Kahsolt/stable-diffusion-webui-prompt-travel"></a> <a href="https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel/network"><img alt="GitHub forks" src="https://img.shields.io/github/forks/Kahsolt/stable-diffusion-webui-prompt-travel"></a> <img alt="Language" src="https://img.shields.io/github/languages/top/Kahsolt/stable-diffusion-webui-prompt-travel"> <img alt="License" src="https://img.shields.io/github/license/Kahsolt/stable-diffusion-webui-prompt-travel"> <br/> </p> ![:stable-diffusion-webui-prompt-travel](https://count.getloli.com/get/@:stable-diffusion-webui-prompt-travel) Try interpolating on the hidden vectors of the conditioning prompt to make a seemingly continuous image sequence, or let's say a pseudo-animation. 😀 Not only prompts! We also support non-prompt conditions, read => [README_ext.md](README_ext.md) ~ ⚠ We have a QQ chat group for plugin feedback: 616795645 (赤狐屿); any suggestions, discussions and bug reports are highly welcome!! ℹ To be honest, I think this could be used to make slideshow fairy-tale picture books <del>or even doujinshi</del>… ℹ A smart workflow: first manually search for two good-looking images (differing only in prompt), then try to travel between them :lolipop: ⚠ Remember to turn "Always save all generated images" on in the settings tab, otherwise "upscaling" and "saving intermediate images" would not work. 
⚠ 记得在设置页勾选 “总是保存所有生成的图片”,否则 上采样 与 保存中间图片 将无法工作。 ### Change Log ⚪ Compatibility The latest version `v3.0` is synced & tested with: - [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): version `v1.4.0`, tag [v1.4.0](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.4.0) - [Mikubill/sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet): version `v1.1.229`, commit [eceeec7a7e](https://github.com/Mikubill/sd-webui-controlnet/commit/eceeec7a7e856867de56e26cae9f3e1076480344) ⚪ Features - 2023/07/05: `v3.0` re-impl core with sd-webui `v1.4.0` callbacks; this new implementation will be slower, but more compatible with other extensions - 2023/04/13: `v2.7` add RIFE to controlnet-travel, skip fusion (experimental) - 2023/03/31: `v2.6` add a tkinter [GUI](#run-each-time) for postprocess toolchain - 2023/03/30: `v2.5` add controlnet-travel script (experimental), interpolating between hint conditions **instead of prompts**, thx for the code base from [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) - 2023/02/14: `v2.3` integrate basic function of [depth-image-io](https://github.com/AnonymousCervine/depth-image-io-for-SDWebui) for depth2img models - 2023/01/27: `v2.2` add 'slerp' linear interpolation method - 2023/01/22: `v2.1` add experimental 'replace' mode again, it's not smooth interpolation - 2023/01/20: `v2.0` add optional external [post-processing pipeline](#post-processing-pipeline) to highly boost up smoothness, greate thx to [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) and [RIFE](https://github.com/nihui/rife-ncnn-vulkan)!! - 2023/01/16: `v1.5` add upscale options (issue #12); add 'embryo' genesis, reproducing idea of [stable-diffusion-animation](https://replicate.com/andreasjansson/stable-diffusion-animation) except [FILM](https://github.com/google-research/frame-interpolation) support (issue #11) - 2023/01/12: `v1.4` remove 'replace' & 'grad' mode support, due to webui's code change - 2022/12/11: `v1.3` work in a more 'successive' way, idea borrowed from [deforum](https://github.com/deforum-art/deforum-for-automatic1111-webui) ('genesis' option) - 2022/11/14: `v1.2` walk by substituting token embedding ('replace' mode) - 2022/11/13: `v1.1` walk by optimizing condition ('grad' mode) - 2022/11/10: `v1.0` interpolate linearly on condition/uncondition ('linear' mode) ⚪ Fixups - 2023/07/05: sync sd-webui-controlnet to `v1.1.229` - 2023/04/30: update controlnet core to `v1.1.116` - 2023/03/29: `v2.4` bug fixes on script hook, now working correctly with extra networks & [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) - 2023/01/31: keep up with webui's updates, (issue #14: `ImportError: cannot import name 'single_sample_to_image'`) - 2023/01/28: keep up with webui's updates, extra-networks rework - 2023/01/16: `v1.5` apply zero padding when condition length mismatch (issue #10: `RuntimeError: The size of tensor a (77) must match the size of tensor b (154) at non-singleton dimension 0`), typo in demo filename - 2023/01/12: `v1.4` keep up with webui's updates (issue #9: `AttributeError: 'FrozenCLIPEmbedderWithCustomWords' object has no attribute 'process_text'`) - 2022/12/13: `#bdd8bed` fixup no working when negative prompt is left empty (issue #6: `neg_prompts[-1] IndexError: List index out of range`) - 2022/11/27: `v1.2-fix2` keep up with webui's updates (error `ImportError: FrozenCLIPEmbedderWithCustomWords`) - 2022/11/20: `v1.2-fix1` keep up with webui's updates 
(error `AttributeError: p.all_negative_prompts[0]`) ⚠ this script will probably NOT support the schedule syntax (i.e.: `[prompt:prompt:number]`), because interpolating on mutable conditions requires sampler-level tracing, which is hard to maintain :( ⚠ this script will probably NOT work together with `hires.fix` due to an inner conceptual/logical conflict around `denoising_strength`; you can alternatively perform batch-upscale then batch-img2img. ### How it works? - input **multiple lines** in the prompt/negative-prompt box, each line is called a **stage** - generate images one by one, interpolating from one stage towards the next (batch configs are ignored) - gradually change the digested inputs between prompts - freeze all other settings (`steps`, `sampler`, `cfg factor`, `seed`, etc.) - note that only the major `seed` will be forcibly fixed through all processes, you can still set `subseed = -1` to allow more variance - export a video! - follow [post-processing pipeline](#post-processing-pipeline) to get a much better result 👌 ⚪ Txt2Img | sampler \ genesis | fixed | successive | embryo | | :-: | :-: | :-: | :-: | | Euler a | ![t2i-f-euler_a](img/t2i-f-euler_a.gif) | ![t2i-s-euler_a](img/t2i-s-euler_a.gif) | ![t2i-e-euler_a](img/t2i-e-euler_a.gif) | | DDIM | ![t2i-f-ddim](img/t2i-f-ddim.gif) | ![t2i-s-ddim](img/t2i-s-ddim.gif) | ![t2i-e-ddim](img/t2i-e-ddim.gif) | ⚪ Img2Img | sampler \ genesis | fixed | successive | embryo | | :-: | :-: | :-: | :-: | | Euler a | ![i2i-f-euler_a](img/i2i-f-euler_a.gif) | ![i2i-s-euler_a](img/i2i-s-euler_a.gif) | ![i2i-e-euler_a](img/i2i-e-euler_a.gif) | | DDIM | ![i2i-f-ddim](img/i2i-f-ddim.gif) | ![i2i-s-ddim](img/i2i-s-ddim.gif) | ![i2i-e-ddim](img/i2i-e-ddim.gif) | post-processing pipeline (case `i2i-f-ddim`): | w/o. post-processing | w/. 
post-processing | | :-: | :-: | | ![i2i-f-ddim](img/i2i-f-ddim.gif) | ![i2i-f-ddim-pp](img/i2i-f-ddim-pp.gif) | other stuff: | reference image for img2img | embryo image decoded <br/> case `i2i-e-euler_a` with `embryo_step=8` | | :-: | :-: | | ![i2i-ref](img/i2i-ref.png) | ![embryo](img/embryo.png) | ⚪ ControlNet support | prompt-travel with ControlNet (depth) | controlnet-travel (depth) | | :-: | :-: | | ![ctrlnet-ref](img/ctrlnet-ref.gif) | ![ctrlnet-depth](img/ctrlnet-depth.gif) | Example above run configure: ```text Prompt: (((masterpiece))), highres, ((boy)), child, cat ears, white hair, red eyes, yellow bell, red cloak, barefoot, angel, [flying], egyptian ((masterpiece)), highres, ((girl)), loli, cat ears, light blue hair, red eyes, magical wand, barefoot, [running] Negative prompt: (((nsfw))), ugly,duplicate,morbid,mutilated,tranny,trans,trannsexual,mutation,deformed,long neck,bad anatomy,bad proportions,extra arms,extra legs, disfigured,more than 2 nipples,malformed,mutated,hermaphrodite,out of frame,extra limbs,missing arms,missing legs,poorly drawn hands,poorty drawn face,mutation,poorly drawn,long body,multiple breasts,cloned face,gross proportions, mutated hands,bad hands,bad feet,long neck,missing limb,malformed limbs,malformed hands,fused fingers,too many fingers,extra fingers,missing fingers,extra digit,fewer digits,mutated hands and fingers,lowres,text,error,cropped,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,username,blurry,text font ufemale focus, poorly drawn, deformed, poorly drawn face, (extra leg:1.3), (extra fingers:1.2),out of frame Steps: 15 CFG scale: 7 Clip skip: 1 Seed: 114514 Size: 512 x 512 Model hash: animefull-final-pruned.ckpt Hypernet: (this is my secret :) ``` ### Options - prompt: (list of strings) - negative prompt: (list of strings) - input multiple lines of prompt text - we call each line of prompt a stage, usually you need at least 2 lines of text to starts travel - if len(positive_prompts) != len(negative_prompts), the shorter one's last item will be repeated to match the longer one - mode: (categorical) - `linear`: linear interpolation on condition/uncondition of CLIP output - `replace`: gradually replace of CLIP output - replace_dim: (categorical) - `token`: per token-wise vector - `channel`: per channel-wise vector - `random`: per point-wise element - replace_order: (categorical) - `similiar`: from the most similiar first (L1 distance) - `different`: from the most different first - `random`: just randomly - `embryo`: pre-denoise few steps, then hatch a set of image from the common embryo by linear interpolation - steps: (int, list of int) - number of images to interpolate between two stages - if int, constant number of travel steps - if list of int, length should match `len(stages)-1`, separate by comma, e.g.: `12, 24, 36` - genesis: (categorical), the a prior for each image frame - `fixed`: starts from pure noise in txt2img pipeline, or from the same ref-image given in img2img pipeline - `successive`: starts from the last generated image (this will force txt2img turn to actually be img2img from the 2nd frame on) - `embryo`: starts from the same half-denoised image, see [=> How does it work?](https://replicate.com/andreasjansson/stable-diffusion-animation#readme) - (experimental) it only processes 2 lines of prompts, and does not interpolate on negative_prompt :( - genesis_extra_params - denoise_strength: (float), denoise strength in img2img pipelines (for `successive`) - embryo_step: (int or float), steps to hatch 
the common embryo (for `embryo`) - if >= 1, taken as step count - if < 1, taken as a ratio of total steps - video_* - fps: (float), FPS of video, set `0` to disable file saving - fmt: (categorical), export video file format - pad: (int), repeat beginning/ending frames, giving an in/out time - pick: (string), cherry-pick frames by [python slice syntax](https://www.pythoncentral.io/how-to-slice-listsarrays-and-tuples-in-python) before padding (e.g.: set `::2` to get only even frames, set `:-1` to drop the last frame) ### Installation The easiest way to install it is to: 1. Go to the "Extensions" tab in the webui, switch to the "Install from URL" tab 2. Paste https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel.git into "URL for extension's git repository" and click install 3. (Optional) You will need to restart the webui for dependencies to be installed or you won't be able to generate video files Manual install: 1. Copy this repo folder to the 'extensions' folder of https://github.com/AUTOMATIC1111/stable-diffusion-webui 2. (Optional) Restart the webui ### Post-processing pipeline There are still two steps to go before a really smooth and high-resolution animation, namely image **super-resolution** & video **frame interpolation** (see `third-party tools` below). ⚠ Media data processing is intrinsically resource-exhausting, and it's also not webui's work or duty, hence we separated it out. 😃 #### setup once ⚪ auto install (Windows) - run `cd tools & install.cmd` - troubleshooting - if you get any file system access errors like `Access denied.`, try running it again until you see `Done!` without errors 😂 - if you get SSL errors about `curl schannel ... Unknown error ... certificate.`, the downloader does not work due to some SSL security reasons, just install manually... 
- you will have four components: [Busybox](https://frippery.org/busybox/), [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan), [RIFE](https://github.com/nihui/rife-ncnn-vulkan) and [FFmpeg](https://ffmpeg.org/) installed under the [tools](tools) folder ⚪ manually install (Windows/Linux/Mac) ℹ Understand the `tools` folder layout first => [tools/README.txt](tools/README.txt) ℹ If you indeed wanna put the tools elsewhere, modify paths in [tools/link.cmd](tools/link.cmd) and run `cd tools & link.cmd` 😉 For Windows: - download [Busybox](https://frippery.org/busybox/) - download [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases) (e.g.: `realesrgan-ncnn-vulkan-20220424-windows.zip`) - (optional) download interesting seperated model checkpoints (e.g.: `realesr-animevideov3.pth`) - download [rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan/releases) bundle (e.g.: `rife-ncnn-vulkan-20221029-windows.zip `) - download [FFmpeg](https://ffmpeg.org/download.html) binary (e.g.: `ffmpeg-release-full-shared.7z` or `ffmpeg-git-full.7z`) For Linux/Mac: - download [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases) and [rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan/releases), put them according to the `tools` folder layout, manually apply `chmod 755` to the executables - `ffmpeg` can be easily found in your app store or package manager, run like `apt install ffmpeg`; DO NOT need to link it under `tools` folder #### run each time ⚪ tkinter GUI (Windows/Linux/Mac) ![manager](img/manager.png) For Windows: - run `manager.cmd`, to start webui's python venv - run the [DOSKEY](https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/doskey) `install` (only setup once) - run the DOSKEY `run` For Linux/Mac: - run `../../venv/Scripts/activate`, to start webui's python venv - run `pip install -r requirements.txt` (only setup once) - run `python manager.py` ℹ find usage help message in right click pop menu~ ⚪ <del> cmd script (Windows) - deprecated </del> - check params in [postprocess-config.cmd](postprocess-config.cmd) - pick one way to start 😃 - run `postprocess.cmd path/to/<image_folder>` from command line - drag & drop any image folder over `postprocess.cmd` icon - once processing finished, the explorer will be auto lauched to locate the generated file named with `synth.mp4` ### Related Projects ⚪ extensions that inspired this repo - sd-webui-controlnet (various image conditions): [https://github.com/Mikubill/sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) - depth-image-io (custom depth2img): [https://github.com/AnonymousCervine/depth-image-io-for-SDWebui](https://github.com/AnonymousCervine/depth-image-io-for-SDWebui) - animator (img2img): [https://github.com/Animator-Anon/animator_extension](https://github.com/Animator-Anon/animator_extension) - sd-webui-riffusion (music gen): [https://github.com/enlyth/sd-webui-riffusion](https://github.com/enlyth/sd-webui-riffusion) - sd-animation (half denoise + FILM): - Github: [https://github.com/andreasjansson/cog-stable-diffusion](https://github.com/andreasjansson/cog-stable-diffusion) - Replicate: [https://replicate.com/andreasjansson/stable-diffusion-animation](https://replicate.com/andreasjansson/stable-diffusion-animation) - deforum (img2img + depth model): [https://github.com/deforum-art/deforum-for-automatic1111-webui](https://github.com/deforum-art/deforum-for-automatic1111-webui) - seed-travel (varying seed): 
[https://github.com/yownas/seed_travel](https://github.com/yownas/seed_travel) ⚪ third-party tools - image super-resoultion - ESRGAN: - ESRGAN: [https://github.com/xinntao/ESRGAN](https://github.com/xinntao/ESRGAN) - Real-ESRGAN: [https://github.com/xinntao/Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) - Real-ESRGAN-ncnn-vulkan (recommended): [https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan](https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan) - video frame interpolation - FILM (recommended): [https://github.com/google-research/frame-interpolation](https://github.com/google-research/frame-interpolation) - RIFE: - ECCV2022-RIFE: [https://github.com/megvii-research/ECCV2022-RIFE](https://github.com/megvii-research/ECCV2022-RIFE) - rife-ncnn-vulkan (recommended): [https://github.com/nihui/rife-ncnn-vulkan](https://github.com/nihui/rife-ncnn-vulkan) - Squirrel-RIFE: [https://github.com/Justin62628/Squirrel-RIFE](https://github.com/Justin62628/Squirrel-RIFE) - Practical-RIFE: [https://github.com/hzwer/Practical-RIFE](https://github.com/hzwer/Practical-RIFE) - GNU tool-kits - BusyBox: [https://www.busybox.net/](https://www.busybox.net/) - BusyBox for Windows: [https://frippery.org/busybox/](https://frippery.org/busybox/) - FFmpeg: [https://ffmpeg.org/](https://ffmpeg.org/) ⚪ my other experimental toy extensions - vid2vid (video2video) [https://github.com/Kahsolt/stable-diffusion-webui-vid2vid](https://github.com/Kahsolt/stable-diffusion-webui-vid2vid) - hires-fix-progressive (a progressive version of hires.fix): [https://github.com/Kahsolt/stable-diffusion-webui-hires-fix-progressive](https://github.com/Kahsolt/stable-diffusion-webui-hires-fix-progressive) - sonar (k_diffuison samplers): [https://github.com/Kahsolt/stable-diffusion-webui-sonar](https://github.com/Kahsolt/stable-diffusion-webui-sonar) - size-travel (kind of X-Y plot on image size): [https://github.com/Kahsolt/stable-diffusion-webui-size-travel](https://github.com/Kahsolt/stable-diffusion-webui-size-travel) ---- by Armit 2022/11/10
maxkskhor/ppo-SnowballTarget
maxkskhor
2023-07-06T11:09:41Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-06T11:09:35Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: maxkskhor/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Zain6699/intent-classifier-common_ground
Zain6699
2023-07-06T11:02:19Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T11:00:57Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: intent-classifier-common_ground results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # intent-classifier-common_ground This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0360 - Accuracy: 0.9938 - F1: 0.9825 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
arham061/finance-alpaca
arham061
2023-07-06T11:01:10Z
134
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T10:26:35Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: finance-alpaca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finance-alpaca This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Zain6699/intent-classifier-call_to_action
Zain6699
2023-07-06T11:00:48Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T10:59:26Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: intent-classifier-call_to_action results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # intent-classifier-call_to_action This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0810 - Accuracy: 0.9875 - F1: 0.9639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
RogerB/afro-xlmr-base-finetuned-kintweetsB
RogerB
2023-07-06T10:59:26Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T09:53:42Z
--- license: mit tags: - generated_from_trainer model-index: - name: afro-xlmr-base-finetuned-kintweetsB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afro-xlmr-base-finetuned-kintweetsB This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1700 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4711 | 1.0 | 900 | 2.2431 | | 2.3238 | 2.0 | 1800 | 2.2116 | | 2.2725 | 3.0 | 2700 | 2.1590 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Zain6699/intent-classifier-personalization
Zain6699
2023-07-06T10:59:17Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T10:57:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: intent-classifier-personalization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # intent-classifier-personalization This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0776 - Accuracy: 0.9833 - F1: 0.9420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Zain6699/intent-classifier-incentive_for_connecting
Zain6699
2023-07-06T10:57:47Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T10:56:25Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: intent-classifier-incentive_for_connecting results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # intent-classifier-incentive_for_connecting This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0398 - Accuracy: 0.9917 - F1: 0.9740 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
HeshamMamdouh/mt5-small-sum-fine-tuned
HeshamMamdouh
2023-07-06T10:56:23Z
61
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-06T10:54:24Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: mt5-small-sum-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-sum-fine-tuned This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.4015 - Validation Loss: 1.8725 - Epoch: 74 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 52.1786 | 49.3355 | 0 | | 47.3638 | 45.1305 | 1 | | 43.6563 | 42.4522 | 2 | | 41.1214 | 39.5774 | 3 | | 38.3601 | 37.3437 | 4 | | 35.8017 | 34.8478 | 5 | | 32.6174 | 32.5370 | 6 | | 30.4399 | 30.7220 | 7 | | 28.8299 | 29.1744 | 8 | | 27.1342 | 26.7656 | 9 | | 25.2765 | 24.9835 | 10 | | 23.8467 | 23.1296 | 11 | | 22.4239 | 21.5926 | 12 | | 21.1438 | 20.8646 | 13 | | 20.5646 | 21.1405 | 14 | | 18.9753 | 20.3101 | 15 | | 18.8306 | 19.6189 | 16 | | 17.6935 | 18.5195 | 17 | | 17.0993 | 17.4238 | 18 | | 16.1595 | 16.1143 | 19 | | 15.4946 | 15.2814 | 20 | | 15.0521 | 14.1193 | 21 | | 14.1677 | 13.0559 | 22 | | 13.7239 | 12.5135 | 23 | | 12.8212 | 11.2606 | 24 | | 12.3333 | 10.5911 | 25 | | 11.5663 | 9.7681 | 26 | | 11.2357 | 9.7545 | 27 | | 10.3757 | 8.6039 | 28 | | 10.2910 | 8.3155 | 29 | | 9.5480 | 7.9911 | 30 | | 9.1881 | 7.5866 | 31 | | 8.7798 | 7.2611 | 32 | | 8.1529 | 6.9730 | 33 | | 7.7057 | 6.6302 | 34 | | 7.6724 | 6.2149 | 35 | | 7.1820 | 5.9264 | 36 | | 6.8348 | 5.9113 | 37 | | 6.6185 | 5.7169 | 38 | | 6.3897 | 5.2028 | 39 | | 6.0808 | 4.8902 | 40 | | 6.0517 | 4.5248 | 41 | | 5.4217 | 4.1892 | 42 | | 5.2464 | 4.1719 | 43 | | 5.0986 | 4.1922 | 44 | | 4.6939 | 3.9863 | 45 | | 4.7763 | 3.7674 | 46 | | 4.5684 | 3.4746 | 47 | | 4.2996 | 3.1692 | 48 | | 4.3434 | 3.0116 | 49 | | 4.1290 | 2.9261 | 50 | | 3.8491 | 2.8621 | 51 | | 4.0837 | 2.7301 | 52 | | 3.7118 | 2.6694 | 53 | | 3.6294 | 2.6649 | 54 | | 3.5421 | 2.6036 | 55 | | 3.3884 | 2.8563 | 56 | | 3.3752 | 2.4984 | 57 | | 3.4596 | 2.4091 | 58 | | 3.2075 | 2.4850 | 59 | | 3.2646 | 2.3415 | 60 | | 2.9473 | 2.3363 | 61 | | 2.9364 | 2.2778 | 62 | | 2.9130 | 2.2466 | 63 | | 2.8123 | 2.1061 | 64 | | 2.9697 | 2.1859 | 65 | | 2.9565 | 2.0596 | 66 | | 2.7610 | 2.2746 | 67 | | 2.7636 | 2.2090 | 68 | | 2.5776 | 2.0910 | 69 | | 2.5245 | 1.9330 | 70 | | 2.5848 | 1.9169 | 71 | | 2.4724 | 1.8993 | 72 | | 2.6195 | 1.8815 | 73 | | 2.4015 | 1.8725 | 74 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.11.0 - Datasets 2.13.1 - Tokenizers 0.12.1
cerindam30/tugas_akhir
cerindam30
2023-07-06T10:56:16Z
30
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-02T08:20:21Z
--- license: mit tags: - generated_from_trainer model-index: - name: tugas_akhir results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tugas_akhir This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 - label_smoothing_factor: 0.1 ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Zain6699/intent-classifier-flattery
Zain6699
2023-07-06T10:56:16Z
120
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T10:54:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: intent-classifier-flattery results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # intent-classifier-flattery This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0434 - Accuracy: 0.9917 - F1: 0.9747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
nikolamilosevic/distil_bert_uncased-finetuned-relations
nikolamilosevic
2023-07-06T10:55:05Z
152
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T11:08:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - f1 model-index: - name: distil_bert_uncased-finetuned-relations results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distil_bert_uncased-finetuned-relations This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4191 - Accuracy: 0.8866 - Prec: 0.8771 - Recall: 0.8866 - F1: 0.8808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Prec | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:| | 1.1823 | 1.0 | 232 | 0.5940 | 0.8413 | 0.8273 | 0.8413 | 0.8224 | | 0.4591 | 2.0 | 464 | 0.4600 | 0.8607 | 0.8539 | 0.8607 | 0.8555 | | 0.3106 | 3.0 | 696 | 0.4160 | 0.8812 | 0.8763 | 0.8812 | 0.8785 | | 0.246 | 4.0 | 928 | 0.4113 | 0.8834 | 0.8766 | 0.8834 | 0.8796 | | 0.2013 | 5.0 | 1160 | 0.4191 | 0.8866 | 0.8771 | 0.8866 | 0.8808 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.13.0.dev20220614 - Datasets 2.2.2 - Tokenizers 0.11.6
linlinlin/peft-fine-tuning
linlinlin
2023-07-06T10:54:57Z
0
0
null
[ "pytorch", "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-06T10:31:06Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: peft-fine-tuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-fine-tuning This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 50 ### Training results ### Framework versions - Transformers 4.27.2 - Pytorch 2.0.1+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
linlinlin/full-fine-tuning
linlinlin
2023-07-06T10:53:14Z
180
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-06T10:22:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: full-fine-tuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # full-fine-tuning This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 50 ### Training results ### Framework versions - Transformers 4.27.2 - Pytorch 2.0.1+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
zzzAI19/niji-LoRA_v2.0
zzzAI19
2023-07-06T10:50:30Z
0
6
null
[ "region:us" ]
null
2023-07-05T12:57:22Z
(7/6)Uploaded chilled_remix version. Additional learning was done with illustrations generated by niji-journey to create LoRA. The trigger word is "jis". It is based on various models. Files are different for each base model. niji2:Own merge model zzzmix(https://huggingface.co/zzzAI19/zzzmix) niji2animekawa:AnimeKawa (https://civitai.com/models/87661/animekawa?modelVersionId=93295) niji2anything:Anything v5 (https://civitai.com/models/9409/or-anything-v5ink) niji2beautifulRealistic:Beautiful Realistic Asians (https://civitai.com/models/25494/brabeautiful-realistic-asians-v2) niji2chilooutmix:chilloutmix (https://civitai.com/models/6424/chilloutmix) niji2counterfeit:counterfeit v3 (https://huggingface.co/gsdf/Counterfeit-V3.0) niji2sukumizumix:SukumizuMix (https://huggingface.co/AkariH/SukumizuMix) niji2chilledremix、niji2chilledreversemix:chilled_remix(https://huggingface.co/sazyou-roukaku/chilled_remix) I also plan to use TrinArt, Irismix, and openjourney as base models in the future. LoRA with these models as base models will be uploaded tomorrow. I would recommend a LoRA strength of 0.7. We also consider step 6 to be a good choice. Sample images can be found at https://ai-drawing.net/en/2023/07/05/introduction-of-niji-lora-v2-0/ (7/6)chilled_remix版をアップロードしました。 niji・journeyにより生成されたイラストで追加学習し、LoRAを作りました。トリガーワードは「jis」です。 色々なモデルをベースにしています。ベースモデルごとにファイルが異なります。 niji2:自作マージモデルzzzmix(https://huggingface.co/zzzAI19/zzzmix) niji2animekawa:AnimeKawa (https://civitai.com/models/87661/animekawa?modelVersionId=93295) niji2anything:Anything v5 (https://civitai.com/models/9409/or-anything-v5ink) niji2beautifulRealistic:Beautiful Realistic Asians (https://civitai.com/models/25494/brabeautiful-realistic-asians-v2) niji2chilooutmix:chilloutmix (https://civitai.com/models/6424/chilloutmix) niji2counterfeit:counterfeit v3 (https://huggingface.co/gsdf/Counterfeit-V3.0) niji2sukumizumix:SukumizuMix (https://huggingface.co/AkariH/SukumizuMix) niji2chilledremix、niji2chilledreversemix:chilled_remix(https://huggingface.co/sazyou-roukaku/chilled_remix) また今後、TrinArt、Irismix、openjourneyもベースモデルにする予定です。 これらのモデルをベースモデルとしたLoRAは明日アップロードする予定です。 LoRA強度は0.7を推奨します。 またステップ6が良いと考えます。 サンプル画像は以下のページにあります。 https://ai-drawing.net/2023/07/05/niji-lora-v2-0%e3%81%ae%e7%b4%b9%e4%bb%8b/ --- license: creativeml-openrail-m ---
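For readers unfamiliar with applying such a LoRA in the AUTOMATIC1111 web UI, a hypothetical prompt is shown below. The `<lora:...>` syntax is the standard web-UI convention; the file name `niji2` and the extra quality tags are placeholders, while the `jis` trigger word and the 0.7 weight follow the recommendations in the card.

```
masterpiece, best quality, 1girl, jis, <lora:niji2:0.7>
```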
isaachong127/gpt2_chinese_with_personal_qqchat_data
isaachong127
2023-07-06T10:43:00Z
131
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "zh", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-05T20:01:00Z
--- license: apache-2.0 language: - zh library_name: transformers --- # intro 1. Corpus: 1.38 GB of private Chinese QQ group chat logs 2. About 14 million tokens 3. Trained for 17 hours on a single 3060 GPU This is my first personal attempt at training an AI model, done to learn GPT-2 training; it is provided for reference only. Interaction results are likewise for reference only; this model makes no guarantee that its outputs are lawful or reasonable. # Link [Training a causal language model from scratch](https://huggingface.co/course/zh-CN/chapter7/6?fw=pt) # infer code ```python from transformers import GPT2LMHeadModel, AutoTokenizer model_name_or_path = "isaachong127/gpt2_chinese_with_personal_qqchat_data"#"checkpoint-16000" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) # add the EOS token as PAD token to avoid warnings model = GPT2LMHeadModel.from_pretrained(model_name_or_path, pad_token_id=tokenizer.eos_token_id) ``` ```python txt = """\ 今天 """ # encode context the generation is conditioned on input_ids = tokenizer.encode(txt, return_tensors='pt') # set no_repeat_ngram_size to 2 beam_output = model.generate( input_ids, max_length=100, num_beams=5, no_repeat_ngram_size=2, early_stopping=True ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) ``` ```bash Output: ---------------------------------------------------------------------------------------------------- 今天 已 经 是 你 的 第 667 次 签 到 啦 ~ 纱 雾 酱 对 乃 的 好 感 度 [ + 10 ] 2021 年 , 要 加 油 哦 ~ ','签 到 ','@ \ u202e ```
qwopqwop/danbooru-llama-qlora
qwopqwop
2023-07-06T10:38:48Z
0
4
null
[ "license:mit", "region:us" ]
null
2023-07-06T10:25:19Z
--- license: mit --- train code: https://github.com/qwopqwop200/llama-danbooru-qlora
cardiffnlp/twitter-roberta-base-hate-multiclass-latest
cardiffnlp
2023-07-06T10:37:08Z
136
2
transformers
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "en", "arxiv:2307.01680", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-09T22:40:52Z
--- model-index: - name: twitter-roberta-base-hate-multiclass-latest results: [] language: - en pipeline_tag: text-classification --- # cardiffnlp/twitter-roberta-base-hate-multiclass-latest This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m) for multiclass hate-speech classification. A combination of 13 different hate-speech datasets in the English language was used to fine-tune the model. ## Classes available ``` { "sexism": 0, "racism": 1, "disability": 2, "sexual_orientation": 3, "religion": 4, "other": 5, "not_hate": 6 } ``` ## The following metrics are achieved * Accuracy: 0.9419 * Macro-F1: 0.5752 * Weighted-F1: 0.9390 ### Usage Install tweetnlp via pip. ```shell pip install tweetnlp ``` Load the model in Python. ```python import tweetnlp model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-hate-multiclass-latest") model.predict('Women are trash 2.') >> {'label': 'sexism'} model.predict('@user dear mongoloid respect sentiments & belief refrain totalitarianism. @user') >> {'label': 'disability'} ``` ### Model based on: ``` @misc{antypas2023robust, title={Robust Hate Speech Detection in Social Media: A Cross-Dataset Empirical Evaluation}, author={Dimosthenis Antypas and Jose Camacho-Collados}, year={2023}, eprint={2307.01680}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
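If you would rather not depend on `tweetnlp`, the checkpoint should also load through the plain `transformers` pipeline. This is a sketch under that assumption; the example tweet is taken from the snippet above.

```python
from transformers import pipeline

# Illustrative alternative to the tweetnlp example, using transformers directly.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-hate-multiclass-latest",
)

print(classifier("Women are trash 2."))
# Pass top_k=None (or return_all_scores=True on older transformers) to see all 7 class scores.
print(classifier("Women are trash 2.", top_k=None))
```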
bofenghuang/asr-wav2vec2-ctc-french
bofenghuang
2023-07-06T10:34:26Z
429
12
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event", "CTC", "Wav2vec2", "fr", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_11_0", "dataset:facebook/multilingual_librispeech", "dataset:facebook/voxpopuli", "dataset:gigant/african_accented_french", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-25T15:33:14Z
--- license: apache-2.0 language: fr library_name: transformers thumbnail: null tags: - automatic-speech-recognition - hf-asr-leaderboard - robust-speech-event - CTC - Wav2vec2 datasets: - common_voice - mozilla-foundation/common_voice_11_0 - facebook/multilingual_librispeech - facebook/voxpopuli - gigant/african_accented_french metrics: - wer model-index: - name: Fine-tuned wav2vec2-FR-7K-large model for ASR in French results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 args: fr metrics: - name: Test WER type: wer value: 11.44 - name: Test WER (+LM) type: wer value: 9.66 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Multilingual LibriSpeech (MLS) type: facebook/multilingual_librispeech args: french metrics: - name: Test WER type: wer value: 5.93 - name: Test WER (+LM) type: wer value: 5.13 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: VoxPopuli type: facebook/voxpopuli args: fr metrics: - name: Test WER type: wer value: 9.33 - name: Test WER (+LM) type: wer value: 8.51 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: African Accented French type: gigant/african_accented_french args: fr metrics: - name: Test WER type: wer value: 16.22 - name: Test WER (+LM) type: wer value: 15.39 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: fr metrics: - name: Test WER type: wer value: 16.56 - name: Test WER (+LM) type: wer value: 12.96 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Fleurs type: google/fleurs args: fr_fr metrics: - name: Test WER type: wer value: 10.10 - name: Test WER (+LM) type: wer value: 8.84 --- # Fine-tuned wav2vec2-FR-7K-large model for ASR in French <style> img { display: inline; } </style> ![Model architecture](https://img.shields.io/badge/Model_Architecture-Wav2Vec2--CTC-lightgrey) ![Model size](https://img.shields.io/badge/Params-315M-lightgrey) ![Language](https://img.shields.io/badge/Language-French-lightgrey) This model is a fine-tuned version of [LeBenchmark/wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large), trained on a composite dataset comprising of over 2200 hours of French speech audio, using the train and validation splits of [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0), [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech), [Voxpopuli](https://github.com/facebookresearch/voxpopuli), [Multilingual TEDx](http://www.openslr.org/100), [MediaSpeech](https://www.openslr.org/108), and [African Accented French](https://huggingface.co/datasets/gigant/african_accented_french). When using the model make sure that your speech input is also sampled at 16Khz. ## Usage 1. 
To use on a local audio file with the language model ```python import torch import torchaudio from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = AutoModelForCTC.from_pretrained("bhuang/asr-wav2vec2-french").to(device) processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("bhuang/asr-wav2vec2-french") model_sample_rate = processor_with_lm.feature_extractor.sampling_rate wav_path = "example.wav" # path to your audio file waveform, sample_rate = torchaudio.load(wav_path) waveform = waveform.squeeze(axis=0) # mono # resample if sample_rate != model_sample_rate: resampler = torchaudio.transforms.Resample(sample_rate, model_sample_rate) waveform = resampler(waveform) # normalize input_dict = processor_with_lm(waveform, sampling_rate=model_sample_rate, return_tensors="pt") with torch.inference_mode(): logits = model(input_dict.input_values.to(device)).logits predicted_sentence = processor_with_lm.batch_decode(logits.cpu().numpy()).text[0] ``` 2. To use on a local audio file without the language model ```python import torch import torchaudio from transformers import AutoModelForCTC, Wav2Vec2Processor device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = AutoModelForCTC.from_pretrained("bhuang/asr-wav2vec2-french").to(device) processor = Wav2Vec2Processor.from_pretrained("bhuang/asr-wav2vec2-french") model_sample_rate = processor.feature_extractor.sampling_rate wav_path = "example.wav" # path to your audio file waveform, sample_rate = torchaudio.load(wav_path) waveform = waveform.squeeze(axis=0) # mono # resample if sample_rate != model_sample_rate: resampler = torchaudio.transforms.Resample(sample_rate, model_sample_rate) waveform = resampler(waveform) # normalize input_dict = processor(waveform, sampling_rate=model_sample_rate, return_tensors="pt") with torch.inference_mode(): logits = model(input_dict.input_values.to(device)).logits # decode predicted_ids = torch.argmax(logits, dim=-1) predicted_sentence = processor.batch_decode(predicted_ids)[0] ``` ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_11_0` ```bash python eval.py \ --model_id "bhuang/asr-wav2vec2-french" \ --dataset "mozilla-foundation/common_voice_11_0" \ --config "fr" \ --split "test" \ --log_outputs \ --outdir "outputs/results_mozilla-foundatio_common_voice_11_0_with_lm" ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py \ --model_id "bhuang/asr-wav2vec2-french" \ --dataset "speech-recognition-community-v2/dev_data" \ --config "fr" \ --split "validation" \ --chunk_length_s 30.0 \ --stride_length_s 5.0 \ --log_outputs \ --outdir "outputs/results_speech-recognition-community-v2_dev_data_with_lm" ```
soduhh/marian-finetuned-kde4-en-to-fr
soduhh
2023-07-06T10:26:33Z
61
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-05T14:32:51Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: soduhh/marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # soduhh/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6854 - Validation Loss: 0.8044 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0627 | 0.8795 | 0 | | 0.7968 | 0.8213 | 1 | | 0.6854 | 0.8044 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
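Since the card lists only training details, here is a minimal translation sketch, assuming the checkpoint loads through the standard TensorFlow seq2seq classes; the input sentence is illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "soduhh/marian-finetuned-kde4-en-to-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate an English sentence into French (illustrative example).
inputs = tokenizer("Default to expanded threads", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```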
Tiru8055/rl_course_vizdoom_health_gathering_supreme
Tiru8055
2023-07-06T10:24:27Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T10:24:20Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.50 +/- 5.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Tiru8055/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
thirupathibandam/autotrain-phanik-gpt-neo-125m-self-72606138970
thirupathibandam
2023-07-06T10:01:36Z
0
0
null
[ "autotrain", "text-generation", "dataset:thirupathibandam/autotrain-data-phanik-gpt-neo-125m-self", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T10:00:49Z
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " datasets: - thirupathibandam/autotrain-data-phanik-gpt-neo-125m-self co2_eq_emissions: emissions: 0.03549660564532989 --- # Model Trained Using AutoTrain - Problem type: Text Generation - CO2 Emissions (in grams): 0.0355 ## Validation Metrics loss: 1.8581730127334595
blanchefort/rubert-base-cased-sentiment-mokoron
blanchefort
2023-07-06T09:56:44Z
129
1
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "sentiment", "ru", "dataset:RuTweetCorp", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - ru tags: - sentiment - text-classification datasets: - RuTweetCorp --- # RuBERT for Sentiment Analysis of Tweets This is a [DeepPavlov/rubert-base-cased-conversational](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational) model trained on [RuTweetCorp](https://study.mokoron.com/). ## Labels 0: POSITIVE 1: NEGATIVE ## How to use ```python import torch from transformers import AutoModelForSequenceClassification from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron') model = AutoModelForSequenceClassification.from_pretrained('blanchefort/rubert-base-cased-sentiment-mokoron', return_dict=True) @torch.no_grad() def predict(text): inputs = tokenizer(text, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**inputs) predicted = torch.nn.functional.softmax(outputs.logits, dim=1) predicted = torch.argmax(predicted, dim=1).numpy() return predicted ``` ## Dataset used for model training **[RuTweetCorp](https://study.mokoron.com/)** > Рубцова Ю. Автоматическое построение и анализ корпуса коротких текстов (постов микроблогов) для задачи разработки и тренировки тонового классификатора // Инженерия знаний и технологии семантического веба. – 2012. – Т. 1. – С. 109-116.
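A small usage example for the `predict` helper defined above, mapping the returned class ids onto the labels documented in the card (0: POSITIVE, 1: NEGATIVE). It assumes the snippet above has already been run; the input sentences are illustrative.

```python
# Assumes tokenizer, model and predict() from the snippet above are already defined.
id2label = {0: "POSITIVE", 1: "NEGATIVE"}

texts = ["Какой чудесный день!", "Всё очень плохо."]  # "What a wonderful day!", "Everything is very bad."
for text, label_id in zip(texts, predict(texts)):
    print(text, "->", id2label[int(label_id)])
```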
arham061/codeparrot-ds
arham061
2023-07-06T09:47:41Z
127
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T09:36:58Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
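The card lists only training hyperparameters; a minimal generation sketch follows, assuming the checkpoint behaves like any other distilgpt2-style causal LM under the `text-generation` pipeline. The prompt is illustrative.

```python
from transformers import pipeline

# Illustrative sketch -- the card itself provides no inference example.
generator = pipeline("text-generation", model="arham061/codeparrot-ds")

prompt = "# create a scatter plot with x, y\nimport matplotlib.pyplot as plt\n"
print(generator(prompt, max_new_tokens=40, num_return_sequences=1)[0]["generated_text"])
```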
NasimB/gpt2-concat-cbt-rarity-all-7k-p8k
NasimB
2023-07-06T09:41:44Z
9
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T07:38:30Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-rarity-all-7k-p8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-rarity-all-7k-p8k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7249 | 0.29 | 500 | 5.6400 | | 5.3729 | 0.59 | 1000 | 5.2003 | | 5.0283 | 0.88 | 1500 | 4.9502 | | 4.7537 | 1.17 | 2000 | 4.8035 | | 4.5903 | 1.47 | 2500 | 4.6765 | | 4.4832 | 1.76 | 3000 | 4.5717 | | 4.3484 | 2.05 | 3500 | 4.4930 | | 4.1512 | 2.35 | 4000 | 4.4467 | | 4.1329 | 2.64 | 4500 | 4.3805 | | 4.091 | 2.93 | 5000 | 4.3309 | | 3.8799 | 3.23 | 5500 | 4.3273 | | 3.8248 | 3.52 | 6000 | 4.2923 | | 3.8074 | 3.81 | 6500 | 4.2605 | | 3.6914 | 4.11 | 7000 | 4.2581 | | 3.534 | 4.4 | 7500 | 4.2538 | | 3.5261 | 4.69 | 8000 | 4.2382 | | 3.5255 | 4.99 | 8500 | 4.2256 | | 3.351 | 5.28 | 9000 | 4.2383 | | 3.3357 | 5.57 | 9500 | 4.2375 | | 3.3375 | 5.87 | 10000 | 4.2364 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
nolanaatama/mrdcrvcv2400pchscrckdfl
nolanaatama
2023-07-06T09:40:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T09:37:34Z
--- license: creativeml-openrail-m ---
GHonem/git-base-pokemon
GHonem
2023-07-06T09:38:02Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "git", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/git-base", "base_model:finetune:microsoft/git-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-03T13:37:45Z
--- license: mit base_model: microsoft/git-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: git-base-pokemon results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0330 - Wer Score: 1.6516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 7.4022 | 4.17 | 50 | 4.7553 | 21.1384 | | 2.7988 | 8.33 | 100 | 0.9177 | 10.7623 | | 0.3496 | 12.5 | 150 | 0.0709 | 2.1170 | | 0.0373 | 16.67 | 200 | 0.0327 | 1.3170 | | 0.0142 | 20.83 | 250 | 0.0316 | 1.5031 | | 0.0069 | 25.0 | 300 | 0.0330 | 1.6516 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
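Since the card gives no inference example, here is a captioning sketch, assuming the fine-tuned checkpoint keeps the standard GIT processor/generate interface of its `microsoft/git-base` base model (if the processor files were not pushed with the model, load the processor from the base model instead). The image URL is a placeholder.

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "GHonem/git-base-pokemon"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder URL -- replace with a real image.
url = "https://example.com/pokemon.png"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```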
squeeze-ai-lab/sq-opt-13b-w4-s0
squeeze-ai-lab
2023-07-06T09:29:03Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-06T08:38:38Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization. But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, as well as a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 4-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068) * **Bitwidth:** 4-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
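To make the Dense-and-Sparse idea concrete, here is a toy numerical sketch (not the SqueezeLLM implementation, which uses sensitivity-based non-uniform quantization): the largest-magnitude outlier weights are kept in a sparse matrix at full precision, and the remaining dense part is quantized to a coarse uniform grid.

```python
import torch

def dense_and_sparse_split(weight: torch.Tensor, outlier_frac: float = 0.005, n_bits: int = 4):
    """Toy illustration of a dense + sparse weight decomposition (not the actual SqueezeLLM code)."""
    # Treat the largest-magnitude fraction of entries as "sensitive" outliers.
    k = max(1, int(outlier_frac * weight.numel()))
    threshold = weight.abs().flatten().topk(k).values.min()
    outlier_mask = weight.abs() >= threshold

    # Sparse part: outliers kept in full precision.
    sparse_part = (weight * outlier_mask).to_sparse()

    # Dense part: everything else, crudely quantized to n_bits with a uniform grid.
    dense = weight * ~outlier_mask
    scale = dense.abs().max() / (2 ** (n_bits - 1) - 1)
    dense_q = torch.clamp((dense / scale).round(), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    dense_dequantized = dense_q * scale

    return dense_dequantized, sparse_part

w = torch.randn(256, 256)
dense_dequantized, sparse_part = dense_and_sparse_split(w)
approx = dense_dequantized + sparse_part.to_dense()
print("relative reconstruction error:", ((w - approx).norm() / w.norm()).item())
```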
TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-GGML
TheBloke
2023-07-06T09:26:48Z
0
4
null
[ "license:other", "region:us" ]
null
2023-07-06T09:14:38Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # CAMEL AI's CAMEL 33B Combined Data GGML These files are GGML format model files for [CAMEL AI's CAMEL 33B Combined Data](https://huggingface.co/camel-ai/CAMEL-33B-Combined-Data). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation. To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`. **NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/camel-ai/CAMEL-33B-Combined-Data) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. 
Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | CAMEL-33B-combined-data-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 CAMEL-33B-combined-data-superhot-8k.ggmlv3.q4_K_M.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: CAMEL AI's CAMEL 33B Combined Data CAMEL-33B-Combined-Data is a chat large language model obtained by finetuning LLaMA-33B model on a total of 229K conversations collected through our CAMEL framework, 100K English public conversations from ShareGPT that can be found here, and 52K instructions from Alpaca dataset that can be found here. We evaluate our model offline using EleutherAI's language model evaluation harness used by Huggingface's Open LLM Benchmark. CAMEL-33B scores an average of 64.2. 
Regarding the prompt format, we follow the same prompt as LMSYS's [FastChat](https://github.com/lm-sys/FastChat/tree/main) Vicuna-13B-1.1 conversation template. It assumes a conversation between a user and an AI assistant separated by a `</s>` at the end of every role message. More details can be found [here](https://github.com/lm-sys/FastChat/blob/daa2b9abe20597ebf34dc5df164d450456610c74/fastchat/conversation.py#LL247C1-L247C1).
HilbertS/rl_course_vizdoom_health_gathering_supreme
HilbertS
2023-07-06T09:16:47Z
2
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-04T15:06:01Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.39 +/- 5.13 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r HilbertS/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Sekiraw/space_invaders
Sekiraw
2023-07-06T09:16:19Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-05T12:58:30Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 251.50 +/- 28.46 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sekiraw -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sekiraw -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sekiraw ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 200000), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Abzu/mpt-30b-instruct-q8
Abzu
2023-07-06T09:11:11Z
20
5
transformers
[ "transformers", "safetensors", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "custom_code", "arxiv:2205.14135", "arxiv:2108.12409", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2023-06-30T07:59:31Z
--- license: cc-by-sa-3.0 datasets: - competition_math - conceptofmind/cot_submix_original/cot_gsm8k - knkarthick/dialogsum - mosaicml/dolly_hhrlhf - duorc - tau/scrolls/qasper - emozilla/quality - scrolls/summ_screen_fd - spider tags: - Composer - MosaicML - llm-foundry inference: false --- # MosaicML's MPT-30B-Instruct 8-bit These files are .safetensors format model files for [MosaicML's MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct). ## How to convert ```python # Load the model name = 'mosaicml/mpt-30b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention config.init_device = 'cuda:0' # For fast initialization directly on GPU! start_time = time.time() model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True, load_in_8bit=True ) # Filter the non-tensor items def filter_dict(dictionary): filtered_dict = {key: value for key, value in dictionary.items() if "weight_format" not in key} return filtered_dict new_state_dict = filter_dict(model.state_dict()) # Save the 8-bit model model.save_pretrained('mpt-30b-instruct-8bits', state_dict=new_state_dict, safe_serialization=True) ``` ## How to use ```python # Load the model model = transformers.AutoModelForCausalLM.from_pretrained( 'mpt-30b-instruct-8bits', trust_remote_code=True, ) ``` ## Prompt template ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction {prompt} ### Response ``` # MPT-30B-Instruct MPT-30B-Instruct is a model for short-form instruction following. It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider). * License: _CC-By-SA-3.0_ This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture. ## Model Date June 22, 2023 ## Model License CC-By-SA-3.0 ## Documentation * [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ### Example Question/Instruction **Bespokenizer46** > I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform. > Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important. > End the email with a friendly inquiry about Phyllis's family. 
**MPT-30B-Instruct**: > Phyllis - > I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in. > LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy. > They also provide tools to easily connect to and use the model in your daily workflow. > I think you'd really enjoy speaking with their founder, we can set up a call if you're interested. > Also, I know it's been a tough year for your family, how are things? > Best, > Your Friend ## How to Use Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package. It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-30b-instruct', trust_remote_code=True ) ``` To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-30b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-30b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b') ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). 
```python from transformers import pipeline with torch.autocast('cuda', dtype=torch.bfloat16): inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda') outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # or using the HF pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ``` ### Formatting This model was trained on data formatted as follows: ```python def format_prompt(instruction): template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n" return template.format(instruction=instruction) example = "Tell me a funny joke.\nDon't make it too funny though." fmt_ex = format_prompt(instruction=example) ``` In the above example, `fmt_ex` is ready to be tokenized and sent through the model. ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 29.95B | |n_layers | 48 | | n_heads | 64 | | d_model | 7168 | | vocab size | 50432 | | sequence length | 8192 | ## Data Mix The model was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | |-------------|----------------------------|------------| | competition_math | 1.6 M | 3.66% | | cot_gsm8k | 3.36 M | 7.67% | | dialogsum | 0.1 M | 0.23% | | dolly_hhrlhf | 5.89 M | 13.43% | | duorc | 7.8 M | 17.80% | | qasper | 8.72 M | 19.90% | | quality | 11.29 M | 25.78% | | scrolls/summ_screen_fd | 4.97 M | 11.33% | | spider | 0.089 M | 0.20% | ## PreTraining Data For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b). The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ### Training Configuration This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-30B-Instruct was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. 
## Acknowledgements This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-30B: Raising the bar for open-source foundation models}, year = {2023}, url = {www.mosaicml.com/blog/mpt-30b}, note = {Accessed: 2023-06-22}, urldate = {2023-06-22} } ```
squeeze-ai-lab/sq-opt-6.7b-w3-s0
squeeze-ai-lab
2023-07-06T09:09:02Z
0
0
null
[ "arxiv:2306.07629", "arxiv:2205.01068", "region:us" ]
null
2023-07-06T08:28:30Z
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving. **TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced-precision quantization, but a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method. Dense-and-Sparse splits weight matrices into two components: a dense component that can be heavily quantized without affecting model performance, and a sparse part that preserves sensitive and outlier parts of the weight matrices. With this approach, we are able to serve larger models with a smaller memory footprint, the same latency, and yet higher accuracy and quality. For more details, please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf). ## Model description 3-bit quantized OPT 6.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf). * **Base Model:** [OPT 6.7B](https://arxiv.org/abs/2205.01068) * **Bitwidth:** 3-bit * **Sparsity Level:** 0% (dense-only) ## Links * **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf) * **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM) --- license: other ---
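As a rough, self-contained illustration of the Dense-and-Sparse idea described above (this is not the SqueezeLLM implementation; the outlier fraction and the way the dense part would later be quantized are assumptions made for the sketch):

```python
import torch

def dense_and_sparse_split(weight: torch.Tensor, outlier_frac: float = 0.0045):
    """Split a weight matrix into a dense part (to be heavily quantized)
    and a sparse part keeping the largest-magnitude outliers in full
    precision. Simplified sketch, not the actual SqueezeLLM code."""
    k = max(1, int(weight.numel() * outlier_frac))
    threshold = weight.abs().flatten().topk(k).values.min()
    outlier_mask = weight.abs() >= threshold
    sparse_part = (weight * outlier_mask).to_sparse()  # sensitive/outlier weights
    dense_part = weight * ~outlier_mask                # quantize this part (e.g. 3-bit)
    return dense_part, sparse_part

W = torch.randn(1024, 1024)
dense, sparse = dense_and_sparse_split(W)
print(torch.allclose(dense + sparse.to_dense(), W))  # True: the split is lossless before quantization
```

(For this particular checkpoint the sparsity level is 0%, i.e. the dense-only variant of the method.)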
viceisi/identify-my-cat
viceisi
2023-07-06T08:54:29Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-06-28T15:18:19Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
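Since the usage section above is still a placeholder, here is a minimal inference sketch, assuming the learner was exported and pushed with the standard fastai/Hub integration; the image path is a placeholder:

```python
from huggingface_hub import from_pretrained_fastai

# Load the exported fastai learner directly from the Hub
learn = from_pretrained_fastai("viceisi/identify-my-cat")

# Placeholder image path -- replace with a real photo
pred_class, pred_idx, probs = learn.predict("some_cat_photo.jpg")
print(pred_class, float(probs[pred_idx]))
```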
aronmal/Reinforce-PixelCopterMLP
aronmal
2023-07-06T08:42:01Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T08:41:58Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopterMLP results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 18.60 +/- 14.97 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
m-aliabbas1/q-FrozenLake-v1-4x4-noSlippery
m-aliabbas1
2023-07-06T08:31:44Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T08:31:42Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="m-aliabbas1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
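For completeness, a short evaluation sketch with a greedy policy over the loaded Q-table. It assumes, as in the course notebooks, that `model` is a dict holding the table under the `"qtable"` key and that a Gymnasium-style 5-tuple `step` API is available; adjust if your setup differs:

```python
import gymnasium as gym
import numpy as np

# `model` comes from the loading snippet above
env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

returns = []
for _ in range(100):
    state, _ = env.reset()
    done, total = False, 0.0
    while not done:
        action = int(np.argmax(qtable[state]))  # greedy action
        state, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    returns.append(total)

print(f"mean reward: {np.mean(returns):.2f} +/- {np.std(returns):.2f}")
```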
fadliaulawi/dummy-model
fadliaulawi
2023-07-06T08:25:22Z
59
0
transformers
[ "transformers", "tf", "camembert", "fill-mask", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T07:56:53Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-model This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
zhdhfhd33/segformer-b0-scene-parse-150-minkun
zhdhfhd33
2023-07-06T08:16:44Z
31
0
transformers
[ "transformers", "pytorch", "segformer", "generated_from_trainer", "dataset:scene_parse_150", "license:other", "endpoints_compatible", "region:us" ]
null
2023-07-06T08:16:15Z
--- license: other tags: - generated_from_trainer datasets: - scene_parse_150 model-index: - name: segformer-b0-scene-parse-150-minkun results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-scene-parse-150-minkun This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Ainuraa/ppo-LunarLander-v2
Ainuraa
2023-07-06T08:16:27Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T08:15:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.64 +/- 27.44 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
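As a placeholder for the TODO above, a minimal loading-and-evaluation sketch. The checkpoint filename `ppo-LunarLander-v2.zip` is an assumption based on the usual `<algo>-<env>.zip` naming convention, so check the repository's file list if it differs:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed -- verify against the files in the repo
checkpoint = load_from_hub("Ainuraa/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```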
CICLAB-Comillas/BARTSumpson
CICLAB-Comillas
2023-07-06T08:12:24Z
106
1
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-05T08:49:23Z
--- license: mit language: - es ---
hafezd-datasaur/autotrain-7206-17163967-caa2-4b1a-89b2-62f2ce6b22b8-72580138964
hafezd-datasaur
2023-07-06T07:56:16Z
107
0
transformers
[ "transformers", "pytorch", "safetensors", "deberta", "text-classification", "autotrain", "en", "dataset:hafezd-datasaur/autotrain-data-7206-17163967-caa2-4b1a-89b2-62f2ce6b22b8", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-06T07:55:35Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain" datasets: - hafezd-datasaur/autotrain-data-7206-17163967-caa2-4b1a-89b2-62f2ce6b22b8 co2_eq_emissions: emissions: 0.022444083442419303 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 72580138964 - CO2 Emissions (in grams): 0.0224 ## Validation Metrics - Loss: 1.573 - Accuracy: 0.318 - Macro F1: 0.097 - Micro F1: 0.318 - Weighted F1: 0.154 - Macro Precision: 0.064 - Micro Precision: 0.318 - Weighted Precision: 0.101 - Macro Recall: 0.200 - Micro Recall: 0.318 - Weighted Recall: 0.318 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/hafezd-datasaur/autotrain-7206-17163967-caa2-4b1a-89b2-62f2ce6b22b8-72580138964 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("hafezd-datasaur/autotrain-7206-17163967-caa2-4b1a-89b2-62f2ce6b22b8-72580138964", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("hafezd-datasaur/autotrain-7206-17163967-caa2-4b1a-89b2-62f2ce6b22b8-72580138964", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
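To turn the raw `outputs` above into a predicted label, a small follow-up step (it assumes the fine-tuned config ships an `id2label` mapping, which AutoTrain classifiers normally include):

```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```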
xian79/Reinforce-CartPole-v1
xian79
2023-07-06T07:51:38Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T07:51:27Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Technotech/RedPajama-Base-3B-4bit-128g
Technotech
2023-07-06T07:49:49Z
5
0
transformers
[ "transformers", "gpt_neox", "text-generation", "gptq", "en", "dataset:togethercomputer/RedPajama-Data-1T", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-12T09:18:42Z
--- license: apache-2.0 language: - en datasets: - togethercomputer/RedPajama-Data-1T tags: - gptq --- ## RedPajama-Base-3B-4bit-128g RedPajama 3B, quantised to 4bit with groupsize of 128, no act order. # Original Model Card # RedPajama-INCITE-Base-3B-v1 RedPajama-INCITE-Base-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION. The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and INCITE program. - Base Model: [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) - Instruction-tuned Version: [RedPajama-INCITE-Instruct-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1) - Chat Version: [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) ## Model Details - **Developed by**: Together Computer. - **Model type**: Language Model - **Language(s)**: English - **License**: Apache 2.0 - **Model Description**: A 2.8B parameter pretrained language model. # Quick Start Please note that the model requires `transformers` version >= 4.25.1. ## GPU Inference This requires a GPU with 8GB memory. ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.float16) model = model.to('cuda:0') # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True, ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ a name that has been synonymous with the computer age since the 1950s. The British mathematician, logician, and cryptanalyst is widely regarded as the father of modern computing. His contributions to the development of the modern computer and the theory of computation have had a profound impact on the world we live in today. Turing’s contributions to the development of the modern computer were made in the 1940s and 1950s. He is most famous for his work on the Turing machine, a theoretical model of a computing machine that was able to perform all the mathematical operations of a computer. Turing’s work on the... """ ``` ## GPU Inference in Int8 To run inference with int8, please ensure you have installed accelerate and bitandbytes. 
You can install them with the following command: ```bash pip install accelerate pip install bitsandbytes ``` Then you can run inference with int8 as follows: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True) # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ the man who cracked the Enigma code during World War II, and who was later convicted of homosexual acts. He was a brilliant mathematician, and a visionary who foresaw the computer age.... """ ``` ## CPU Inference You can run inference on CPU as follows: ```python import torch import transformers from transformers import AutoTokenizer, AutoModelForCausalLM MIN_TRANSFORMERS_VERSION = '4.25.1' # check transformers version assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.' # init tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1") model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Base-3B-v1", torch_dtype=torch.bfloat16) # infer prompt = "Alan Turing is" inputs = tokenizer(prompt, return_tensors='pt').to(model.device) input_length = inputs.input_ids.shape[1] outputs = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True ) token = outputs.sequences[0, input_length:] output_str = tokenizer.decode(token) print(output_str) """ a name that is synonymous with the history of computer science. As the man who invented the Turing machine, the mathematical model that defines the limits of what can be computed, Turing is credited with the invention of the modern computer. Turing was also a mathematician and logician, and his work in these fields led to the development of the field of artificial intelligence... """ ``` Please note that since `LayerNormKernelImpl` is not implemented in fp16 for CPU, we use `bfloat16` for CPU inference. # Uses Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner. #### Out-of-Scope Use `RedPajama-INCITE-Base-3B-v1` is a language model and may not perform well for other use cases outside of its intended scope. For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society. It is important to consider the limitations of the model and to only use it for its intended purpose. #### Misuse and Malicious Use `RedPajama-INCITE-Base-3B-v1` is designed for language modeling. 
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project. Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating fake news, misinformation, or propaganda - Promoting hate speech, discrimination, or violence against individuals or groups - Impersonating individuals or organizations without their consent - Engaging in cyberbullying or harassment - Defamatory content - Spamming or scamming - Sharing confidential or sensitive information without proper authorization - Violating the terms of use of the model or the data used to train it - Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming ## Limitations `RedPajama-INCITE-Base-3B-v1`, like other language models, has limitations that should be taken into consideration. For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data. We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot. ## Training **Training Data** Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) **Training Procedure** - **Hardware:** 256 nodes of 6xV100 (IBM Power9), on the OLCF Summit cluster - **Optimizer:** Apex FusedAdam - **Parallelism:** Pipeline parallel 6, tensor parallel 2 - **Gradient Accumulations**: 8 (global batch size 4M tokens) - **Num of Tokens:** 800B Tokens - **Learning rate:** 0.00016 ## Benchmark Please refer to our [blog post](https://together.xyz) for benchmark results. ## Community Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
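The original card above only covers the full-precision and int8 paths; for the 4-bit GPTQ files in this repository, a loading sketch with AutoGPTQ might look like the following. This is an untested assumption: argument values (and whether a `model_basename` must be passed) depend on how the checkpoint was exported, so adjust to the actual files in the repo.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "Technotech/RedPajama-Base-3B-4bit-128g"
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# device and use_safetensors are assumptions -- adapt to your setup
model = AutoGPTQForCausalLM.from_quantized(repo_id, device="cuda:0", use_safetensors=True)

inputs = tokenizer("Alan Turing is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```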
zhundred/ppo-Huggy
zhundred
2023-07-06T07:46:14Z
33
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-06T07:42:52Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: zhundred/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
liyingjian/ppo-LunarLander-v2
liyingjian
2023-07-06T07:38:40Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T06:36:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.29 +/- 21.11 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Vtmpas/ppo-LunarLander-v2
Vtmpas
2023-07-06T07:36:16Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T07:35:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 240.43 +/- 16.07 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Word2vec/nlpl_222
Word2vec
2023-07-06T07:31:04Z
0
0
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_November_2021", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T08:01:35Z
--- language: eng license: cc-by-4.0 tags: - word2vec datasets: English_Wikipedia_Dump_of_November_2021 --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 199807 corresponding to 2717675616 tokens from the dataset `English_Wikipedia_Dump_of_November_2021`. The model was trained with the following properties: no lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_222", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/222.zip
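A small follow-up to the loading snippet above. NLPL models that keep POS tags usually store entries like `house_NOUN`, so it is safest to inspect the vocabulary first rather than assume a particular token format:

```python
# Continues from the loading snippet above
print(model.index_to_key[:10])           # see how vocabulary entries are formatted
word = model.index_to_key[100]           # pick an entry that is guaranteed to exist
print(model.most_similar(word, topn=5))  # nearest neighbours by cosine similarity
```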
Word2vec/nlpl_207
Word2vec
2023-07-06T07:30:10Z
0
0
null
[ "word2vec", "pol", "dataset:Polish_CommonCrawl_Dump_of_December_2019", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T09:08:03Z
--- language: pol license: cc-by-4.0 tags: - word2vec datasets: Polish_CommonCrawl_Dump_of_December_2019 --- ## Information A word2vec model trained by Krzysztof Wolk ([email protected]) on a vocabulary of size 35193029 corresponding to 32565035188 tokens from the dataset `Polish_CommonCrawl_Dump_of_December_2019`. The model was trained with the following properties: no lemmatization and postag, using the Gensim Continuous Bag-of-Words algorithm with a window of 5 and a dimension of 100. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_207", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/207.zip
Word2vec/nlpl_204
Word2vec
2023-07-06T07:29:15Z
0
0
null
[ "word2vec", "rus", "dataset:Russian_National_Corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:56:31Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Russian_National_Corpus --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 998459 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`. The model was trained with the following properties: lemmatization and postag, using the Gensim Continuous Bag-of-Words algorithm with a window of 2 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_204", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/204.zip
NTQAI/pedestrian_age_recognition
NTQAI
2023-07-06T07:28:59Z
110,387
3
transformers
[ "transformers", "pytorch", "safetensors", "beit", "image-classification", "vision", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-09T03:36:33Z
--- license: apache-2.0 tags: - image-classification - vision - generated_from_trainer metrics: - accuracy model-index: - name: pedestrian_age_recognition_local results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8073394495412844 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pedestrian_age_recognition_local This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5004 - Accuracy: 0.8073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.8849 | 1.0 | 2008 | 0.7939 | 0.6807 | | 0.9836 | 2.0 | 4016 | 0.6694 | 0.7336 | | 0.8128 | 3.0 | 6024 | 0.5768 | 0.7668 | | 0.7611 | 4.0 | 8032 | 0.5541 | 0.7833 | | 0.6441 | 5.0 | 10040 | 0.5473 | 0.7773 | | 0.5696 | 6.0 | 12048 | 0.5187 | 0.7971 | | 0.6925 | 7.0 | 14056 | 0.5082 | 0.8038 | | 0.5711 | 8.0 | 16064 | 0.5092 | 0.8098 | | 0.7741 | 9.0 | 18072 | 0.5026 | 0.8020 | | 0.5269 | 10.0 | 20080 | 0.5004 | 0.8073 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1 ### Contact information For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
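Since the card has no usage example, a minimal inference sketch (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="NTQAI/pedestrian_age_recognition")

# Placeholder path -- point this at a cropped pedestrian image
predictions = classifier("pedestrian_crop.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```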
Word2vec/nlpl_200
Word2vec
2023-07-06T07:28:57Z
0
0
null
[ "word2vec", "eng", "dataset:English_Wikipedia_Dump_of_October_2019", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:56:11Z
--- language: eng license: cc-by-4.0 tags: - word2vec datasets: English_Wikipedia_Dump_of_October_2019 --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 249212 corresponding to 3530685741 tokens from the dataset `English_Wikipedia_Dump_of_October_2019`. The model was trained with the following properties: lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 3 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_200", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/200.zip
Word2vec/nlpl_186
Word2vec
2023-07-06T07:28:40Z
0
0
null
[ "word2vec", "rus", "dataset:Taiga_corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:55:53Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Taiga_corpus --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 249946 corresponding to 4867000000 tokens from the dataset `Taiga_corpus`. The model was trained with the following properties: lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 5 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_186", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/186.zip
Word2vec/nlpl_185
Word2vec
2023-07-06T07:28:21Z
0
0
null
[ "word2vec", "rus", "dataset:Taiga_corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:55:28Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Taiga_corpus --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 249565 corresponding to 4867000000 tokens from the dataset `Taiga_corpus`. The model was trained with the following properties: lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_185", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/185.zip
Word2vec/nlpl_182
Word2vec
2023-07-06T07:27:18Z
0
0
null
[ "word2vec", "rus", "dataset:Russian_National_Corpus", "license:cc-by-4.0", "region:us" ]
null
2023-07-05T07:54:36Z
--- language: rus license: cc-by-4.0 tags: - word2vec datasets: Russian_National_Corpus --- ## Information A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 248978 corresponding to 270000000 tokens from the dataset `Russian_National_Corpus`. The model was trained with the following properties: lemmatization and postag, using the Gensim Continuous Skipgram algorithm with a window of 2 and a dimension of 300. ## How to use? ``` from gensim.models import KeyedVectors from huggingface_hub import hf_hub_download model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/nlpl_182", filename="model.bin"), binary=True, unicode_errors="ignore") ``` ## Citation Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources, In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7 This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019. Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information. The life-time identifier for this model is: http://vectors.nlpl.eu/repository/20/182.zip
Bugsys0302/beltbr
Bugsys0302
2023-07-06T06:59:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T06:57:43Z
--- license: creativeml-openrail-m ---
afaan00733/my_awesome_model
afaan00733
2023-07-06T06:56:30Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-04T21:18:08Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6546 - Accuracy: 0.4737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 2 | 0.6732 | 0.4737 | | No log | 2.0 | 4 | 0.6546 | 0.4737 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
rohanbalkondekar/spicy-caiman
rohanbalkondekar
2023-07-06T06:55:23Z
10
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T06:48:59Z
--- language: - en library_name: transformers tags: - gpt - llm - large language model - h2o-llmstudio inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico --- # Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. ```bash pip install transformers==4.30.1 pip install accelerate==0.20.3 pip install torch==2.0.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline( model="BeRohan/spicy-caiman", torch_dtype="auto", trust_remote_code=True, use_fast=True, device_map={"": "cuda:0"}, ) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer: ```python print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"]) ``` ```bash <|prompt|>Why is drinking water so healthy?</s><|answer|> ``` Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`. ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "BeRohan/spicy-caiman", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "BeRohan/spicy-caiman", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text( "Why is drinking water so healthy?", min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "BeRohan/spicy-caiman" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. 
prompt = "<|prompt|>How are you?</s><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs tokens = model.generate( **inputs, min_new_tokens=2, max_new_tokens=256, do_sample=False, num_beams=1, temperature=float(0.3), repetition_penalty=float(1.2), renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096, padding_idx=0) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Model Validation Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). ```bash CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BeRohan/spicy-caiman --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. 
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
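If the full-precision weights do not fit in GPU memory, 8-bit loading via `bitsandbytes` is usually possible with this `transformers` version. The following is a minimal sketch under that assumption; 8-bit behavior has not been verified for this particular checkpoint.

```python
# Hedged sketch: load the checkpoint in 8-bit to reduce GPU memory usage.
# Assumes `pip install bitsandbytes` on top of the packages listed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "BeRohan/spicy-caiman", use_fast=True, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "BeRohan/spicy-caiman",
    load_in_8bit=True,   # requires bitsandbytes
    device_map="auto",
    trust_remote_code=True,
)
```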
IliyanGochev/whisper-small-bg
IliyanGochev
2023-07-06T06:50:12Z
18
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "bg", "dataset:mozilla-foundation/common_voice_13_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-05T08:04:03Z
---
language:
- bg
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: whisper-small-bg
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_13_0 bg
      type: mozilla-foundation/common_voice_13_0
      config: bg
      split: test
      args: bg
    metrics:
    - name: Wer
      type: wer
      value: 44.67291341315287
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-small-bg

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_13_0 bg dataset.
It achieves the following results on the evaluation set:
- Loss: 9.0612
- Wer: 44.6729

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.9319        | 6.76  | 1000 | 10.0774         | 73.9892 |
| 2.6116        | 13.51 | 2000 | 11.4089         | 67.0484 |
| 0.9607        | 20.27 | 3000 | 11.8266         | 60.9448 |
| 0.3464        | 27.03 | 4000 | 9.9500          | 52.1213 |
| 0.0122        | 33.78 | 5000 | 9.0612          | 44.6729 |

### Framework versions

- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
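Since the usage sections above are still empty, here is a minimal, hedged inference sketch using the standard `transformers` ASR pipeline. The file name `sample_bg.wav` is a hypothetical 16 kHz mono recording, and the `language`/`task` hints assume a recent `transformers` release where Whisper's `generate` accepts them.

```python
# Minimal sketch: transcribe Bulgarian speech with the fine-tuned checkpoint.
# "sample_bg.wav" is a hypothetical local 16 kHz mono file.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="IliyanGochev/whisper-small-bg",
    device=0 if torch.cuda.is_available() else -1,
)
result = asr(
    "sample_bg.wav",
    generate_kwargs={"language": "bulgarian", "task": "transcribe"},
)
print(result["text"])
```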
Bugsys0302/fmmstrb
Bugsys0302
2023-07-06T06:46:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T06:40:45Z
--- license: creativeml-openrail-m ---
JennnDexter/pokemon-lora
JennnDexter
2023-07-06T06:44:42Z
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-12T06:24:16Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - JennnDexter/pokemon-lora

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
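For readers who want to try the adapter, the sketch below loads the LoRA weights on top of the base pipeline with `diffusers`. It assumes the weights are stored in the standard diffusers LoRA attention-processor format produced by the text-to-image LoRA training script; the prompt is only an illustrative example.

```python
# Minimal sketch: apply the LoRA weights to the base Stable Diffusion pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("JennnDexter/pokemon-lora")  # load the LoRA attention weights

image = pipe("a cute green pokemon with big eyes", num_inference_steps=30).images[0]
image.save("pokemon_sample.png")
```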
NasimB/gpt2-concat-aochildes-16plus6k
NasimB
2023-07-06T06:39:38Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T04:47:18Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-16plus6k
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gpt2-concat-aochildes-16plus6k

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1978

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7265        | 0.3   | 500   | 5.6481          |
| 5.3801        | 0.59  | 1000  | 5.2065          |
| 5.0346        | 0.89  | 1500  | 4.9518          |
| 4.7589        | 1.19  | 2000  | 4.8123          |
| 4.6003        | 1.48  | 2500  | 4.6915          |
| 4.4941        | 1.78  | 3000  | 4.5806          |
| 4.3447        | 2.07  | 3500  | 4.5155          |
| 4.1761        | 2.37  | 4000  | 4.4640          |
| 4.1351        | 2.67  | 4500  | 4.4014          |
| 4.1043        | 2.96  | 5000  | 4.3576          |
| 3.8639        | 3.26  | 5500  | 4.3597          |
| 3.8432        | 3.56  | 6000  | 4.3266          |
| 3.8118        | 3.85  | 6500  | 4.2913          |
| 3.6736        | 4.15  | 7000  | 4.2957          |
| 3.5472        | 4.45  | 7500  | 4.2920          |
| 3.5398        | 4.74  | 8000  | 4.2794          |
| 3.507         | 5.04  | 8500  | 4.2806          |
| 3.3499        | 5.33  | 9000  | 4.2855          |
| 3.3504        | 5.63  | 9500  | 4.2851          |
| 3.3498        | 5.93  | 10000 | 4.2849          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
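As the usage sections above are empty, the following is a minimal, hedged sketch of sampling from this checkpoint with the `transformers` text-generation pipeline; the prompt is only an illustrative example and the sampling settings are arbitrary defaults, not values from the training run.

```python
# Minimal sketch: generate text from the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-aochildes-16plus6k")
out = generator(
    "Once upon a time",   # illustrative prompt
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
)
print(out[0]["generated_text"])
```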
cherrue/RandomCrop_Rescale_epoch_3_learning_rate_5e_5_decay_0_01
cherrue
2023-07-06T06:30:06Z
63
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T05:35:06Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cherrue/pricetag_classifier
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# cherrue/pricetag_classifier

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0546
- Validation Loss: 1.2226
- Train Accuracy: 0.3846
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1251, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3379     | 1.2276          | 0.5128         | 0     |
| 1.1973     | 1.1561          | 0.4615         | 1     |
| 1.0546     | 1.2226          | 0.3846         | 2     |

### Framework versions

- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
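Because the card does not show how to run the classifier, here is a minimal, hedged TensorFlow inference sketch. The image path is hypothetical, the label names come from whatever `id2label` mapping the checkpoint ships, and it assumes the repo includes an image-processor config; if it does not, the base `google/vit-base-patch16-224-in21k` processor could be used instead.

```python
# Minimal sketch: classify a single image with the fine-tuned TF ViT checkpoint.
# "pricetag.jpg" is a hypothetical local file.
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

model_id = "cherrue/RandomCrop_Rescale_epoch_3_learning_rate_5e_5_decay_0_01"
processor = AutoImageProcessor.from_pretrained(model_id)
model = TFAutoModelForImageClassification.from_pretrained(model_id)

inputs = processor(images=Image.open("pricetag.jpg"), return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.math.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```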
hchung1017/aihub_012_streaming_conformer
hchung1017
2023-07-06T06:22:30Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "ko", "dataset:aihub_012", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2023-07-06T06:22:07Z
--- tags: - espnet - audio - automatic-speech-recognition language: ko datasets: - aihub_012 license: cc-by-4.0 --- ## ESPnet2 ASR model ### `hchung1017/aihub_012_streaming_conformer` This model was trained by hchung1017 using aihub_012 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout f4d7fead71e2a99541a8d3d66d6e00a33d9e82df pip install -e . cd egs2/aihub_012/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model hchung1017/aihub_012_streaming_conformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Jul 5 15:19:05 KST 2023` - python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]` - espnet version: `espnet 202304` - pytorch version: `pytorch 1.13.1` - Git hash: `f4d7fead71e2a99541a8d3d66d6e00a33d9e82df` - Commit date: `Wed May 24 14:58:35 2023 -0400` ## exp/asr_train_asr_streaming_conformer_raw_ko_bpe5000_sp/decode_asr_streaming_asr_model_valid.acc.ave ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |org/dev|797676|3794053|89.7|9.1|1.2|1.4|11.8|28.9| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |org/dev|797676|17636048|94.8|3.0|2.2|1.6|6.8|28.9| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |org/dev|797676|4325914|88.1|8.2|3.7|1.5|13.4|28.9| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr_streaming_conformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_streaming_conformer_raw_ko_bpe5000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 0 dist_backend: nccl dist_init_method: env:// dist_world_size: 8 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 51405 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - acc early_stopping_criterion: - valid - cer_ctc - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 25000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_ko_bpe5000_sp/train/speech_shape - exp/asr_stats_raw_ko_bpe5000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_ko_bpe5000_sp/valid/speech_shape - exp/asr_stats_raw_ko_bpe5000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 51200 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 chunk_excluded_key_prefixes: [] train_data_path_and_name_and_type: - - 
/data/dump/aihub_012/raw/train_sp/wav.scp - speech - sound - - /data/dump/aihub_012/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - /data/dump/aihub_012/raw/dev/wav.scp - speech - sound - - /data/dump/aihub_012/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.003 scheduler: warmuplr scheduler_conf: warmup_steps: 30000 token_list: - <blank> - <unk> - ▁I - ▁YOU - '''' - S - ▁WHAT - ▁A - ▁IT - ▁TO - ▁IS - ▁THE - ▁ARE - ▁CAN - ▁OKAY - ▁YES - ▁DO - ▁THAT - ▁SEE - T - ▁HE - ▁HOW - ▁ME - ▁HAVE - ▁MY - ▁GOOD - ▁REALLY - ▁SO - ▁FOR - ▁AM - ▁SURE - ▁OH - ▁GO - ▁WHY - ▁NO - ▁YOUR - ▁RIGHT - ▁HELP - ’ - ▁DON - ▁NOT - ▁HI - ▁HERE - ▁DID - ▁LIKE - ▁AND - ▁TOO - ▁SHE - ▁THIS - ▁HELLO - M - ▁KNOW - ▁WANT - RE - ▁NEED - ▁WILL - ▁ABOUT - ▁THERE - ▁LET - ▁OF - ▁IN - ▁BE - ▁BUT - ▁THINK - ▁SOMETHING - ▁LOOK - ▁NOW - ▁NICE - ▁THEN - ▁ - ▁WE - ▁GREAT - ▁THANK - ▁WITH - ▁TELL - ▁PROBLEM - ▁HER - ▁GOING - ▁WAS - ▁DOING - ▁ASK - ▁THANKS - ▁HEY - ▁BACK - ▁WRONG - ▁THEY - ▁ON - ▁HIM - ▁UP - ▁AT - LL - ▁WELL - ▁GET - ▁WHERE - VERY - ▁SOME - ▁PEOPLE - ▁ALL - ▁MEAN - ▁PLEASE - ▁TIME - ▁WHO - ▁GOT - ▁WELCOME - ▁MAKE - ▁COME - ▁MEET - ▁NEW - ▁LOT - ▁MOM - ▁SAID - ▁SHOULD - ▁HAPPY - ▁HIS - ▁BUSY - ▁BYE - ▁QUESTION - ▁SAY - ▁TAKE - ▁MORE - ▁SORRY - ▁IDEA - ▁OUT - ▁FINE - ▁PLAY - ▁ANY - ▁AGAIN - ▁BECAUSE - ▁FROM - ▁AN - ▁WHEN - ▁TRY - ▁HAS - ▁TODAY - ▁READY - ▁HOPE - ▁GIVE - ▁BIG - ▁FRIEND - ▁WRITE - ▁EAT - ▁ONE - ▁BAD - ▁MUCH - ▁SOON - ▁MANY - ED - ▁THEM - ▁ANGRY - ▁LATER - ING - ▁MAYBE - ▁DAD - ▁FIND - ▁DOWN - ▁WORRY - ▁SHOW - ▁COURSE - ▁DAY - ▁SOUNDS - ▁DOES - ▁STRANGE - ▁TALK - ▁FUN - ▁REMEMBER - ▁ANYTHING - ▁BUY - ▁LETTER - ▁JUST - ▁MADE - ▁READ - ▁CANNOT - ▁WANTS - ▁WOW - ▁DIDN - ▁IF - ▁GLAD - ▁WAY - ▁MUST - ▁SCHOOL - ▁BOOK - ▁LOOKING - ▁TOLD - ▁NAME - ▁HEAR - ▁TOY - ▁TRUE - ▁TEACHER - ▁US - ▁WORK - ▁TWO - ▁SONG - ▁HARD - ▁LOVE - ▁THINGS - ▁SING - ▁BETTER - ▁HOME - ▁LINKER - ▁UNDERSTAND - ▁LOOKS - ▁KIND - ▁HOUSE - LUE - ▁DRESS - ▁BY - ▁BEST - ▁LONG - ▁NEWS - ▁WENT - ▁HAPPENED - ▁OLD - ▁KEEP - ▁NEXT - ▁CHECK - D - ▁SPECIAL - ▁USE - ▁LIKES - ▁EVERYTHING - ▁FEEL - ▁ROBOT - ▁SAD - ▁PLEASURE - ▁JOE - ▁COOL - ▁TOMORROW - ▁LUCK - ▁DOESN - ▁BOX - ▁AROUND - ▁HOMEWORK - ▁ALWAYS - ▁MORGAN - ▁PUT - ▁THESE - ▁GAVE - ▁HEARD - ▁WAIT - ▁PRESENT - ▁SOMEONE - ▁PARTY - ▁BIRTHDAY - ▁RANDY - ▁FRIENDS - ▁MONEY - ▁DONE - ▁CAR - ▁COFFEE - ▁MUSIC - ▁BEN - ▁BEEN - ▁STILL - ▁GREEN - ▁STAR - ▁PERSON - ▁WERE - ▁STORY - ▁ELSE - ▁IDEAS - ▁TOGETHER - ▁MILK - ▁WOULD - ▁SOUND - ▁THAN - ▁TALKED - ▁EVERY - ▁NEEDS - ▁SAW - ▁HAIR - ▁CHANGE - ▁WORRIED - ▁EASY - ▁FOOD - ▁DOG - VE - ▁CONCERT - ▁MAKING - ▁MONSTER - ▁BOY - ▁PHOTO - ▁SCARY - ▁RED - ▁BROTHER - ▁FIRST - ▁DANCE - ▁BEFORE - ▁PRETTY - ▁DRINK - ▁WISH - ▁HARRY - ▁CALM - ▁CAT - ▁WEAR - ▁BLUE - ▁MESSAGE - ▁TRUST - ▁ONLY - ▁HAD - ▁THREE - ▁AWAY - ▁MIND - ▁MAKES - ▁GRANDMOTHER - ▁WATCH - ▁EMMA - ▁AMY - ▁TIRED - ▁CLASS - ▁MAN - ▁DAN - ▁COULD - ▁BRING - ▁SMALL - ▁ANYWAY - ▁OUR - ▁ROOM - ▁AFTER - ▁BELIEVE - ▁BOOKS - ▁TEN - ▁DEVILMON - ▁JOB - ▁OVER - ▁COMING - ▁STOP - ▁FUNNY - ▁DIANA - ▁TOYS - ▁FAST - ▁MORNING - ▁NUMBER - ▁NOTHING - ▁TOWN - ▁OPEN - ▁OTHER - ▁PHONE - ▁CARE - ▁LEAVE - ▁CONTEST - ▁WOODY - ▁THINKING - Y - ▁ANOTHER - A - ▁ENGLISH - ▁SICK - ▁BRAVE - ▁TROY - ▁EATING - ▁SLEEP - ▁THEIR - ▁SELL - ▁DELICIOUS - ▁OFF - ▁WATER - ▁PICTURE - ▁CAME - ▁EVERYONE - ▁PAPER - ▁PARK - ▁PAINT - ▁SHOP - ▁CREAM - ▁TV - ▁BOUGHT - 
▁CAREFUL - ▁ROBBY - ▁FOUND - ▁STONE - ▁SISTER - ▁HURRY - ▁BAG - ▁WAKE - ▁SYRUP - ▁DRAW - ▁ENERGY - ▁SHOES - ▁IMPORTANT - ▁NEVER - ▁LISTEN - ▁WON - ▁DOOR - ▁POP - ▁LAST - ▁DIFFERENT - ▁FISH - ▁SAVE - ▁HEALTHY - ▁UNCLE - ▁NIGHT - UCH - ▁PLACE - ▁DARK - ▁GUESS - ▁LATE - ▁PIE - N - ▁PRACTICE - ▁MONICA - ▁ANYONE - ▁READING - ▁COLOR - ▁SALLY - ▁BLACK - ▁MOVIE - ▁TROUBLE - ▁COLD - ▁STUDY - ▁LITTLE - ▁WHITE - ▁CHEER - ▁SCARED - ▁POSTER - ▁TALKING - ▁TEACH - ▁WALK - ▁CAKE - ▁INTO - ▁FIGHT - ▁ALREADY - ▁SLEEPY - ▁STRONG - ▁OLIVIA - ▁CALL - ▁WROTE - ▁ICE - ▁OR - ▁SCOTT - ▁LIBRARY - ▁NANCY - ▁LUMY - ▁HAT - ▁YET - ▁ALEX - ▁SHORT - ▁CLOTHES - ▁YESTERDAY - ▁FAVORITE - ▁SWEET - ▁FIVE - ▁HOLD - ▁LUNCH - ▁PLAYING - ▁GARY - ▁HANDS - ▁LEFT - ▁ASKED - ▁CHEESE - ▁FACE - ▁BORROW - ▁SPEAK - ▁INTERESTING - ▁MAY - ▁BEAR - ▁SIGN - ▁SHADOW - ▁FLOWERS - ▁PINO - ▁ERIN - ▁FOREST - ▁GAME - ▁MR - ▁WANTED - ▁RUN - ▁SPELL - ▁PEN - ▁SHOPPING - ▁COOK - ▁DAYS - ▁BED - ▁BEAUTIFUL - ▁MUSEUM - ▁CLEAN - ▁REST - ▁SAME - ▁DOCTOR - ▁YOURSELF - ▁DINNER - ▁DANGEROUS - ▁SECRET - ▁STORE - ▁TREE - ▁MIGHT - ▁MAYOR - ▁CHARLIE - ▁PIZZA - ▁FOUR - ▁SIR - ▁SEEN - ▁TURN - ▁ENJOY - ▁CLARA - ▁ANYTIME - ▁LIVE - ▁LOST - ▁SANDRA - ▁DURING - ▁MYSELF - ▁TALL - ▁MINE - ▁CHOOSE - ▁TOOK - ▁WAITING - ▁S - ▁SUNNY - ▁SINGING - ▁ACADEMY - ▁AHEAD - ▁HURT - ▁CLOCK - ▁PAINTING - ▁RAN - ▁ALONE - ▁USED - ▁PLAN - ▁THEATER - ▁HAND - ▁WEEK - ▁CATCH - ▁SEND - ▁CUBE - ▁ERIC - ▁WOOD - ▁HOT - ▁DEVILMONS - ▁FREE - ▁STAY - ▁PROMISE - ▁RULE - ▁HUNGRY - ▁WORKING - ▁HAPPEN - ▁VIKI - ▁FAMILY - ▁CHICKEN - ▁FORGET - ▁YELLOW - ▁BROWN - ▁VACATION - ▁KELLY - ▁JACK - ▁SINGER - ▁HAMMER - ▁SAYS - ▁TRAIN - ▁FIX - ▁CUTE - ▁EVEN - ▁SANTA - ▁SLEEPING - ▁BUS - ▁BARBECUE - ▁AGREE - ▁COULDN - ▁MISS - E - ▁GRACE - ▁TRASH - ▁BABY - ▁LUMA - ▁CHILDREN - ▁EXCUSE - ▁DPOP - ▁OUTSIDE - ▁ORDER - ▁MATTER - ▁RIDE - ▁SUMMER - ▁CLOSE - ▁MOVE - ▁JUICE - ▁TOUCH - ▁CARD - ▁THOSE - ▁HAIRSTYLE - ▁RICH - ▁BREAK - ▁ANYMORE - ▁TRIP - ▁EYES - ▁LEARN - IC - ▁YOUNGER - ▁SMELLS - ▁CHRIS - ▁ITEMS - ▁STONES - ▁CUT - ▁STUDENT - ▁CALLED - ▁SHINE - ▁ATE - ▁PERFECT - ▁BETIA - ▁MOVING - LY - ▁FIRE - ▁D - ▁CHRISTMAS - ▁RUNNING - ▁LINE - ▁JACKET - ▁WHICH - ▁GIFT - ▁SMILE - ▁WEARING - ▁STELLA - ▁SEVEN - ▁ANSWER - ▁YEAR - ▁MOST - ▁WENDY - RA - ▁BALL - ▁THING - ▁FIFTY - ▁YOUNG - ▁FRONT - ▁LIKED - ▁WINDOW - ▁BEING - ▁RICE - ▁HOBBY - ▁BRUCE - ▁ALVIN - ▁CHAIR - ▁ELEVEN - ▁INTERVIEW - ▁TRUMPET - ▁DRAWING - ▁WHILE - ▁HAV - ▁NEWSPAPER - ▁WRITING - ▁FRUIT - ▁BEHIND - ▁EVENT - ▁HAVEN - ▁BELLOW - ▁YEARS - ▁DIV - ▁VICTORIA - ▁SENT - ▁STYLE - ▁LUNA - ▁AUNT - ▁DREAM - ▁PICTURES - ▁LEO - ▁QUESTIONS - ▁PRICE - ▁APPLE - ▁SCHEDULE - ▁TABLE - ▁PLANT - ▁BELL - ▁SUSAN - ▁SHIRT - ▁GRANDFATHER - ▁EXPENSIVE - ▁GUYS - ▁THOUGHT - ▁OSCAR - ▁TIMES - ▁ACTUALLY - ▁CHANCE - ▁PAY - ▁WASH - ▁JUGGLING - ▁JULIA - ▁MAKEUP - ▁PIANO - ▁GOES - ▁QUIZ - ▁OFTEN - ▁THIRTY - ▁SMART - ▁WEEKEND - ▁CHOCOLATE - ▁BATHROOM - ▁CANDY - ▁SPEECH - ▁FEELING - ▁RADIO - ▁HECTOR - ▁KNOWS - ▁GRANDMA - ▁SEEM - ER - ▁START - ▁PENCIL - ▁SUNDAY - ▁WORD - ▁MOUSE - ▁PLAYGROUND - ▁BREAD - ▁MAGIC - ▁CD - ▁BROKEN - ▁COLIN - ▁DIRTY - ▁MOTHER - ▁DESK - ▁BORING - ▁SOUP - ▁ONCE - ▁WORKED - ▁COUNT - ▁EXCITED - ▁PARADE - ▁GUITAR - ▁PM - ▁FINISH - ▁BLOCK - ▁FISHING - ▁VOICE - ▁ROGER - ▁WORKS - ▁PLAYER - ▁GLASSES - ▁LAB - ▁SIGH - ▁LOVES - ▁MODEL - ▁EXERCISE - ▁O - ▁POINT - ▁SWIMMING - ▁MARKET - ▁NOTE - ▁SECOND - ▁LUCKY - ▁BROKE - ▁CAVE - ▁SHALL - ▁KID - ▁HANG - ▁MICHAEL - ▁DANCING - ▁COM - ▁MASK - TING - ▁KYLE - ▁FRIDAY - ▁MELOD - ▁DOUGLAS - ▁ENOUGH - ▁LEARNED - ▁ALICE - ▁NEWSPAPERS - ▁NEAR - ▁GIRL - 
▁LAURA - ▁BANK - ▁ORANGE - ▁HEART - ▁SNACKS - ▁BANANA - ▁AFRAID - ▁NOISE - ▁AARON - ▁SIDE - ▁POSSIBLE - ▁ISN - ▁UPSET - ▁KATHY - ▁ENTER - ▁STATUE - ▁FAVOR - ▁CAPSULE - ▁CLUB - ▁BORED - ▁STREET - ▁FAR - ▁BROUGHT - ▁HENRY - ▁BRIAN - ▁FLOOR - ▁RECORD - ▁SUN - ▁BORN - ▁GONE - ▁ELEPHANT - ▁FATHER - ▁BEAT - ▁MISTAKE - NY - ▁MEGAN - ▁JIN - ▁CARL - ▁FACTORY - ▁HORSE - ▁STANLEY - ▁WIN - ▁AFTERNOON - ▁LIVED - ▁HIGH - ▁LEAVING - ▁MINUTES - ▁WALL - ▁SURPRISE - ▁DAVID - ▁TWENTY - ▁BIRD - ▁NICK - ▁REASON - ▁OWN - ▁STEVE - ▁LADY - ▁COMES - ▁STATION - ▁DOLL - ▁JADE - ▁STAND - ▁FAMOUS - ▁PLAYED - ▁TSHIRT - ▁HUEY - ▁SEA - ▁SIX - ▁REPORT - ▁POPULAR - ▁PICK - ▁TONY - ▁TINA - ▁KIDS - ▁WEATHER - ▁TREES - ▁TIFFANY - ▁WONDERFUL - ▁RING - ▁SOMEWHERE - ▁LIGHT - ▁NOSE - ▁AUDREY - ▁CAMERA - ▁GARDEN - ▁SOCCER - ▁PIG - ▁FRESH - ▁NOBODY - ▁AMANDA - ▁SURPRISED - ▁STOPPED - ▁CITY - ▁KOREAN - ▁HISTORY - ▁STUDENTS - ▁COOKING - L - ▁LOUD - ▁LOSE - ▁PINK - ▁LIE - ▁CRAYONS - ▁HEALTH - ▁HANDWRITING - ▁JOIN - ▁THROW - ▁INFORMATION - ▁DIFFICULT - ▁SOMETIMES - ▁BIKE - ▁WOMAN - ▁FLOWER - ▁WORDS - ▁GHOST - ▁RICKY - R - ▁TEETH - ▁SAYING - ▁PIECE - ▁DR - ▁CHANGED - ▁SIT - ▁ARTICLE - ▁ARM - ▁BECOME - ▁MONKEY - ▁YEAH - ▁JUDY - ▁FOLLOW - ▁ALSO - ▁GAMES - ▁BAND - ▁COMPUTER - ▁ANDRE - ▁EATS - ▁MATH - ▁EXACTLY - ▁ART - ▁JUMP - ▁FOODS - ▁PRESENTS - ▁RABBIT - ▁SMELL - ▁HEAVY - ▁SWIM - ▁RICHARD - ▁GRASS - ▁BOTHER - ▁PANTS - ES - ▁ALMOST - ▁HELPING - ▁ZOO - ▁SHOULDN - ▁FAN - ▁EGGS - ▁ELLA - ▁RESTAURANT - ▁CHIPS - ▁BIGGER - ▁MONDAY - ▁CATS - ▁STUDYING - ▁TONIGHT - ▁BRADY - ▁SERIOUS - ▁FORGOT - ▁VISIT - ▁BUILDING - ▁SET - ▁HANDSOME - ▁CLAUS - ▁RALPH - ▁COMPANY - ▁SEAT - ▁ANDREW - ▁WITHOUT - EN - ▁MEAT - ▁BOARD - ▁CLASSES - ▁FLY - ▁BIT - ▁ANGELA - ▁POLICE - ▁BET - ▁FINISHED - ▁EITHER - ▁SKY - ▁POLIA - ▁EIGHT - ▁AMAZING - ▁INSIDE - ▁SATURDAY - ▁DINOSAUR - ▁DEVERYTHING - ▁BRUSH - ▁VIVIEN - ▁BREAKFAST - ▁QUICKLY - ▁HEAD - ▁CAROL - ▁EACH - ▁BANANAS - ▁JAZZ - ▁OWEN - ▁LEAVES - ▁HELPED - ▁WINTER - ▁REAL - ▁TRUTH - ▁RIVER - ▁ROAD - ▁ANNA - ▁INTERESTED - ▁EVERYBODY - ▁HIMSELF - ▁TAKES - ▁LADDER - ▁BOTH - ▁CLASSROOM - ▁STUDIED - ▁HALL - MAS - ▁STARTED - ▁THO - ▁REFUND - ▁EARLY - ▁MARK - ▁TRIED - ▁CRY - ▁CUP - ▁DEAL - ▁LEGS - ▁PARTNER - ▁NINE - ▁MONTH - ▁CRYSTAL - ▁MRS - ▁WHOM - ▁QUIET - ▁TICKET - ▁TRYING - ▁JELLY - ▁TEST - ▁OFFICE - ▁BICYCLE - ▁HOSPITAL - ▁POOL - ▁DOGS - ▁LIVES - ▁NOISY - ▁TASTE - ▁FEET - ▁PASTA - ▁HANS - AL - ▁PAST - ▁PRIZE - ▁KEY - ▁COUPON - ▁TIMMY - ▁AREN - ▁MEMO - ▁TEACHE - ▁PRACTICING - ▁ANIMAL - ▁MOUTH - ▁WORLD - ▁UNDER - ▁WATCHING - ▁FELL - ▁DRIVE - ▁BEACH - ▁CLEAR - ▁JOKES - ▁GAVIN - ▁ADD - CLOCK - ▁HELPER - ▁JULIE - ▁WEIRD - ▁SINCE - ▁MILLER - ▁TIE - ▁FRUITS - ▁HOUR - ▁ANIMALS - ▁TWICE - ▁WARM - ▁LARGE - ▁UNTI - ▁JAMES - ▁DOLLARS - ▁STORIES - ▁MEAL - ▁APPLES - ▁CRYING - ▁DIET - ▁HEADPHONES - ▁MEMORI - ▁COMPLIMENT - ▁TRIANGLE - ▁DIARY - ▁TOWER - ▁EYE - ▁SALE - ▁BUILT - ▁CARROT - ▁ORDERED - ▁ITEM - ▁SLOW - ▁NAOMI - ▁TUESDAY - ▁SENSE - ▁PARENTS - ▁GIV - ▁BUSINESS - ▁EVER - ▁TYLER - ▁FORWARD - ▁CELL - ▁SHUT - ▁COAT - ▁PRINCE - ▁HATE - ▁PUPPET - ▁FULL - ▁WOULDN - ▁TERRIBLE - ▁CARDS - ▁MAP - ▁STAMP - ▁SNACK - ▁SNOW - ▁RUBY - ▁SLOWLY - ▁EDDY - ▁EASILY - ▁LAZY - ▁BLOCKS - ▁EARS - ▁COLORS - ▁TTEOKBOKKI - ▁CAREFULLY - ▁MARRIED - ▁VILLAGE - ▁HEADACHE - ▁MOUNTAIN - ▁PETER - ▁FAT - ▁MARRY - WEEN - ▁RYAN - ▁DISHES - ▁JIM - ▁FIELD - ▁CINDY - ▁FEW - ▁STARS - ▁UMBRELLA - ▁GROW - ▁FROG - ▁RULER - ▁BASKETBALL - ▁PART - ▁ORLANDO - ▁CORRECT - ▁GRANDPA - ▁ADVICE - ▁ARMS - SE - ▁PHOTOS - ▁KICKBOARD - ▁JACOB - ▁DANGER - ▁BOOTS - ▁GIANT - ▁BATH - ▁VISITOR 
- ▁PROMISED - ▁SNAKE - ▁GLASS - ▁RAISE - ▁SPICY - ▁TURNED - ▁MEETING - ▁VIOLIN - ▁MINUTE - ▁DAISY - ▁BUTTON - ▁OTHERS - ▁DELIVERY - ▁WASN - ▁JOGGING - ▁SOFA - ▁FINGERS - ▁NICOLE - ▁TALLER - ▁RUNS - ▁BENJAMIN - ▁GOLD - ▁LUCAS - ▁SNOWMAN - ▁LOVED - ▁SANDWICH - ▁STRAIGHT - ▁AGAINST - ▁BALLOONS - ▁KEPT - ▁CLOSED - ▁PENS - ▁MAX - ▁LEG - ▁FILL - ▁QUIT - ▁ANYBODY - ▁JEFF - ▁ANN - ▁EVAN - ▁MISSED - ▁TAEKWONDO - ▁JOY - ▁PUSH - ▁WOODWARD - ▁ROSS - ▁LISA - ▁PULL - ▁NECTAR - ▁VASE - ▁RABBITS - ▁BOW - ▁BUGS - ▁SAFE - GETTING - ▁CASH - ▁LAMP - ▁DOLLS - ▁YUMMY - ▁MEDICINE - ▁SPORTS - ▁ENDS - ▁BASEBALL - ▁THROUGH - ▁CENTER - ▁FIGHTER - ERS - ▁PACKAGE - ▁WORMS - ▁SHAPE - ▁DISAPPOINTED - ▁PHILLIP - ▁DINOSAURS - ▁SALAD - ▁HAMBURGER - ▁COOKIES - ▁PASS - ▁CHEAP - ▁STAGE - ▁COLORED - ▁TYPE - ▁EVENING - ▁CRIED - ▁SHOWER - ▁WALLET - ▁FIFTEEN - ▁HERO - ▁USUALLY - ▁GATE - ▁TEAM - ▁PLANE - ▁DRESSES - ▁SOLD - ▁CRAYON - LE - ▁HIDE - ▁BODY - ▁MEN - ▁HAIRSTYLES - ▁BOAT - ▁WONDER - ▁RAIN - ▁FEELS - ▁NERVOUS - ▁CHILD - ▁MIRROR - ▁BUG - ▁LONGER - ▁LOUIS - ▁AIR - ▁STOMACHACHE - ▁ASKING - ▁OWNER - ▁KNEW - ▁BELT - I - ▁MAGAZINE - ▁HOP - ▁SUGAR - ▁END - ▁TAKING - ▁LIGHTS - ▁EMPTY - ▁PUPPY - ▁DUCK - ▁SUPERMARKET - ▁APARTMENT - ▁ADDRESS - ▁MACHINE - ▁JASON - ▁CARRY - ▁DRY - ▁EXCITING - ▁BOTTLE - ▁RIDING - ▁CHARCOAL - ▁TRAVIS - ▁UGLY - ▁CAUGHT - ▁PROBAB - ▁PROJECT - ▁LISTENING - ▁JUGGLE - ▁ROPE - ▁BILL - ▁HOURS - ▁MOLLY - ▁SOPHIE - ▁WEARS - ▁LIFE - ▁CAFE - ▁HURTS - ▁RELAX - ▁TED - ▁COPY - ▁COTTON - ▁ALONG - ▁OFFER - ▁DATE - ▁LI - ▁YOUTUBE - ▁JOKE - ▁BARREL - ▁DIED - ▁SINGS - ▁SEVERAL - ▁TALENT - ▁CARTER - ▁PASSWORD - ▁CASE - ▁SCISSORS - ▁YORK - ▁FANTASTIC - ▁CLOUDY - ▁ROUND - ▁BUILD - ▁PRINCESS - ▁RAINY - ▁GRAPES - ▁SKIRT - ▁LION - ▁FASTER - ▁FASHION - ▁AD - ▁EXPLAIN - ▁DOCK - ▁MATCH - ▁BOMB - ▁STADIUM - ▁WOODS - ▁FALL - ▁MAD - ▁TRUCK - ▁STEP - ▁ANSWERS - ▁KIDDING - ▁MOON - ▁BEAN - ▁PICKED - ▁LESSON - ▁KNOWN - ▁HAPPENING - ▁BLUEBERRIES - ▁SANDWICHES - ▁BUTTER - ▁BEDROOM - ▁ABOVE - ▁LEGO - ▁HELENA - ▁FOOTPRINT - ▁SHIP - ▁TAP - ▁HILL - ▁CHURCH - ▁GOODBYE - ▁LEMON - ▁HUNDRED - ▁COWARD - ▁ARRIVED - ▁WATERMELON - ▁BOXES - ▁FINALLY - ▁MAIN - ▁KEVIN - BINGO - ▁BONES - ▁SPOKE - ▁DONUTS - ▁HENNA - ▁LETTERS - ▁PAM - ▁LESS - ▁WEDDING - ▁POCKET - ▁SHY - ▁NOWHERE - ▁MIC - ▁NAMES - ▁SONGS - MED - ▁DECIDED - ▁KITCHEN - ▁SHINING - ▁LOVELY - ▁SEASON - ▁STEAK - ▁DRUM - ▁TEDDY - ▁SHINY - ▁GIRLS - ▁AUDITION - ▁ACTING - ▁NECK - ▁ROSA - ▁SNEAKERS - ▁SHOE - ▁QUITE - ▁HOTEL - ▁LEATHER - ▁WIND - ▁COUSIN - ▁JANET - ▁ONIONS - ▁DEAD - ▁PROUD - ▁PET - ▁HELPFUL - ▁TOILET - ▁FORTY - ▁JAKE - ▁BUTTERFLY - ▁KICK - ▁BIRDS - ▁ABROAD - ▁TEA - ▁STARTS - ▁MEALS - ▁AIRSHIPS - ▁SOFT - ▁MATT - ▁BLANKET - ▁WINDY - ▁PLAYS - ▁COVER - ▁WEIGHT - ▁PURPLE - ▁HIDING - ▁TAGS - ▁F - ▁WHATEVER - ▁AIRSHIP - ▁LIVING - ▁MAT - ▁KINDERGARTEN - ▁POND - ▁LAUNDRY - O - ▁NOTEBOOK - ▁HELEN - ▁SWEATER - ▁TEACHING - ▁FAULT - ▁SQUARE - ▁HONEST - ▁LOUDER - CAME - ▁3 - ▁DROP - ▁GUY - ▁GIRLFRIEND - ▁RAINING - ▁SPIDER - ▁FLYER - ▁WATCHED - ▁B - ▁LOW - ▁COUSINS - ▁OLDER - DY - ▁ROCK - ▁MOMENT - ▁SHEET - ▁LAUGH - ▁BLUEBERRY - ▁NEIGHBORHOOD - ▁GRADE - ▁STICKER - ▁OPENING - ▁ALRIGHT - ▁OFFICER - ▁PI - ▁WEDNESDAY - ▁BITE - ▁CONTINUE - TIME - ▁SAIN - ▁COSTUME - ▁MOVED - ▁BOOKCASE - ▁DENTIST - ▁STOPS - ▁SAM - ▁APRIL - ▁THIRSTY - ▁MOOD - ▁PEA - ▁ENTRY - ▁SERVICE - ▁ABLE - ▁FRIED - ▁W - ▁FLASH - ▁KATRINA - ▁REPAIR - ▁TI - ▁GIMBAP - NDA - ▁ANNIVERSARY - ▁NAMED - ▁WRITTEN - ▁CUSTOMERS - ▁COLLECT - ▁BONGOS - ▁EGG - ▁BAT - ▁RIBS - ▁SAT - ▁RETURN - LIGHT - BACK - CA - NESS - ▁FACES - ▁CALLING - ▁HOLIDAY - ▁HOLE 
- ▁MILLION - ▁DELIVER - ▁10 - ▁TAXI - ▁HASN - ▁MINDS - ▁DONALD - ▁MISTAKES - ▁SPRING - ▁MENTION - ▁NEITHER - ▁TOWEL - ▁BEANS - ▁WILLIAM - ▁BRIGHT - ▁STOMACH - ▁CANDIES - ▁BURGERS - ▁FEAR - ▁DECIDE - ▁FEVER - ▁FANS - ▁STUDIO - ▁LIAR - ▁BREAKING - ▁SLEPT - ▁TAIL - ▁BURGER - ▁MOVIES - ▁SMOKE - ▁DANIEL - ▁WAITER - ▁PENCILS - ▁CROSS - ▁KOREA - ▁GUARD - ▁LEARNING - ▁SUBWAY - ▁CARS - ▁SKIP - ▁MIX - ▁JEANS - ▁LIST - ▁POST - ▁TRAVEL - ▁BORROWED - ▁AWESOME - ▁RECORDER - ▁FLOUR - ▁COW - ▁CAMPING - ▁DRIVING - ▁FELT - ▁WINNER - ▁CHARACTER - ▁BALLOON - ▁RIDDLE - W - FUL - ▁NECKLACE - ▁GLOVES - ▁CHANGING - ▁CRACKED - ▁DROPPED - ▁ROBERT - ▁BAKERY - ▁GRILL - ▁INVITED - ▁LAND - ▁PORK - ▁TELEPHONE - ▁SKI - ▁GUEST - ▁AMBER - ▁SHARP - ▁KITE - ▁DELI - ▁MART - ANNA - ▁CIRCLE - ▁FLYING - ▁SHAKE - ▁DANCER - ▁POLICEMAN - ▁DESSERT - ▁SHOCK - ▁BLOOD - ▁MENU - ▁BUMP - ▁NOVEL - ▁SKIN - ▁SHOULDERS - ▁MICHELLE - ▁CROSSED - ▁TICKETS - ▁DRANK - ▁OUTFIT - ▁LAKE - ▁PAINTER - ▁ALIEN - ▁RAINBOW - ▁WORE - ▁BAR - ▁BROTHERS - ▁DISH - ▁SIMILAR - ▁DISPLAY - ▁GIRAFFE - ▁FANCY - ▁THIEF - ▁HALLWAY - ▁WAVE - ▁CARROTS - PE - ▁ELDER - ▁SOMEBODY - ▁TRAFFIC - ▁ACTOR - ▁RUMORS - ▁CHOSE - ▁CAUS - ▁DRESSED - ▁ROSE - ▁LYING - ▁PANDA - ▁PEAR - ▁SUGGEST - ▁DECISION - ▁NOISES - ▁TAKEN - ▁GARLIC - ▁CHINESE - ▁ITCHY - ▁SWORD - ▁WAITED - ▁NONE - ▁SIZE - ▁ACCEPT - ▁CAPTAIN - ▁GRAY - ▁IDOL - ▁SMALLER - ▁USUAL - ▁THOUSAND - ▁LONELY - ▁RETURNED - ▁JENNY - ▁PRACTICED - ▁NEEDED - ▁PAIN - ▁RAP - ▁THIN - ▁EVERYWHERE - ▁SUIT - ▁BUSH - ▁SON - ▁COMPLIMENTS - ▁FAILED - ▁RUG - ▁PAID - ▁MANGO - ▁BOYFRIEND - ▁SCARF - ELA - ▁CROWD - ▁ONLINE - ▁GREW - ▁SOCKS - ▁SEAGULLS - ▁USING - ▁MELTED - ▁OIL - ▁ADULTS - ▁KATE - ▁WHISTLING - ▁PRAY - ▁POOR - ▁SAUCE - ▁PACKED - ▁HATS - ▁BUYING - ▁AGO - ▁SCIENCE - ▁TUNNEL - ▁DRESSING - ▁MISSING - ▁FESTIVAL - ▁THURSDAY - ▁PAIR - ▁SITTING - ▁SUITCASE - ▁SHAPES - ▁WILLY - ▁HUGE - ▁SHOUTED - EVER - ▁FAIR - ▁TASTES - ▁CAFETERIA - ▁BINGO - ▁BEGINS - ▁DOLLAR - ▁GRILLING - ▁ALIVE - ▁DINO - ▁LIFT - ▁TOP - ION - ▁STUFF - ▁FROZEN - ▁ACROSS - ▁SEOUL - ▁FRIES - ▁TAUGHT - ▁VIDEO - ▁CREDIT - ▁HAPPENS - ▁RACE - ▁TOUR - ▁SPAGHETTI - ▁SWING - ▁INVITATION - ▁COUNTRYSIDE - ▁STAIRS - ▁HIGHER - ▁RANGER - BAG - ▁PULLED - ▁LIPSTICK - ▁VALLEY - ▁NAP - ▁FUTURE - ▁SILENT - ▁SPEAKER - ▁GIVEN - ▁JUMPING - ▁AUTUMN - ▁HOLDING - ▁BOB - ▁PLANNING - ▁SUPPOSE - ▁CLUES - ▁ANSWERED - ▁STICK - ▁WASHED - ▁CURLY - ▁RUINED - ▁SMILING - ▁UNHAPPY - ▁KIMBAP - ▁CAUSE - ▁CHUNKMONS - ▁REPEAT - STOOD - ▁8 - ▁SHEEP - ▁LOUDLY - ▁SLIDE - ▁KING - ▁LIME - ▁SKATING - ▁SERVE - ▁SAND - ▁POWER - ▁MUSICIANS - ▁RESTROOM - ▁SOMEDAY - ▁GYM - ▁GOD - ▁COOKIE - ▁NUMBERS - ▁WARNING - ▁CLASSMATE - ▁COMPLAIN - ▁LAUGHED - ▁BEES - ▁SAFELY - ▁DESIGNER - ▁ORANGES - B - ▁RETURNS - ▁SPEAKING - ▁GINA - ▁MARTI - ▁FEELINGS - MAN - ▁TULIP - ▁BAZAAR - ▁EMAIL - ▁STRAWBERRY - ▁PRESS - ▁SALT - ▁PHEW - ▁COWS - ▁ENTRANCE - ▁LEAF - ▁PAN - ▁SOUR - ▁DISEASE - ▁OPENED - ▁LUGGAGE - ▁SWIMSUIT - ▁PASSED - ▁ALISON - ▁SHOVELS - ▁SENTENCES - ▁GROUND - ▁STAYING - ▁SALES - ▁JAM - ▁WRAP - ▁LATELY - ▁SHRIMP - ▁TWELVE - ▁CHEAPER - ▁CHECKING - ▁SEAWEED - ▁LO - ▁TURTLES - ▁DNN - ▁WHE - ▁ACT - ▁LIZARD - ▁SUCCEED - ▁STRING - ▁BASKET - ▁HINT - ▁VEGETABLES - ▁FOOL - ▁SHOT - ▁ADULT - ▁GREG - ▁TASTY - ▁FARM - ▁LIPS - ▁STARFISH - ▁NAILS - C - ▁FR - ▁TEARS - ▁SUPERSTAR - ▁CLEANS - ▁HEAT - ▁SILLY - ▁WIG - ▁BELLA - WOKE - ▁5 - ▁BOYS - IVA - ▁IMAGINE - ▁LAUGHING - ▁WASHING - ▁FLAT - ▁STICKERS - ▁PRETTIER - ▁KILL - ▁FLIGHT - ▁WOMEN - ▁MOMMY - ▁CAMP - ▁MEMBERS - ▁CUSTOMER - ▁E - ▁SINGERS - 'ON' - ▁CONTROL - ▁TIGER - ▁ZEBRA - ▁IMPOSSIBLE - 
▁CONSOLE - ▁CLUE - ▁FOLD - ▁BEE - ▁ANDY - ▁SEATS - ▁POUND - ▁SANG - ▁DIAMOND - ▁BATS - ▁ARTIST - ▁BABIES - ▁GARAGE - ▁INSTEAD - ▁OLDFASHION - ▁GIFTS - ▁RODE - BIG - ▁MOUNTAINS - ▁THUNDER - ▁DONKEY - ▁PIGEON - ROOM - ▁WORSE - ▁HAMBURGERS - ▁ERASER - ▁TAMBOURINE - ▁BREATH - ▁ANNOYED - ▁HALLOWEEN - ▁KNOCK - ▁STUPID - ▁BANDAGE - ▁PINEAPPLE - OUT - ▁SALTY - ▁POTATO - ▁MILES - ▁COMMENT - ▁TREATED - ▁EAR - ▁SLEDDING - ▁VIOLET - ▁BOTTLES - ▁BRILLIANT - ▁AUNTIE - ▁SPEND - ▁REACH - ▁PAYING - ▁APOLOGIZE - ▁CORNER - ▁FORGIVE - ▁RELIEF - ▁BEHAVE - ▁DIE - ▁PRETTIEST - ▁H - ▁HEN - ▁POUR - ▁NEEDLE - ▁WORRIES - ▁LARGER - ▁CRAZY - TYFIVE - ▁DISCOUNT - ▁HEADED - ▁TWENTYFIVE - ▁SOMETIME - ▁REPORTER - ▁FEED - ▁KIMCHI - ▁TENNIS - ▁DOLPHIN - ▁SUNGLASSES - ▁THREW - ▁COUNTRY - ▁HUSBAND - ▁JAPAN - ▁TOMATOES - ▁OK - ▁POET - ▁LUKE - ▁LEND - ▁LOWER - ▁SHOVEL - ▁AMERICA - ▁BLOSSOMS - OH - K - ▁SAFETY - TALK - ▁ASLEEP - ▁MINER - ▁PERIOD - ▁STORYBOOK - ▁BOWLS - ▁DOUBT - ▁MEMORY - ▁SKINNY - ▁EARTHQUAKE - ▁2 - ▁BALLS - ▁POTATOES - ▁TROUSERS - ▁WAR - ▁FUR - ▁RUMOR - ▁CONGRATULATIONS - ▁EASYGOING - ▁NURSE - ▁FLIES - ▁GROWING - ▁SMILES - ▁CHOICE - ▁ERASE - ▁COMFORTABLE - ▁GUIDE - ▁PE - ▁CLEVER - ▁PEACE - ▁AFTERSCHOOL - ▁SOAP - ▁POPCORN - ▁SUNBLOCK - ▁INVITE - ▁AWAKE - ▁FEMALE - ▁HIKING - ▁FOLLOWED - ▁BUMPER - ▁FILLED - ▁HIPPO - ▁COMEDIAN - ▁SILK - ▁COST - IES - ▁AWFUL - ▁SIBLING - ▁PIES - ▁BURNING - ▁CRASH - ZIPPED - ▁SPACE - ▁LYRICS - ▁HANDMADE - ▁PER - ▁ROUGH - ▁THROWING - ▁STATIONERY - ▁WORM - ▁PAGE - ▁CLASSMATES - ▁EXAM - ▁FINAL - ▁BLOW - ▁CHINA - U - TH - ▁BATTER - ▁HONEY - ▁MISTAKEN - ▁DEPARTMENT - GREAT - ▁SHIRTS - ▁COMPETITION - ▁YOGURT - MBER - ▁DRINKS - ▁WOLF - ▁ISLAND - ▁GROCER - ▁SHARON - ▁BREATHE - ▁ANNOYING - ▁LIED - ▁SPA - ▁KANGAROOS - ▁ALIKE - ▁PENGUIN - ▁BRIGHTCOLORED - ▁4 - ▁MESSAGES - ▁INVENTION - ▁WIPE - BIRD - ▁PRECIOUS - ▁FLEW - ▁CH - ▁APART - ▁MIDNIGHT - ▁SPEN - ▁SHELLS - ▁GIN - ▁NATURAL - ▁THIRD - ▁BADLY - ▁PLATES - ▁JOSHUA - ▁MIDDLE - ▁SWEAT - ▁TOES - ▁TIP - ▁TEASE - ▁BOOKSHOP - ▁COUGHING - ▁GUN - ▁WASTE - UMOR - AR - ▁SPREAD - ▁GOAT - ▁SPROUTS - ▁BALLET - ▁SNAKES - ▁SCRATCHED - ▁AMONG - DANGER - KGO - NISH - ▁FEE - ▁JANE - ▁TEMPER - ▁CROWDED - ▁BONO - ▁CHEF - ▁SAMPLE - ▁LIONS - ▁RULES - ▁DREW - ▁WORTH - ▁MAGICIAN - ▁GLUE - ▁TOUGH - ▁TOUCHE - ▁TUNA - ▁BAKE - ▁LAUGHTER - ▁HALF - ▁HELMET - ▁UH - ▁COPIES - ▁DIFFERENCE - ▁FORK - ▁STARTING - ▁CRIES - ▁SPROUT - SNOW - ▁SCARE - ▁DRUMS - ▁PHANTOPIA - ▁VOUCHER - ▁FARMER - ▁CHANGES - ▁SPILL - AN - ▁COMPLETELY - ▁PRACTICES - CHAIR - ▁MISSE - ▁RACHEL - ▁SEEK - EST - ▁SISTERS - ▁BLAME - ▁PACK - ▁BOIL - ▁REQUEST - ▁SH - ▁WIRE - ▁POT - ▁ONION - ▁CLOSER - ▁MICE - ▁SCRATCH - ▁DUCKS - THANK - ▁RECEIVE - ▁CABBAGE - ▁SEEDS - ▁JEJU - ▁SUDDENLY - RAY - ▁KIWI - ▁POWDER - ERRY - ▁MESSY - ▁RID - ▁CHAMPION - ▁ARGUE - ▁RECIPE - ▁MICROPHONE - ▁SCOLDED - TRY - ▁STRONGER - ▁EXPECT - ▁WEEKS - AKER - ▁JUMPED - ▁RAINS - ▁OREPHIA - ▁PIGS - LOSING - ▁PRAYING - ▁DUE - ▁SOUTH - ▁PUNCH - ▁CREATIVE - ▁FINISHING - ▁HARMONI - ▁CLOWN - ▁SALON - ▁SINK - H - ▁TOOL - ▁ALARM - VISION - GY - ▁FAIL - ▁DRAWER - ▁HAIRBAND - ▁X - ▁ARTICLES - ▁DEEP - ▁EARLIER - ▁EXTRA - ▁DOWNTOWN - ▁LEFTHAND - PTER - ▁NOODLES - ▁CONSIDER - ▁ACCOUNT - ▁DEER - ▁SEAN - RABBITS - TY - ▁CREAMS - ▁LUCY - ▁BOUN - ▁HORNS - EMENT - ▁NOON - ▁SMILED - ▁NINETEEN - ▁TURNS - ▁MUFFLER - ▁ROAR - ▁HARDLY - ▁SPELLED - ▁SPOTS - ▁SHORTS - ▁JUMPS - ▁RECENTLY - ▁STOLEN - ▁WITHIN - ▁ENGLAND - ▁PENDANT - ▁MARY - ▁AMUS - ▁SERIOUSLY - ▁FALLS - ▁SPOONS - ▁SAVED - ▁STOLE - ▁STUCK - ▁G - ▁DUMPLINGS - ▁GERMAN - ▁PLACES - ▁OCARINA - ▁QUEENSTEIN - 
▁BRANDON - ▁DWARFS - ▁TOFU - ▁SPRAY - PARD - ▁CROSSING - ▁PIGEONS - ▁NOTICE - CE - LTY - ▁BASEMENT - ▁TABLET - ▁COUPONS - ▁PROGRAM - ▁SOCK - ▁GUI - ▁NUT - ▁OLIVE - ▁PREFER - ▁MUSHROOM - ▁FIGHTING - ▁DENERGY - ▁STORAGE - ▁POLITE - IST - ▁KICKBOARDS - GAGE - ▁DROWN - ▁MANAGE - ▁DRIVER - P - ▁WEEKENDS - ▁SHOULDER - ▁MUD - ▁SEVENTY - ALLY - ▁POSTCARD - ▁PIECES - ▁HICCUPS - ▁CHARACTERS - ▁CLEANING - ▁DIS - ▁JG - ▁JOSEPH - ▁TITLE - ▁CDS - ▁BOSTON - ▁BRACELET - ▁PERMISSION - ▁STEW - ▁RAT - ▁SKATE - ▁CHEST - ▁FOOT - ▁CLIMB - ▁AUDIENCE - ▁DUFAR - ▁GRANDPARENTS - ▁FIT - ▁TOUCHING - ▁ELEPHANTS - ▁TSHIRTS - ▁APPOINTMENT - ▁FOREVER - ▁STARVING - ▁LESSONS - ▁COUPLE - ▁TOTO - ▁DRINKING - ▁ARRIVE - ▁GREE - ▁SPOT - ▁HELD - ▁EARTH - ▁DAUGHTER - ▁SLICE - ▁CASTLE - ▁FEEDING - ▁COVERED - ▁FAM - ▁AGE - ▁AUSTIN - ▁DEAR - ▁NATI - ▁CELEBRATE - ▁MEATBALLS - ▁STRETCH - ▁SOLVE - ▁USEFUL - ▁SCAR - DDING - ▁ALLERG - ▁RINGING - ▁SAILING - ▁SNOWING - ▁LATEST - ▁LIES - ▁ACADEMIES - ▁MUSICIAN - ▁STA - ▁FROGS - ▁STOMP - ▁KEYBOARD - ▁FAIRY - ▁CLAP - ▁HAM - ▁TOWARDS - ▁RESERVATIONS - ▁SHOUT - SORRY - ▁PUPPIES - ▁WEAK - ▁ORIGINAL - ▁RESPECT - ▁TABLES - ▁COMPUTERS - ▁TOWELS - ▁CRAFTSMEN - ▁ELE - ▁REPAIRED - ▁PRINT - ▁BLOOM - ▁WISELY - ▁SCOLD - ▁TWINKL - ▁CANCEL - ▁KIM - ▁STAINED - ▁LAP - ▁DRI - ▁SHARK - ▁KANGAROO - MENTARY - THEY - ▁DALLAS - ▁SEESAW - ▁WHISPER - CAL - ▁DWARF - ▁SUNDAYS - ALK - ▁DOUBLE - ▁SHAKING - ▁PREPAR - ▁YOYO - ▁SKILLS - ▁OCTOPUS - ▁INSTRUMENTS - ▁MAIL - ▁ALIENS - ▁JESSI - ▁CHERRY - ▁INCONVENIENCE - ▁CERTAIN - ▁BEEF - CON - 'OFF' - ▁GATHERED - ▁PRODUCTS - CONVENIENCE - ▁RESTAURANTS - ▁MONKEYS - ▁FIGURE - ▁QUICK - ▁GAIN - ▁PENALTY - ▁INLINE - ▁INTRODUCE - ▁OVERSLEPT - ▁POL - ▁HOWEVER - ▁GORILLA - ▁MEMBER - ▁PLU - ▁ANGER - ▁AQUARIUM - ▁GAS - ELY - ▁TIES - ▁PUNISHED - ▁CUCUMBERS - ▁TINY - ▁RISE - ▁GHOSTS - ▁WIFE - MOND - ▁RARE - ▁BARN - ▁SMELLY - GAN - ▁REASONS - ▁BURNED - ▁ANNOUNCE - ▁CAPSULES - ▁PICNIC - ▁GLOVE - FF - RANCE - ▁TREAT - ▁JOG - ▁BULLS - ▁JJAKGUNG - ▁PROVE - ▁BAGS - ▁RUDOLPH - ▁MC - ▁TRICKS - RIOR - ” - ▁HAPPILY - ▁REMIND - ▁DIVER - BE - ▁HATES - ▁SPOON - ▁SIZES - ▁THROAT - ▁UN - CRAFTS - ▁BRIDGE - ▁CONFUSED - DONALD - KEEPER - ▁SIBLINGS - ▁DENNIS - ▁EMBARRASSED - ▁PATRICK - DWARFS - ▁PREGNANT - ▁VOTE - ▁WHIPPED - ▁10000 - ▁SUPPORT - ▁TOOTH - ▁STANDING - ▁CLOSET - ▁NEEDLES - ▁SWEEP - ▁RAISED - ▁PEE - ▁CONTACT - ▁JEALOUS - ▁SURVEY - BOX - ▁CROSSWALK - ▁WALKING - ▁SOP - ▁SITE - ▁OWE - ▁FOURTEEN - ▁PLANTING - ▁CHANNELS - ▁WIGGL - ▁OURSELVES - ▁SCENE - ▁BAS - ▁LETTUCE - ▁NICKNAME - ▁GRABB - ▁ELEVATOR - ▁COP - ▁FALLING - ▁DESERVE - ▁FILM - ▁SOPHOMORE - ▁WOUND - ▁PROTEST - ▁PEACHES - ▁CHILL - ▁COURT - ▁ROOF - ▁CHARGE - ▁FINGER - ▁HANBOK - ▁TAPDANCE - ▁JAPANESE - ▁MELON - ▁BATTLE - ▁LEAS - ▁PARTS - BATHING - ▁CRUNCHY - ▁PAUL - ▁WHISTLE - ▁CAKES - ▁HEAL - ▁SHELL - ▁GUM - ▁CARPENTER - ▁HEAVILY - ▁N - ▁LEMONS - ▁HARDER - ▁ROW - ▁STEAM - ▁STUDIES - ▁LOTTERY - ▁BITTER - ▁MOW - ▁EATEN - ▁SPORT - ▁SHORTER - ▁STEAL - ▁GRADUATE - ▁PUZZLE - ▁CEREMONY - ▁RAINCOAT - ▁KISS - HAP - WAY - ▁DEPART - ▁LANGUAGE - ▁BITTEN - ▁BUSAN - ▁L - ▁TIGHT - ▁BELOW - ▁PERFECTLY - KE - ▁NATURE - ▁MISUNDERST - ▁CLOUD - ▁DRAG - ▁CARTOON - ▁COCONUT - ▁GOLF - ▁THIRTEEN - ▁DYING - ▁PETE - ▁MALL - ▁BIN - ICAL - ▁ALIB - ▁BREEZE - ▁FRENCH - ▁DATING - ROW - ▁WATERING - ARD - ▁DESERT - ▁PRAISE - ▁INTERNET - ▁STRICT - ▁MOSQUITOES - TLE - ▁SKILL - ▁BEHAV - ▁KTX - ▁LONDON - ▁TASTING - ▁VAN - ▁COUGHED - ▁NICELY - ▁HARM - ▁BOOKSHELF - ▁CRICKET - ▁EDGE - ▁PILLOW - ▁RECTANGLE - ▁STRESS - ▁FOOTBALL - ▁LAW - ▁CHOPSTICKS - WHAT - ▁TWINS - ▁AUSTRALIA - 
▁LAMB - ▁MAYO - ▁DESIGN - ▁BLEW - ▁GLORY - ▁ROCKCLIMBING - ▁DUTY - ▁ENTERTAINMENT - ▁THEMSELVES - ▁YOG - ▁BUCKET - ▁BIRTH - ▁FALSE - ▁PATTERN - ▁THREAD - ▁SOLDIER - ▁BATTERY - ▁KNEES - ▁HEADS - ▁DELIVERED - ROUTE - ▁SIMPLE - ▁WATERFALL - ▁SWITCH - ▁EFFORT - ▁UNUSUAL - ▁SLIPPED - ▁REG - ▁SUITS - ▁CHANNEL - ▁MINI - ▁PLASTIC - ▁RECOMMEND - ▁RUBBER - ▁THANKFUL - ▁ROLL - ▁SOLV - ▁CLAPS - ▁BUD - ▁CINEMA - ▁SHELF - ▁LOSS - ▁WOMANS - ▁CANADA - ▁EXPRESS - ▁SHARING - ▁LOOSEN - ▁CHOCO - ▁RUNNY - ▁REPL - ▁BOWL - ▁FULLY - ▁SOMEHOW - ▁UNIQUE - ▁CARES - ▁NOODLE - ▁JETLAG - ▁LAPTOP - ▁TOOTHPASTE - ▁JON - ▁AIRPORT - ▁JOO - YER - ▁CAP - ▁HOLLY - ▁JOHNSON - ▁ZERO - ▁LEADER - ▁OX - ▁SQUEEZE - PY - GET - ▁FIN - ▁ZIP - ▁SEPTEMBER - ▁TEMPERATURE - THIRTY - ▁GOODLOOKING - ▁GUAR - ANTEE - ▁LOG - ▁WILD - ▁BOOTH - ▁PEPPERS - ▁FORGOTTEN - BALL - ▁AB - CALORIE - ▁POLICY - ICO - ▁INCLUDED - ▁LIGHTEN - ▁BLAMED - ▁LONGTIME - OOD - ▁JEAN - ▁DECK - ▁MANNER - ALTH - ▁PERSONALLY - TRUCK - PT - ▁GUT - ▁CRASHED - ▁FLO - ▁REACT - ▁ABSENT - KYO - ▁BLUSH - ▁DONATE - DOCK - ▁COMPLAINING - ▁DESCRI - ▁GEORG - ▁RECOVER - ▁WALNUT - ▁LUNG - ▁BUDDY - ENSE - ▁PASSES - ▁PLUM - HALF - ▁SE - ▁TURTLE - ▁FRANC - ▁KOALA - ▁TURKEY - ▁CARPET - ▁ANYWHERE - ▁R - ▁SKIING - ▁FOCUS - ▁HARV - ▁JANUARY - ▁PRESIDENT - ▁TWENTYONE - ▁WRESTLE - ▁CANCER - ▁CHEATING - ▁HOMEMADE - ▁WEEKDAY - ▁K - THER - ▁DREAMS - ▁APPRECIATE - ▁BRAIN - ▁SAUSAGES - SOMETHING - GAR - ▁SMOOTH - ▁SLIM - ▁FENCE - JURY - LIES - ▁SPIDERS - EADLINE - EVEREST - ▁SCORES - ▁JOKING - ▁REJECT - ▁STEPMOTHER - ▁CRIM - ▁DIGGING - ▁QUEEN - ▁MALE - ▁SNORES - ▁EXPLAINED - ▁HOUSEWORK - ▁BEDTIME - BEAT - WORKING - ▁SMELLING - ▁GRAPE - ▁INSTRUCTIONS - ▁SUNSCREEN - ▁WORKDAY - ▁HOLES - ATER - UP - RIDA - ▁VINE - ▁HERSELF - ▁NIGHTMARE - ▁SNAP - ▁INSU - ▁BURNS - GIV - ▁MOUNT - ▁NEGATIVE - ▁ADVANTAGE - ▁DIFFICULTIES - ▁7 - ▁REMAINS - CHECK - ▁TRAVELING - ▁IMAGIN - G - ▁BENNY - ▁JOHN - ▁ATHLET - ▁COOPE - ▁DICTIONARY - ▁HAPPINESS - ▁RAPPER - ▁SLIPPERY - ▁SUNRISE - ▁TAPDANCING - ORABLE - ▁NOTICING - ▁WAITLIST - ▁CUCUMBER - FTH - ▁GUESTS - ▁COLLEGE - ▁STOCK - HH - ▁TALE - POP - ▁MEXIC - ▁FREEZER - ▁REFUSE - ▁SWIMMER - ▁THOUGHTFUL - DIVING - WORKED - ▁COURAGE - ▁ERRANDS - ▁LISTENED - ▁GRUM - ▁WEB - ▁TWEL - GED - ▁CABIN - ▁REHEARSAL - ▁SKETCHBOOK - ▁DAYCARE - ▁PARTIES - OBBY - ▁SEAL - WHERE - ▁ROSES - INE - ▁ACCIDENT - ▁PERSONALITY - ▁SPECIFIC - ▁RINGS - ▁BLOOMED - ▁AW - YARD - ▁ENTERED - ▁BELLY - ▁FUNNIER - ▁NARROWMINDED - USY - ▁JOURNAL - ▁JER - ▁PRICES - BREAK - ▁BILLS - SOLUT - ▁11 - ▁REFILL - ▁BAKED - ▁ALPHABET - CONNECTED - ▁GOATS - ▁WASHE - ▁CHOP - PHLE - ▁NONSENSE - ▁WADDL - ▁PETS - ▁DECORATE - LUSH - ▁FORGETTING - ▁EMILY - ▁BICYCLES - ▁SHOWN - ▁BUCK - ▁BAIT - ▁100 - ▁MOVER - ▁HEL - ▁WINNING - ▁ROCKET - ▁FANG - ▁CA - ▁DEPRESS - ▁BEAUTY - ▁DAILY - ▁ENGINEER - ▁MUFFIN - ▁WRITER - ▁OPINIONS - ▁TRACKS - ▁PAUSE - ▁PUZZLED - URE - SEY - ▁WRAPS - ▁SOCIAL - ▁GRADES - ▁WARMLY - ▁YOYOS - ▁CHEW - ▁BULGOGI - ▁BARKING - ▁SENTENCE - ▁THOUGH - ▁POO - ALIAN - ▁EVE - ICED - ▁RAIS - ▁DISTURB - ▁ITSELF - ▁ORIGAMI - ▁TISSUE - ▁JOHNNY - ▁BURN - ▁COOKS - ▁CANDLE - ▁OBVIOUS - ▁SANDPAPER - ▁SUPPLIES - ▁CHEWY - ATIONS - ▁FLAVOR - ▁KIWIS - ▁MASTER - ▁YELLING - ▁CUPS - ▁BL - LAINE - ▁STIMULAT - ▁TIRES - ▁PRETEND - ▁CLEANED - ▁RUSSIA - ▁FRECKLES - ▁FART - ▁CHEETAH - ▁RUDE - ▁TRAINS - ▁LOTTE - ▁PAGES - ▁POSTCARDS - ▁KEYS - ME - ▁BOOKSTORE - ▁HOST - ▁SHORTCUT - ▁SHOOTS - ▁OPINION - ▁APRON - ▁COPIED - LLOWED - ▁STICKY - ▁PREPARE - ▁HEADQUARTERS - ▁REPAIRS - ▁WHALE - ▁POOP - ▁RESEMBLE - ▁SHARE - ▁LOLL - ▁EXERCISES - ▁PROGRAMS - 
▁BLINK - ▁FLAG - ▁LAY - ▁FASTEST - ▁SNEEZE - ▁ENDED - J - ▁MARKER - HER - ▁ASSISTANT - ▁CURRY - ▁PURSE - ▁SLIPPERS - ▁UNDERSTANDING - ▁PIT - ▁INDOOR - ▁CROWN - ▁CURIOUS - ▁SYSTEM - ▁CABLE - ▁MOSQUITO - ▁PHARMACY - ▁EVERLAND - ▁WINDOWS - ▁BOOGER - ▁TIRING - ▁PAPERS - ▁PEANUT - ▁PARDON - ▁AH - ▁FOX - ▁RESELL - ▁RESULT - ▁TWIST - ▁SLED - ▁TALLEST - ▁RIBBONS - ▁RECEI - ▁SQUIRREL - ▁CUTLET - ▁HEIGHT - ▁HURTING - ▁TRAP - ▁WRAPPER - ITED - ▁FRIGHTENED - ▁PATIENT - ▁CANCELED - ▁SHELVE - ▁NET - OOPS - ▁MESS - ▁MERRY - ▁PLATE - ▁COMPLAINT - ▁SITUATION - ▁PARIS - ▁STRAW - ▁DIVIDE - ▁GOAL - ▁SHRIMPS - X - SPECIAL - GOTTEN - F - ▁COLLECTED - ▁AFFORD - ▁HUNG - ▁CHAMBER - ▁AIRPLANE - ▁CHA - ▁WALLS - ▁REGULAR - ▁EXPERIENCE - ▁PILOT - ▁250 - ▁LEMONADE - ▁FURTHER - ▁RAC - IN - ▁SWALLOW - ▁CLOSING - ▁CLASSROOMS - ACK - ▁RENT - ▁ADS - ▁TENTH - ▁FRY - ▁HOTDOG - ▁ANGEL - ▁PEACH - ▁HIDDEN - ▁GOOSE - ▁SMALLEST - ▁ROCKS - ▁COOKED - ▁CORN - ▁SIGNS - ▁ANXIOUS - ▁LIGHTNING - ▁SNOWBALL - ▁BESIDE - ▁ANTS - ▁ALLOWANCE - ▁COUNTRIES - ▁POUCH - ▁SLIP - ▁POEM - ▁RAMEN - ▁ROLLING - ▁PATIENTS - ▁SCREEN - ▁PRESENTATION - ▁CAST - ▁FLUTE - ▁HU - ▁ZEBRAS - ▁COMPARE - ▁WIDE - ▁FORSYTHIA - ▁SENIOR - ▁DONATED - ▁FACTS - RD - ▁FOG - ▁ROLE - ▁PEARS - ▁BUTTONS - COME - ▁HAIRCUT - ONDE - ▁ENV - ▁CHASED - THE - '4' - ▁TRACK - ▁STRANGER - ASOL - ▁CHIN - ▁PUBLI - ▁DUN - ▁JUNE - ▁20 - ▁DOUGHNUT - ▁DADDY - PORT - ▁EMBARRASSING - ▁UNCOMFORTABLE - ▁FOREHEAD - ▁RELATIVES - ▁DOODLE - ▁GENTLEMAN - ▁TAPE - ▁BANKER - ▁ACTRESS - ▁SORT - ▁REDESIGN - ▁GRADERS - ▁KICKING - ▁LA - UK - ▁BARBECUING - ▁BULLY - RATE - ▁JUN - ▁KOREANS - ▁CORPORATION - ▁HEAVIE - ▁IMPROVE - ▁OCEAN - ▁LG - ▁LAYER - ▁BRIGHTLY - ▁CRABS - ▁PAR - ▁BLANK - ▁CALENDAR - ▁CROCODILE - ▁SALARY - ▁CHUSEOK - ▁CUTEST - ▁NOR - ▁MYSTER - ▁BEND - ▁INCLUDE - ▁EXCELLENT - ▁PAINFUL - ▁SKEWERS - ▁CHEERING - SIZE - BELT - RCH - ▁PLEASANT - ▁PATH - ▁QUALITY - ▁STINGS - ▁REPAIRING - ▁DELAY - ▁RIDES - ▁ELSA - ▁SECURITY - ▁TWENTIETH - ▁PC - AH - ▁NOTES - RAL - ▁NORMAL - ▁DIRECT - ▁CENT - ▁APOLOGY - ▁GARBAGE - ▁GEE - ▁WATCHES - ▁SCISSOR - ▁CULT - ▁ECONOMY - ▁SEASHELL - ▁HA - ▁HORSES - ▁WHEELS - BYE - ▁HABIT - ▁VI - OOKIE - ▁BAKING - ▁CHERISH - ▁JESUS - ▁KLEA - ▁PARTICIPATE - ▁NICER - ▁LISTING - ▁SUPP - IELD - ▁CRISPY - ▁EYESIGHT - ▁TWITCH - ▁WORST - ▁GREETING - ▁DRYER - ▁LINES - ▁DEPRESSED - RENT - ▁ROLLS - LAND - ▁DOCUMENT - ▁COCKROACH - ▁TAX - ▁LIBER - ▁FRIGHT - ▁GARDENVIEW - ▁JAR - ▁ONESELF - ▁PELICAN - ▁RUSH - ▁BAKER - ▁EXPLODED - ▁CARNATIONS - ▁BUBBLES - ▁BREAKS - ▁EUROPE - ▁EXCHANGE - ▁SMASH - ▁TORONTO - ▁CEO - ▁BLEEDING - ▁IMAGINED - ▁KIL - ▁POU - ▁TAB - ▁CRUS - OGRAMS - ▁ALASKA - ▁FROWNED - MAIL - TWINKL - ▁SINGLE - ▁INVENT - ▁ROD - ▁EMERGENCY - PORTER - ▁COMB - ▁HUG - TI - '...' 
- SMITH - ▁AVOID - ▁JJAKKUNG - ▁MATERIALS - ▁LOSES - ▁LU - INA - FREE - ▁SERV - ▁FLU - ▁REEL - ▁BACKPACK - ▁REPRINT - ▁SIXTEEN - ▁ZENA - ROL - ▁AWARD - ▁TENK - ▁NETWORK - ▁WORKER - ▁REDUCE - GUE - ▁PROTECT - ▁CONCERN - ▁CRIMINAL - ▁FIREFIGHTER - ▁INCHEON - ▁SUWON - ▁VIEWER - OVER - ▁ELEVATORS - OR - ▁IMPRESSED - ▁SHAME - ▁STRAP - ▁YIELD - ▁WARNED - ▁HANDOUT - ▁LUNCHTIME - URY - IED - AY - WIFE - GUN - ▁ISSUE - RRIE - ▁SANDCASTLE - ▁FIGURES - ▁LOV - ▁POKE - ▁FREESTYLE - ▁CHAIN - ▁EVERYDAY - OK - ALY - ▁RATING - ▁SPIT - ▁SAIL - ▁AMBULANCE - ▁ENORMOUS - ▁SELFCONT - ▁MEMORIZED - ▁GIRAFFES - ▁SNOWS - ▁PLANTS - ▁LEAD - ▁EXHIBITION - ▁FOUGHT - ▁MARBLE - 'YES' - ▁PICKE - ▁WRONGLY - ▁HURR - ▁CONVERSATION - ▁DETAIL - ▁WORRYING - ▁SAVING - ▁TU - ▁SECRETLY - AWAY - ▁GROWS - ▁CONTRA - ▁SCRAMBLE - BES - ▁PROMISES - ▁CHAIRS - ▁GOGGLES - ▁OTHERWISE - ▁VICTOR - ▁THORNS - ▁WORTHWHILE - ▁HIPPOS - ▁TRICK - ▁OBSERVATORY - ▁SHAMPOO - ▁COKE - ▁DRAMA - ▁DELAYED - ▁GUTS - ▁AZALEA - ▁WRAPP - TIE - HEAD - ▁BIGGEST - ▁ENEMIES - ▁PUMPKIN - ▁DOCUMENTARY - ▁ATOPY - ▁COUGH - ▁TOUCHED - ▁AWARDS - EWER - VER - ▁BEARS - ▁CACTUS - ▁LOCK - ▁LIT - ▁SKETCH - ZEN - ▁DRAGG - ▁SQUEEZED - ▁SCOT - SHY - ▁CALCULAT - ▁APPEARED - ▁RAINED - ▁WINGS - ▁CLOTH - ▁DIG - ▁DONGSENG - ▁SPONGE - ▁STUBBORN - ▁WAIST - ▁FLE - ▁TAG - CH - ▁CR - ▁UMBRELLAS - ▁TOOTHBRUSH - ▁POCKETS - ▁PAJAMA - ▁HALLA - ▁GATHER - ▁BOSS - ▁DETERGENT - ▁DOCUMENTS - ▁GENEROUS - ▁TOTAL - ▁CURTAIN - ▁PUDD - ▁THICK - NSIBLE - ▁HOLIDAYS - ▁TICKLES - FLAVORED - ▁COVID - ▁GIFTWRAP - ▁BLINKING - ▁JUNG - HOK - LEANING - ▁IDOLS - ▁DRO - ▁FOUNTAIN - ▁PHYSIC - ▁PRESCRIPTION - ▁LATTE - ▁TONGUE - ▁NA - WORLD - ▁SURGERY - ADLINE - ▁STUFFY - ▁WAFFLES - ▁15 - ▁LOGO - ▁SHORTCUTS - ▁RESPECTED - ▁INVENTIONS - ▁ARTISTS - RAFFI - ▁FOSSIL - ▁GOLDCREST - ▁MALTESE - UGGING - ▁BUCKWHEAT - ▁PROFESS - ▁SQUID - ▁CORRECTION - IT - LOOKING - ▁GENIUS - ▁WHALES - ▁OPPA - ▁DONKEYS - ▁ELECTRIC - ▁FAKE - ▁JUNIOR - ▁MEDAL - ▁SONGPYEON - ▁MO - ▁LOCKED - ▁MEMORIZE - ▁DIZZY - ▁CAMELS - ▁Y - ▁CARING - ▁PERFORMANCE - ▁ERRAND - ▁STRIPE - ▁SIL - ▁REDESIGNED - ▁TIPS - SCRIPT - ▁BISCUIT - ▁TORN - ▁BRUSHE - ▁STREETS - ▁RELIEVED - ▁HOPS - ESSER - ▁INSTRUMENT - ▁ADVANCE - ▁GESTURE - ▁MUGWORT - ▁PROMOT - ▁PIN - ▁SHAD - IONAL - '72' - ▁HEAVEN - ▁SLOPE - ▁HAIRDR - YOU - ▁OWNERS - ▁PLANS - ▁SUNFLOWERS - ▁CHIMNEY - ▁HIPHOP - ▁FOURTH - ▁C - ▁COUNTS - ▁BARK - SCOPE - ▁ATOPIC - ▁DEATH - ▁FORMALLY - ▁TWIN - ▁QUIETLY - ▁TEAS - ▁MIN - ▁CE - ▁DEPENDS - ▁TRANSFERRED - ▁HANDY - ▁CLEARLY - CHOCO - ▁HOTDOGS - ▁FROWN - ▁RUB - ▁PERFORM - ▁ATTRACT - ▁DUST - ▁REVIEW - ▁SIGNBOARD - ▁ENDURE - ▁RIDD - CKED - ▁CIRCLES - ▁AIRPLANES - ▁MI - GING - Q - ▁YURI - ▁30 - ▁OFFICERS - ▁ALMONDS - ▁SOLVED - ▁WEREN - ▁ALBUM - ▁UNDERGROUND - ▁WRINKLES - IL - ▁TALES - SOKCHO - ▁GROCERIES - ▁RECEIV - ▁BARE - ▁PEEL - ▁COCKROACHES - ▁DEEPLY - ▁STATIONS - ▁DANCED - ▁CHUBBY - ▁SATURDAYS - ▁WING - ▁CRAFTSMAN - ▁OCCASION - ▁WINE - ▁TELE - ▁BLUETOOTH - ▁DISAPPEARED - ▁SUBM - ▁FARTED - ▁PREPARED - LIST - ▁CONDITION - ▁PORTRAIT - '23' - ▁POINTS - ▁TAMBOURINES - ▁TEND - ▁SELFISH - ▁SUBJECT - RUPTE - ▁LICKING - ▁WATERMELONS - ▁DIES - ▁BLOWING - ▁SOIL - NIFE - ▁BLAND - ▁RECYCLED - ▁SIXTY - ▁LENGTH - ILING - ▁SURVIVED - ▁HABITS - WANT - ▁GRAND - ▁SAVORY - ▁APPLAUSE - ▁APPLY - ▁MEANER - ▁DISEASES - ▁FRUSTRATING - ▁NOTIFICATION - ▁CHEOMSEONGDAE - ▁BADGE - ▁ABOARD - ▁DISNEYLAND - ▁LEE - ▁SHARPEN - ▁KETTLES - ▁HERESY - ▁CRAM - ▁BRONZE - ▁HARSH - ▁EBS - ▁GREY - ▁POSE - ▁PICKLES - ▁LEN - ▁TIGERS - ARY - ▁CLAR - ▁EDUCATION - ▁NEIGH - ▁ADDITION - ▁REASONABLE - ▁DUMPING - ▁SPACES 
- ▁LIGHTER - ▁SPELLING - Z - ▁CATCHING - ▁LEVEL - ▁UPSTAIRS - ▁RINK - ▁HANDLE - AVING - ▁BOWED - ▁BEAUTIFULLY - ▁FARTS - ▁BOLT - ▁FAMILIAR - BBLE - DO - ▁FILE - ▁TREATMENT - ▁PASTOR - ▁EEK - ▁BLOOMING - CIAL - TRAINED - ▁APPEAR - ▁KNEE - ▁WHEEL - RIAN - ▁ATTEND - ▁CONFESS - ▁DVD - ▁WITNESS - ▁BATMAN - ID - ▁BANGS - ▁YARD - ▁LOTION - ▁RECYCLE - ▁PRI - ▁BURDEN - ▁SCRA - ▁VEGETA - ▁TOENAILS - SUALLY - ▁YAM - FORD - ▁FORMAL - ▁POK - ▁FROZE - ▁MULTIPLICATION - ▁SEJONG - ▁TRIES - ▁SUNSHINE - ▁HERBS - ▁STRIPES - ▁CLIMBING - ▁SKIPP - FFE - ▁DAMAGE - ▁RIDICULOUS - ▁QUACK - ▁PINNOCHIO - SIDE - ▁STANDARD - ▁TRADITION - GIANT - ▁YELL - ▁SUPER - ▁OVERREACT - ▁PERFUME - ▁UNDERCOOK - BEC - ▁MAPS - ▁PARTNERS - ▁SPINACH - ▁TTEOKGUK - ▁JAJANGMYEON - ▁DIRECTLY - VATE - STEE - ▁MOUSES - ▁SNOWED - ▁IGNORE - GIFT - ▁LOCKER - ▁SURVIV - ▁P - BBLES - DAIRY - ▁TOOLS - STAR - LING - ▁BB - ▁ACCESSORIES - ▁NINTENDO - ▁BIBIMBAP - ▁DERMATITIS - ▁ANNOUNCED - ▁LICK - ▁AZALEAS - ▁PEPPER - VAS - ▁BODIES - ▁EXPAND - PED - FLOWING - ▁MIXED - ▁GROUP - ▁SAUSAGE - ▁CEREAL - ▁EASIEST - ▁OVERSLEEP - ▁SATISF - ▁150 - ▁BAY - ▁DIP - UN - AK - ▁COINS - ▁SURPRISES - ▁WAK - OL - ▁EVILDOING - ▁EYEBROWS - ▁HEADBAND - ▁KETCHUP - ▁PROPERLY - ▁STRAWBERRIES - ▁UNFORTUNATE - ITY - LIKE - ONG - ▁WISHES - ▁CONSTRUCTION - ▁RESEARCH - ▁RIPPED - ▁FOREIGNERS - ▁SANDALS - ▁GOLDEN - ▁PERFORMANCES - ▁STEALING - HA - ▁SPARE - ▁KPOP - ▁LEASH - ▁TIGHTLY - CM - ▁COMME - ▁500 - ▁ANCHOVIES - ▁BANKBOOK - ▁COVIDNINETEEN - ▁DEFINIT - ▁UPRIGHT - ▁MISSION - BAL - PHONES - HO - ▁GENERAL - ▁OVEN - ▁MARCH - V - HU - ▁GROWN - ▁BROADCAST - ▁GANGWONDO - ▁REFRESHING - ▁DICE - ▁RACK - ▁PERM - ▁SUITCASES - ▁16 - ▁ENVELOPE - ▁HOOKED - ▁ROOT - ▁TEXT - ▁CAGE - GO - ▁MUS - ▁DOUGHNUTS - ▁WASTING - ▁BETIAN - ▁PRESENTING - ▁BRUISE - ▁ALOUD - ▁AUDITORIUM - ▁BTS - PLE - RAISED - MOTION - ▁GENTLE - ONIA - ▁EASIER - ▁FONDUE - ▁SEASICK - ▁VR - ▁DOLPHINS - ▁MATCHES - UR - ACHE - ▁CICADAS - ▁LEAN - ▁REPORTS - YING - ▁CLOUDS - ▁WOLVES - ▁HEEL - ▁FRESHMAN - ▁SCREAMED - ▁RELATIVE - ARIN - ▁BUR - ▁PASTE - ▁FRIENDLY - ABLE - ▁VISITING - ▁INVIT - ▁LOUDSPEAKERS - ▁NNN - ▁OINTMENT - ▁SWAN - CLES - ▁GARDENING - ▁HICCUP - IM - '0' - ND - BA - ▁JULY - ▁SEMESTER - ▁SUSHI - ▁UNIVERSE - ▁TOSUN - ▁PILLS - ▁TAN - ▁NEAT - ▁FEATHER - ▁ANNEX - ▁PENGO - ▁SICKNESS - ▁CANDLES - LO - ▁SCRUB - ▁SHOOT - ▁TH - ▁CRACK - PLAIN - ▁FRIDGE - ▁ANSWERING - ▁INDOORS - ▁APOLOGIZED - ▁COMEDIANS - ▁WOR - ▁SPIN - ▁DRACULA - ▁DRAGONFLIES - ▁EXTINGUISHER - ▁GRADUATION - ▁LADIES - ▁EX - ▁PLANNED - ▁50 - ▁MILLIONS - ▁TANGERINES - ▁DRAWN - ▁CLEANER - ▁DECORATIONS - ▁SPI - ▁VARI - ▁DRAGONFLY - ▁SCENT - ▁GAYAGEUM - ▁CL - ▁MONTHS - ▁PAJAMAS - ▁RESTING - ISE - ▁BADGES - WORK - KY - ▁ADORES - ▁COLA - ▁MOTOR - ▁PRODUCE - ▁THOROUGHLY - ▁VOWELS - ▁COMMON - PING - ▁SUNFLOWER - ▁FOLDING - ▁DECORAT - '8' - ▁SCREAM - ▁CONNECT - ▁AUGUST - ▁PURPOSE - ▁PIAN - ▁CHIMNEYS - ▁MONDAYS - JU - ▁BEETLE - ▁PEED - ▁INTEREST - ▁BAN - ▁SNOR - ▁MA - ▁SEW - ▁COIN - ▁HAN - ▁ALPHABETS - ▁TONKATSU - ▁HOPEFULLY - ▁ICECREAM - ▁REGULARLY - ▁GALBI - ▁CHAS - ▁REALIZE - ▁WORKERS - ▁BOATS - ▁INTERRUPT - ▁SUBTRACT - ▁ORGANIZING - ▁HISTORIC - ▁POTTER - ATION - ▁CHARGER - ▁BAL - ▁SUNLIGHT - ▁DYE - ▁SHOELACES - ▁EVENLY - RY - '30' - BIKE - ▁CRAWL - ▁CHOOS - ▁ROBBINS - ▁SHOOK - ▁SPLASH - ASKIN - ▁UNTIE - YMP - ▁STING - IOUS - ▁PA - ▁CAROLS - ▁SUDDEN - ▁MACKEREL - ▁NOSEBLEED - ▁SCREW - ▁HANOK - TOMS - ▁STRA - DAY - ▁RIBBON - MILKY - BEAN - ▁TOMATO - ▁NATIONAL - ▁SPRITE - ▁PANIX - ▁WISE - ZED - ▁CHEWING - ▁FOOTS - ▁SHAKES - ADA - 'NO' - ▁DIFFERENTLY - SLEEVE - ▁930 - 
▁GYEONGJU - ▁RAPUNZEL - ▁ROMANTIC - ▁FARTHER - ▁CAPE - IER - ETY - ▁HARDEST - ▁TURNING - ▁3000 - GENEROUS - ▁BOO - ▁ATTENTION - ▁DWARVES - ▁HAKNYEON - ▁OUTDOOR - ▁RESORT - ▁SWOLLEN - ▁PINCH - ▁PURE - STER - ▁GRAB - ▁BIO - ▁HURRICANE - ▁JUDGE - ▁LANE - ▁OINK - ▁SPRAINED - ▁THIEVES - ▁TRAPPED - BIL - ▁RANCH - ▁TWENTYTH - ▁ANNE - OLD - NIGHT - ▁HEIGHTS - ▁BRICK - ▁GRATEFUL - ▁VITAMIN - ▁HAMSTER - ▁USELESS - ▁INVENTOR - ▁ULSAN - ▁PRETENDING - ▁PANDAS - GGING - UL - AG - COMING - ▁HUNT - ▁REMOVE - ▁OCTOBER - ▁SEPARATE - ▁YAWN - ▁PALE - ▁UM - ▁FLOATING - ▁CO - HAVE - ▁SNOWY - ▁SHOELACE - GRAPHY - ▁MELT - ▁FISHBONE - UG - ▁CHIL - ▁POOPED - ▁YUT - ▁PILL - '0000' - ▁SURVIVE - ▁EXAMIN - ▁TRU - ▁BACKGROUND - ▁BEGINNING - ▁MACARONS - ▁SURFING - ▁VERANDA - ▁ASSEMBLE - ▁HANGUL - ▁REACTION - ▁DAUGHTERS - MENT - QUET - RMALLY - ANG - ▁LID - ▁RESERVATION - SOON - ▁FLIP - CAN - ▁JUICY - ▁KINGDOM - ▁SOCIETY - ▁TADPOLE - ▁JAMSIL - ▁WI - ▁GRADUATED - ▁PRE - ▁SCRATCHING - ▁PO - ▁APPEARS - ILY - FAT - FOOD - ▁DISAPPEAR - ▁FAINT - ▁FLOAT - ▁RUBB - ▁TRANSFER - ▁COMFORT - ▁BALLERINA - ▁DESCRIPTION - ▁GENTLY - ▁HAPPIER - ▁RINGTONE - ▁ARGUING - ▁CONDITIONER - PM - IET - CU - ▁EARTHQUAKES - ▁CHICK - ▁TR - ▁TYPHOON - ▁BUNS - ▁RUNNER - NDC - ▁WAH - ▁JELL - ENDY - ▁COMMU - ▁FARMS - ▁SLEEVES - ▁BEETLES - LOW - ▁MEATBALL - ALKIE - ▁MAGNIF - ▁CONNIE - ▁NEIGHBOR - ▁OPERA - ▁PINOCCHIO - ▁SHOEMAKER - ▁CRAFT - ▁ONESIX - ▁FLOW - WD - HOO - ▁PRESENTATIONS - ▁CHIP - ITE - ▁ANIMAT - ▁DUB - ▁FLOOD - ▁KAKAO - ▁RESU - ▁UNBELIEVABLE - ▁GRIN - ▁HEALTHIER - ▁SIXTH - ▁CHOSEN - ▁LOSER - ▁BLED - REALLY - ▁IGNOR - ▁PRODUCT - RIST - ▁DISCOURAGED - ▁DODGE - ▁FORECAST - ▁OWL - ▁TREASURE - ▁UNIFORM - ▁LOCAT - ▁TUBE - DON - ▁FOLDED - ▁WEIGH - ▁RUIN - ▁CRUSH - ▁PARAD - ▁OBESE - ▁ORGANIZE - ▁PRINCIPAL - ▁RATTLING - ▁RESERVE - ▁RHYM - ▁SIP - ▁UNDERWATER - ▁TAEG - ▁TRAVELLING - ▁STACK - ▁RI - ▁BUNDLES - YEAR - SAME - AND - ▁CHEESECAKE - ▁EPISODE - ▁FAMILIES - ▁FIFTH - ▁RHINITIS - ▁SAUNA - NCHES - ▁EXCE - TIQUE - ▁COMBO - ▁STRINGS - ▁COLORFUL - ▁FLOWS - ▁COOLEST - ▁OPPAS - ATING - ATE - ▁MELTS - ▁CHOPSTICK - ▁BRANCH - ▁FRUSTRATED - ▁GREASY - ▁EXIST - ▁WAVING - ▁APP - ▁SODA - ▁FALLEN - ▁PRO - SHAPED - NG - ▁CONNECTED - ▁12 - ▁BANDAID - ▁DISTANCE - ▁DRAIN - ▁MEASURE - ▁TEMPLE - ▁WORKBOOK - ▁EIGHTAM - ▁WARN - ▁BURNT - BOARD - ▁DE - IFF - RTH - ▁MUSHROOMS - ▁POWERFUL - STICK - ▁VOUCHERS - ▁BLEED - ▁BRAID - ▁CREPE - ▁HAWKING - ▁FLAM - ▁SCORE - ▁RELEASED - ▁TICKLED - BU - FISH - ATIVE - CLUSI - ▁CLINIC - ▁CROOKED - ▁RELAY - ▁SCOOTER - ▁SEBASTIAN - ▁SUFFER - ▁TEENAGER - ▁BATHHOUSE - ▁WRIST - ▁BAKERIES - ▁BRANCHES - ▁SAMYUKGU - ▁SCU - ENDER - ▁INGREDIENTS - ▁INVENTED - ▁BOWING - SSES - WAR - ▁PRESSED - ▁SQUEEZ - SIGNED - WON - ▁70 - ▁APPROACH - ▁CHAPPED - ▁DUMB - ▁FREEZING - ▁MAGNIFIER - ENTIAL - IE - ▁CLOSELY - ▁DIAPERS - OUS - ▁DIRT - ▁CENTIMETER - ▁FLOWERPOT - ▁FOAM - ▁POLITIC - ▁PORRIDGE - ▁PEDIATRICIAN - ▁FIREWORKS - ▁TROUBLEMAKER - ▁PILLAR - ▁EVACUATE - ▁SILLA - EUK - ANDING - ▁FAINTED - ERMAN - ▁SEAGULL - ▁CHICKS - ▁SWEATING - INGO - PAPER - ▁AGREED - ▁CLAPP - VA - ▁STRENGTH - SOONGSIL - ‘ - ▁CONVENIENT - ▁DECEMBER - ▁FORTUNATELY - ▁FURNITURE - ▁HAGWON - ▁LOUNGE - ▁MOKDONG - ▁PALM - ▁SPRINKLE - ▁STIRFR - RUNK - ▁ANKLE - ▁SELF - ▁SEVENTH - LESS - ▁DIVING - ADE - ▁RANG - SHINY - WITH - ▁BRAVELY - ▁BADMINTON - ▁BULGUKSA - ▁KARAOKE - ▁ADMIT - ▁GINGER - ▁LAID - ▁SNOWBOARD - ▁HOPPING - ▁UDO - ▁BULGING - ▁CARP - ▁FACT - ▁GROUPS - ▁ENTERING - ▁RIP - ▁MAR - LOCK - ▁JE - ▁ADMISSION - ▁CHRYSANTHEMUM - ▁DIARIES - ▁DISPOSABLE - ▁LOACH - ▁PARROT - ▁SCULPTURE - 
▁TERRIF - ▁VOLUME - ▁REPRESENTATIVE - ▁MEOW - ▁CHEEK - ▁JEJUDO - ▁HARMFUL - ▁BRUISED - ▁MINERAL - AINT - ▁EDIT - WARDS - HY - ▁VIEW - ▁EXACT - ROUGHT - OCKPAPERSCISSORS - ▁CHESTNUT - ▁HAWAII - ▁PIMPLES - ▁REMOTE - ▁SOLUTION - ▁COMPETE - ▁SOFTLY - ▁BUNDLE - ▁LIP - ▁GRADER - WOO - RIS - STORY - DAYS - COLORED - FOR - ▁COLLAPSE - ▁STEPP - ▁BRILL - RSELVES - ▁ACCORDING - ▁BACON - ▁BAEK - ▁BUTTERFLIES - ▁COSMOS - ▁CYCLING - ▁DISTRICT - ▁ESTATE - ▁HUMID - ▁MERMAID - ▁PAPRIKA - ▁PHONICS - ▁BELONG - ▁YUKJANG - ▁ANIMATION - ▁FLIPP - ▁DUMPLING - ▁BLOSSOM - UNG - ▁EXPLORE - ▁INSECTS - ▁JI - HEART - GHTS - ▁ASTRONAUT - ▁BELLHAMMER - ▁LICENSE - ▁NEPTUNE - ▁OPPOS - ▁REFRIGERATOR - ▁STONEBUSH - ▁1000 - ▁APPLI - ▁SUBTRACTION - ▁HOOD - ▁WIDER - ▁BROOM - ▁UNIVERSITY - ▁PRINCESSES - ▁MINT - ▁PARENT - ▁PEEING - ▁ADORE - DONG - ▁SP - ANCE - ▁EXPLOR - TTEOKBOKKI - WHEEL - ▁ABANDONED - ▁CALLUSES - ▁COSMETICS - ▁LADYBUG - ▁MARIA - ▁PRONUNCIATION - ▁BOUQUET - ▁SOGGY - ▁LEFTOVERS - ▁MIKE - ▁TANK - ▁SPAC - ▁FRAME - MADE - IVAL - ▁YE - ▁GATHERING - IAN - ▁KITTENS - IBLE - ▁ABBREVIAT - ▁CHAPAGETTI - ▁ENGINES - ▁EQUIPMENT - ▁INTERSECTION - ▁SANITIZER - ▁DOKDO - ▁GENERATOR - ▁MEDIUM - ▁BALANCE - ▁CHART - ▁TELEVISION - ▁JAJANG - ▁LOLLY - ▁PHOTOGRAPH - ORD - ▁KKA - ▁SOLES - ▁BALM - ▁DECORATION - ▁THORN - ▁ARMY - ▁YU - EEK - NK - BOY - LENGTH - TONY - HEN - ▁RELEASE - ▁LOOSE - ▁COMPLETE - KYOCHON - ▁ARCADE - ▁BRIM - ▁CORONA - ▁CRANE - ▁CUPCAKE - ▁KITCHENWARE - ▁LULLABY - ▁MODER - ▁MUSKET - ▁OBEDIEN - ▁PIKACHU - ▁PROVERBS - ▁SALMON - ▁YUKGAEJANG - ▁TANNED - ▁VILLA - ▁DIRECTIONS - ▁CLAY - ▁ADMIR - ▁DIRECTOR - ▁DAMAGED - ▁BURST - ▁TOPIC - ▁DOODLED - ▁COMPAR - ▁BUBBLE - ▁HO - ▁KISSE - ▁JO - ▁BLOATED - ▁CONSONANTS - ▁DOWNLOAD - ▁ELBOW - ▁FUNNIEST - ▁PORORO - ▁SLOTS - ▁VACUUM - ▁BOTTOM - ▁MANDELA - ▁IMSIL - ▁VIP - ▁TOMMY - EATURE - ▁PINE - ▁EIGHTTHIRTY - ▁HIDEANDSEEK - ▁COLLAPSED - ▁UNDERSTOOD - ▁CRUSHED - ▁TRI - OF - ▁DI - ▁CARNATION - ORY - NAILS - LENT - ▁PUBLISH - PLACE - ▁CLIP - ILLA - ▁SUNSHIN - ▁ACTUAL - ▁SUCCESS - COCK - ▁60 - ▁BENEFITS - ▁CLAW - ▁HAUNT - ▁LIBRARIES - ▁LOTTERIA - ▁MERCURY - ▁MITTEN - ▁SWAM - ▁ROTTEN - ▁SERVANT - DENTAL - ▁LEGEND - ▁ROT - ▁PRICKED - ▁230 - ▁TUB - ▁WINK - ▁HUNTER - ▁SCREAMING - ▁FINALE - ▁SOAPY - ▁REDESIGNING - NNA - ▁DIAPER - ▁BANG - IK - CHAN - TIER - ▁MOR - ▁METERS - ▁HUGG - DAE - FTER - CHO - SHIP - EITHER - CTIVE - ▁KI - ▁RU - ▁BRAND - ▁AMOUNT - ▁EXPLANATION - ▁HAIRPIN - ▁HORRIBLE - ▁INTERIOR - ▁LANDSLIDE - ▁NEVERTHELESS - ▁PERSIMMON - ▁POSTPONE - ▁SCIENTIST - ▁SLACK - ▁STORM - ▁STREAM - ▁SURPRISING - ▁URGENT - ▁ZOMBIE - ▁STOOL - ▁LOAD - NAMBU - ▁ANNOUNCEMENT - IKES - GRAN - ▁ABC - ▁COMPLE - ▁FASCINATING - ▁REMOVED - ▁CRAWLING - ▁INTERRUPTING - RELLA - RAGE - ▁PEELING - ▁HUMANS - ▁MON - ▁BEGIN - ▁VEGETABLE - ▁SLEEVE - GLE - ▁THA - ISH - TRAINER - '7' - ROAD - DRIVER - ▁PRETEN - ▁ALLOW - UZZLE - ▁DEMONSTRAT - ▁STIR - ▁BROC - ▁CARCASON - ▁EQUALLY - ▁EXPERIMENT - ▁HESITAT - ▁SPINNING - ▁MENTOR - ▁ABBREVIATION - ▁RASHES - ▁ASSEMBLING - ▁DUNG - MEMOR - ▁PEACEFUL - ▁HARDENS - OSU - SSUED - ▁FRECKLE - TIOUS - ▁REALIZ - ▁SQUA - LIFE - THINK - ▁BIK - ▁KNIT - ZZA - ▁ALITTLE - ▁BAREFOOT - ▁CONCENTRATE - ▁DALGONA - ▁GUIDEBOOK - ▁KIDZANIA - ▁PALACE - ▁ROSHEN - ▁TEXTBOOK - ▁TUNAKIMBAP - OTTEOK - ▁830 - ▁HOSE - ITIES - NIX - ▁FIFTEENCM - ▁IMAGE - ▁CHEESEKIMBAP - ▁HOTTER - ▁PATT - ▁CLIPPE - ▁FOXES - EAGLE - ▁QUE - NDING - ▁DETER - AP - YEO - UED - ▁PAI - ▁EXCITEDLY - ▁WAVED - ▁BUL - BUT - ▁METER - KIMBAP - HAND - WATCHING - ▁CONVERS - ▁FLICK - ▁PEDIATRIC - NAMENT - REIGN - ▁BIKINI - ▁BUCKWHEATCREPE - 
▁JENGA - ▁LAUNCH - ▁OPTICIAN - ▁PIGTAIL - ▁SIMON - ▁SUBSCRIBE - ▁TICKLISH - NELS - ▁PINWHEEL - INATED - ▁DRUG - ▁ONESIXCM - ▁EIGHTH - ▁SMARTEST - ▁HUNTING - ▁PIL - UMMY - ITION - UNNI - ▁SU - ▁POWERFULL - ▁WAFFLE - DIA - ▁TICK - EIGHT - PICKED - FIFTY - WENT - ▁BOT - ▁REPRESENT - OKKI - ▁COCOA - ▁CUSHION - ▁FARTHEST - ▁PENTAGON - ▁SLIDING - ▁SWEAR - ▁MOLD - ▁BBOY - ▁80 - ▁WATERPROOF - ▁RAIL - ▁CREATED - ▁CHIRPING - ▁SEARCH - SEOK - ▁TOAST - ▁BETRAYE - JOR - ▁NI - ZI - ▁SLAMM - ▁GU - ▁NAG - ▁SERVED - UFFY - ▁INSECT - ▁ZIPPE - LP - YEONG - ESSION - IPPED - ▁CELEBRAT - ▁CHANG - '50' - POST - ENTI - ▁DISAPPOINT - ▁QU - ▁FOREIGN - ▁POSSIB - ▁CONGRATULAT - ADOW - ▁TAE - CAFÉ - ▁COURIER - ▁DAEJEON - ▁DOWNSTAIRS - ▁EXPER - ▁PREFERENCE - ▁LACT - ▁OCCUR - ORIENT - ▁SPACIOUS - INARY - ▁KNITTING - ▁LIBERTY - VILLE - RB - ▁BARKED - DAN - ▁TIN - ATOR - ▁PHO - RIED - ▁JINDA - OUND - HOE - ▁STRETCHE - ▁SNEEZ - EVI - QUALITY - MOM - ▁BLIND - HYEON - ECTION - ROKE - ▁ANCHOVY - ▁ASHAMED - ▁COASTER - ▁CONFUSING - ▁CYCLIST - ▁DANDELION - ▁FIREFLIES - ▁HYUNG - ▁KNOWLEDGE - ▁NARACULA - ▁SCAB - ▁VOCABULARY - ▁CONFIDENT - ▁RELAT - ▁FOOLISH - ▁NINEAM - ▁ZO - ▁BOU - ▁FLATTERED - ▁BLINDING - ▁SKATER - ▁ROLLER - ▁FIRM - COTT - NURI - ▁WARMER - ▁LONGEST - ▁TICKLE - ▁AMERICAN - GI - AGGED - CHARGE - TODAY - ▁CREATE - UMPING - JJAEK - ▁BEGINNER - ▁CLICKING - ▁CORRIDORS - ▁DAZZLING - ▁DERMATOLOGIST - ▁DILIGENT - ▁FEBRUARY - ▁FISHBOWL - ▁GARAETTEOK - ▁GARGLE - ▁INJURED - ▁MANTISES - ▁NAKSEONGDAE - ▁ROAST - ▁SNITCH - ▁SLIMMER - ▁DISCHARGE - ▁SOAKED - ▁SELECTED - ▁VICE - ▁INFECT - ▁CONTAINER - ▁NEATLY - ▁STARSHAPED - LOTTEWORLD - ▁SUPPLEMENT - ▁EIGHTTH - ISTERS - ▁TICKL - ▁STRAIGHTEN - ▁SKINN - RANGE - ▁TANGERINE - ▁STO - PREPARED - SPROUT - TWELVE - TONIGHT - ▁RECOGNI - VAN - BEEN - ▁EXPLODE - ▁CHUBB - ANGGU - ▁SAVI - ▁950 - ▁ADJUST - ▁CASTANETS - ▁FAITH - ▁GONGJU - ▁GRAIN - ▁GROSS - ▁JUPITER - ▁MAGPIE - ▁SAIPAN - ▁SKULL - ▁SPARROW - ▁VACCINATED - ▁VIGOROUSLY - ▁AUTOMATIC - ▁NEARBY - SEVENTEEN - ▁TWENTI - ▁NIKE - ▁SEORA - DATORS - ▁PONG - ▁730 - ▁SCARIER - ▁TRUNK - ▁BETRAYER - ▁CHEESEGIMBAP - ONGDAE - ▁SEVERE - ▁SPOONFUL - CTATION - ▁WITCH - ▁LIMIT - ▁EATTTEOKBOKKI - GEOUS - ▁CRAWLED - ▁SUC - AVED - AGE - ▁KITTEN - ▁SKEWER - IZED - ▁TEAR - WAVE - ▁RACI - ▁CONTAIN - ▁TRO - ▁GUGUDAN - ▁GEPPET - ▁PHARMACI - MULGUK - PPAK - SAMJANG - ▁ACORN - ▁APPETITE - ▁BRUNCH - ▁BUMMER - ▁DIARRHEA - ▁FLAP - ▁GERMS - ▁GWANSUN - ▁HOMETOWN - ▁KILOMETERS - ▁MARRIAGE - ▁PRANKS - ▁RADISH - '5' - ′ - 수 - '2' - ́ - 子 - 예 - 요 - '3' - É - '6' - '9' - “ - . 
- '1' - 단 - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/ko_token_list/bpe_unigram5000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 aux_ctc_tasks: [] frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_ko_bpe5000_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: contextual_block_conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true activation_type: swish macaron_style: true use_cnn_module: true cnn_module_kernel: 15 block_size: 40 hop_size: 16 look_ahead: 16 init_average: true ctx_pos_enc: true postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202304' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
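For reference, a minimal ESPnet2 decoding sketch is shown below. The config and checkpoint paths are placeholders for the files this recipe exports, the decoding options mirror `ctc_weight: 0.3` above, and, since the encoder is a streaming contextual-block Conformer (block_size 40, hop 16, look-ahead 16), offline decoding with `Speech2Text` is only an approximation of the intended streaming use.

```python
# Minimal offline-decoding sketch with ESPnet2's Speech2Text API.
# The config and checkpoint paths are placeholders for this recipe's exported files.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text(
    asr_train_config="exp/asr_train/config.yaml",       # placeholder path
    asr_model_file="exp/asr_train/valid.acc.ave.pth",    # placeholder path
    beam_size=10,
    ctc_weight=0.3,  # matches model_conf.ctc_weight in the config above
)

speech, rate = soundfile.read("sample.wav")  # 16 kHz mono audio, per "frontend_conf: fs: 16k"
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis of the n-best list
print(text)
```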
Waterhorse/chessgpt-chat-v1
Waterhorse
2023-07-06T06:20:40Z
124
10
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "dataset:Waterhorse/chess_data", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:OpenAssistant/oasst1", "dataset:vicgalle/alpaca-gpt4", "arxiv:2306.09200", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-03T21:18:08Z
---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
- anon8231489123/ShareGPT_Vicuna_unfiltered
- OpenAssistant/oasst1
- vicgalle/alpaca-gpt4
---

# Chessgpt-Chat-v1

Chessgpt-Chat-v1 is the SFT-tuned (supervised fine-tuned) version of Chessgpt-Base-v1.

- Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
- Chat Version: [Chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)

We are also actively working on the next-generation model, ChessGPT-V2. We welcome any contribution, especially chess-related datasets. For related matters, please contact [email protected].

## Model Details

- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter pretrained language model for chess.

## GPU Inference

This requires a GPU with 8GB of memory.

```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
# Conversation between two
prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1:"
# Conversation between more than two
#prompt = "A friendly, helpful chat between some humans.<|endoftext|>Human 0: 1.e4 c5, what is the name of this opening?<|endoftext|>Human 1: Sicilian defense.<|endoftext|>Human 2:"

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.7,
    top_k=50,
    return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
```

# Uses

Excluded uses are described below.

### Direct Use

`chessgpt-chat-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling.

#### Out-of-Scope Use

`chessgpt-chat-v1` is a language model trained on chess-related data and may not perform well on use cases outside the chess domain.

#### Bias, Risks, and Limitations

Just as with any language model, chessgpt-chat-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.

# Evaluation

Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.

# Citation Information

```bibtex
@article{feng2023chessgpt,
  title={ChessGPT: Bridging Policy Learning and Language Modeling},
  author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
  journal={arXiv preprint arXiv:2306.09200},
  year={2023}
}
```
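Because turns in the prompt are separated by `<|endoftext|>`, one simple post-processing sketch (an assumption for illustration, not part of the official usage) is to cut the generated continuation at the first separator so only the current turn is kept:

```python
# Sketch: keep only the newly generated turn.
# Assumes `tokenizer` and `token` come from the inference snippet above.
reply = tokenizer.decode(token, skip_special_tokens=False)
reply = reply.split("<|endoftext|>")[0].strip()  # drop anything past the turn separator
print(reply)
```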
Waterhorse/chessgpt-base-v1
Waterhorse
2023-07-06T06:19:40Z
83
6
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "en", "dataset:Waterhorse/chess_data", "arxiv:2306.09200", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-02T22:03:14Z
---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
---

# Chessgpt-Base-3B-v1

Chessgpt-Base-v1 is the base model of Chessgpt.

- Base Model: [Chessgpt-base-v1](https://huggingface.co/Waterhorse/chessgpt-base-v1)
- Chat Version: [chessgpt-chat-v1](https://huggingface.co/Waterhorse/chessgpt-chat-v1)

We are also actively working on the next-generation model, ChessGPT-V2. We welcome any contribution, especially chess-related datasets. For related matters, please contact [email protected].

## Model Details

- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter pretrained language model for chess.

## GPU Inference

This requires a GPU with 8GB of memory.

```python
import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-base-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-base-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
prompt = "Q: 1.e4 c5, what is the name of this opening?A:"

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.7,
    top_k=50,
    return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
```

# Uses

Excluded uses are described below.

### Direct Use

`chessgpt-base-v1` is mainly intended for research on large language models, especially research on policy learning and language modeling.

#### Out-of-Scope Use

`chessgpt-base-v1` is a language model trained on chess-related data and may not perform well on use cases outside the chess domain.

#### Bias, Risks, and Limitations

Just as with any language model, chessgpt-base-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.

# Evaluation

Please refer to our [paper](https://arxiv.org/abs/2306.09200) and [code](https://github.com/waterhorse1/ChessGPT) for benchmark results.

# Citation Information

```bibtex
@article{feng2023chessgpt,
  title={ChessGPT: Bridging Policy Learning and Language Modeling},
  author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
  journal={arXiv preprint arXiv:2306.09200},
  year={2023}
}
```
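If no GPU is available, the same checkpoint can also be loaded on CPU in full precision. A rough sketch follows (noticeably slower, and a 2.8B-parameter model in float32 needs roughly 11 GB of RAM, which is an estimate rather than a measured figure):

```python
# Sketch: CPU inference in float32; greedy decoding for a reproducible answer.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-base-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-base-v1", torch_dtype=torch.float32)

prompt = "Q: 1.e4 c5, what is the name of this opening?A:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)  # greedy decoding
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:]))  # strip the prompt tokens
```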
sukritiverma/thumbs-up-tom_cruise
sukritiverma
2023-07-06T06:14:17Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-05T23:31:34Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA text2image fine-tuning - sukritiverma/thumbs-up-tom_cruise

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. Some example images are shown below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
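A usage sketch for loading these LoRA weights on top of the base model with diffusers (the `load_lora_weights` API is available in recent diffusers releases; the prompt, output path, and LoRA scale are illustrative assumptions, not values from the card):

```python
# Sketch: apply the LoRA weights to the base Stable Diffusion 1.5 pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("sukritiverma/thumbs-up-tom_cruise")  # this repo's LoRA weights

image = pipe(
    "a man giving a thumbs up",                # illustrative prompt
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.8},     # LoRA strength (assumed value)
).images[0]
image.save("thumbs_up.png")
```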
nolanaatama/3drndrngstyl
nolanaatama
2023-07-06T05:37:10Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T05:19:33Z
--- license: creativeml-openrail-m ---
eigenscribe/etzHayim
eigenscribe
2023-07-06T05:34:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T05:33:49Z
--- license: creativeml-openrail-m ---
insub/distilbert-base-uncased-finetuned-imdb
insub
2023-07-06T05:22:05Z
124
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-06T05:17:00Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086        | 1.0   | 157  | 2.4897          |
| 2.5796        | 2.0   | 314  | 2.4230          |
| 2.5269        | 3.0   | 471  | 2.4354          |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
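A quick usage sketch with the `fill-mask` pipeline (the example sentence and printed formatting are illustrative, not taken from the card):

```python
# Sketch: masked-token prediction with the fine-tuned checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="insub/distilbert-base-uncased-finetuned-imdb")

# [MASK] is the mask token for distilbert-base-uncased
for pred in fill_mask("This movie was a great [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```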
aroot/eng-fra-simcse_random
aroot
2023-07-06T05:13:07Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-06T04:53:15Z
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_random
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# eng-fra-simcse_random

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1475
- Bleu: 31.8135

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
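Because the base checkpoint is mBART-50 many-to-many, inference needs explicit language codes. A usage sketch follows, assuming the fine-tune keeps mBART-50's tokenizer files and `en_XX`/`fr_XX` codes (the input sentence is illustrative):

```python
# Sketch: English -> French translation with the fine-tuned mBART-50 checkpoint.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "aroot/eng-fra-simcse_random"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX", tgt_lang="fr_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # force French as the target language
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```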
tuanio/WhisperCTC
tuanio
2023-07-06T05:06:09Z
0
1
null
[ "summarization", "dataset:mozilla-foundation/common_voice_13_0", "arxiv:1910.09700", "region:us" ]
summarization
2023-07-06T04:55:16Z
--- datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer pipeline_tag: summarization --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ```python class WhisperCTC(nn.Module): def __init__( self, encoder_id: str = "tuanio/whisper-encoder.tiny.en", dropout: float = 0.1, vocab_size: int = 47, ): super().__init__() self.encoder = WhisperEncoder.from_pretrained(encoder_id) print("Freezing Whisper Encoder...") self.encoder._freeze_parameters() print("Freezed!") self.lm_head = nn.Sequential( nn.SiLU(), nn.Dropout(dropout), nn.Linear(self.encoder.config.d_model, vocab_size), ) nn.init.kaiming_uniform_( self.lm_head[-1].weight, mode="fan_in", nonlinearity="relu" ) def forward(self, feat: Tensor, attn_mask: Tensor): enc = self.encoder( input_features=feat, attention_mask=attn_mask ).last_hidden_state logits = self.lm_head(enc) log_probs = nn.functional.log_softmax(logits, dim=-1) return log_probs ``` - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data - IndictTTS: https://www.kaggle.com/datasets/tuannguyenvananh/indictts-english [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters ```yaml data_cfg: dataset: processor: feat_extractor_id: ${model_cfg.model.encoder_id} tokenizer_id: ${model_cfg.tokenizer_id} path: base: indict_tts: ../IndicTTS cv: ../ train: - train_data/indict_tts_train.jsonl # - train_data/cv_train.jsonl test: - train_data/indict_tts_test.jsonl # - train_data/cv_test.jsonl dev: - train_data/indict_tts_dev.jsonl # - train_data/cv_dev.jsonl dataloader: batch_size: 46 num_workers: 8 pin_memory: True model_cfg: tokenizer_id: tuanio/wav2vec2-phoneme-ipa-ctc model: dropout: 0.1 encoder_id: tuanio/whisper-encoder.medium.en optim: lr: 1.25e-05 betas: [0.9, 0.998] weight_decay: 0.01 scheduler: name: linear total_steps: -1 warmup_ratio: 0.05 interval: step frequency: 1 trainer_cfg: log: wandb: True logger_wandb: project: aped_indian-lish name: whisper-medium-indict-tts-only-from-epoch1 log_model: all arguments: accelerator: gpu devices: -1 max_epochs: 10 log_every_n_steps: 1 enable_checkpointing: True accumulate_grad_batches: 2 inference_mode: True gradient_clip_val: 5.0 check_val_every_n_epoch: 1 val_check_interval: null experiment_cfg: train: True valid: True test: True ckpt: resume_ckpt: True ckpt_path: ckpt/medium.epoch3.ckpt ``` #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
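Since `WhisperCTC.forward` returns frame-level CTC log-probabilities, decoding needs the usual CTC collapse step. A minimal greedy-decoding sketch is shown below; it assumes the blank token has id 0 (the actual blank index depends on the tokenizer set in `model_cfg.tokenizer_id`, i.e. tuanio/wav2vec2-phoneme-ipa-ctc), so treat that as a placeholder.

```python
# Sketch: greedy CTC decoding of the log-probs returned by WhisperCTC.forward.
# Assumes blank_id=0; check the tokenizer's vocabulary for the real blank index.
import torch

def ctc_greedy_decode(log_probs: torch.Tensor, blank_id: int = 0) -> list:
    """log_probs: (batch, time, vocab) -> list of token-id sequences."""
    ids = log_probs.argmax(dim=-1)              # (batch, time): best class per frame
    decoded = []
    for seq in ids:
        seq = torch.unique_consecutive(seq)     # collapse repeated frames
        seq = seq[seq != blank_id]              # drop CTC blanks
        decoded.append(seq.tolist())
    return decoded

# token_ids = ctc_greedy_decode(model(feat, attn_mask))
# text = tokenizer.decode(token_ids[0])         # tokenizer from model_cfg.tokenizer_id
```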
whiteDandelion/swin-tiny-patch4-window7-224-finetuned-eurosat
whiteDandelion
2023-07-06T05:01:12Z
228
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T04:12:49Z
--- tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9805 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [andupets/real-estate-image-classification](https://huggingface.co/andupets/real-estate-image-classification) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0613 - Accuracy: 0.9805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.089 | 0.99 | 140 | 0.1050 | 0.9635 | | 0.0565 | 2.0 | 281 | 0.0760 | 0.9725 | | 0.0421 | 2.98 | 420 | 0.0613 | 0.9805 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
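A usage sketch with the `image-classification` pipeline ("room.jpg" is a placeholder path, and the label set comes from whatever imagefolder dataset was used for fine-tuning):

```python
# Sketch: classify a single image with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="whiteDandelion/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# "room.jpg" is a placeholder; a URL or PIL.Image also works
for pred in classifier("room.jpg", top_k=3):
    print(f"{pred['label']:<30} {pred['score']:.3f}")
```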
KPrashanth/Reinforce_Agent_playing_Cartpole_v1
KPrashanth
2023-07-06T04:36:55Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-06T04:36:41Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Agent_playing_Cartpole_v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
headflame02/AchaxV4
headflame02
2023-07-06T04:30:16Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-06T04:29:24Z
--- license: creativeml-openrail-m ---
NasimB/gpt2-concat-cbt-rarity-2k-p3k
NasimB
2023-07-06T04:28:43Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T02:13:04Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-cbt-rarity-2k-p3k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-cbt-rarity-2k-p3k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.0083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7186 | 0.29 | 500 | 5.6281 | | 5.3685 | 0.58 | 1000 | 5.1947 | | 5.0278 | 0.87 | 1500 | 4.9465 | | 4.7459 | 1.17 | 2000 | 4.8014 | | 4.5838 | 1.46 | 2500 | 4.6757 | | 4.4777 | 1.75 | 3000 | 4.5664 | | 4.3633 | 2.04 | 3500 | 4.4935 | | 4.1601 | 2.33 | 4000 | 4.4512 | | 4.1388 | 2.62 | 4500 | 4.3967 | | 4.1004 | 2.91 | 5000 | 4.3434 | | 3.9085 | 3.21 | 5500 | 4.3385 | | 3.8559 | 3.5 | 6000 | 4.3100 | | 3.8409 | 3.79 | 6500 | 4.2772 | | 3.7507 | 4.08 | 7000 | 4.2758 | | 3.5677 | 4.37 | 7500 | 4.2717 | | 3.5771 | 4.66 | 8000 | 4.2566 | | 3.5653 | 4.95 | 8500 | 4.2354 | | 3.3565 | 5.24 | 9000 | 4.2632 | | 3.3184 | 5.54 | 9500 | 4.2598 | | 3.3222 | 5.83 | 10000 | 4.2510 | | 3.2596 | 6.12 | 10500 | 4.2621 | | 3.1718 | 6.41 | 11000 | 4.2643 | | 3.1656 | 6.7 | 11500 | 4.2647 | | 3.1666 | 6.99 | 12000 | 4.2645 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
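For reference, the hyperparameters listed above roughly correspond to the following Hugging Face `TrainingArguments` (a hedged reconstruction: the output directory is a placeholder, and anything not listed in the card, such as the Adam betas/epsilon, is left at the Trainer defaults):

```python
# Sketch: TrainingArguments mirroring the hyperparameters reported above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-concat-cbt-rarity-2k-p3k",  # placeholder output directory
    learning_rate=5e-4,                          # 0.0005
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=7,
    fp16=True,                                   # "mixed_precision_training: Native AMP"
)
```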
omnitron/PPO-Huggy
omnitron
2023-07-06T04:23:24Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-06T04:22:59Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:

- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: omnitron/PPO-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
ocisd4/openllama-zh-7B
ocisd4
2023-07-06T04:13:52Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-06T03:46:10Z
```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained(
    'ocisd4/openllama-zh',
    add_bos_token=False,
    add_eos_token=False,
    use_auth_token=True,
    use_fast=False)

model = LlamaForCausalLM.from_pretrained('ocisd4/openllama-zh', device_map='auto', use_auth_token=True)

prompt = '關於華碩的傳說'  # "The legend of ASUS"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generation_output = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    do_sample=True,
    top_k=40,
    top_p=0.95,
    temperature=0.7,
    repetition_penalty=1.08,
)
print(tokenizer.decode(generation_output[0]))
```

This is a 7B pretrained model, trained from the OpenLLaMA pretrained weights, with a context size of 2048.

**We will keep updating with new models.**
dangvansam/whisper-base-vi
dangvansam
2023-07-06T04:09:35Z
75
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "vi", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-05T10:42:24Z
---
language:
- vi
pipeline_tag: automatic-speech-recognition
---
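The card only carries metadata (Vietnamese, automatic speech recognition), so here is a hedged usage sketch with the ASR pipeline; "speech_vi.wav" is a placeholder audio file:

```python
# Sketch: Vietnamese speech-to-text with the automatic-speech-recognition pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="dangvansam/whisper-base-vi")

# "speech_vi.wav" is a placeholder; the pipeline decodes and resamples the audio as needed
result = asr("speech_vi.wav")
print(result["text"])
```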