Dataset schema (one record per model):

modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: sequence
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
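Given the Arrow types above, this dump is most naturally read from a Parquet file. A minimal sketch, assuming the records are stored in a (hypothetical) `models.parquet` with exactly this schema:

```python
import pandas as pd

# Load the dump and peek at a few fields; the filename is an assumption,
# not part of the dataset itself.
df = pd.read_parquet("models.parquet")
print(df[["modelId", "downloads", "likes", "pipeline_tag"]].head())
```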
modelId: DanGalt/Reinforce-MLP-Pixelcopter-PLE-v0
author: DanGalt
last_modified: 2023-01-07T06:57:19Z
downloads: 0
likes: 0
library_name: null
tags: ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"]
pipeline_tag: reinforcement-learning
createdAt: 2023-01-07T06:14:10Z
card:
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-MLP-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 29.30 +/- 18.72
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
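Since the card defers usage to the course, here is a minimal sketch of the REINFORCE update such an agent is trained with; the `Policy` network, hidden size, and discount factor are illustrative assumptions, not this checkpoint's actual code.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class Policy(nn.Module):
    """Small MLP policy over discrete actions (illustrative architecture)."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def act(self, obs):
        dist = Categorical(logits=self.net(obs))
        action = dist.sample()
        return action.item(), dist.log_prob(action)

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted return-to-go at each step, normalized for variance reduction,
    # then the REINFORCE policy-gradient loss: -sum(log_prob * return).
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()
```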
modelId: mjaydenkim/autotrain-gend-ma-classification-2764081811
author: mjaydenkim
last_modified: 2023-01-07T06:50:15Z
downloads: 105
likes: 0
library_name: transformers
tags: ["transformers", "pytorch", "bert", "text-classification", "autotrain", "unk", "dataset:mjaydenkim/autotrain-data-gend-ma-classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"]
pipeline_tag: text-classification
createdAt: 2023-01-07T06:49:34Z
card:
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- mjaydenkim/autotrain-data-gend-ma-classification
co2_eq_emissions:
  emissions: 1.0453005361524643
---

# Model Trained Using AutoTrain

- Problem type: Binary Classification
- Model ID: 2764081811
- CO2 Emissions (in grams): 1.0453

## Validation Metrics

- Loss: 0.418
- Accuracy: 0.839
- Precision: 0.779
- Recall: 0.769
- AUC: 0.884
- F1: 0.774

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/mjaydenkim/autotrain-gend-ma-classification-2764081811
```

Or Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("mjaydenkim/autotrain-gend-ma-classification-2764081811", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mjaydenkim/autotrain-gend-ma-classification-2764081811", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
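The snippet above stops at raw `outputs`; mapping the logits to a predicted label takes one more step. A small follow-up sketch using standard `transformers`/`torch` calls (the label names come from whatever `id2label` this repo's config defines):

```python
import torch

# Convert logits to probabilities, then look up the label in the model config.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```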
modelId: sd99/ppo-LunarLander-v2
author: sd99
last_modified: 2023-01-07T06:44:32Z
downloads: 1
likes: 0
library_name: stable-baselines3
tags: ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"]
pipeline_tag: reinforcement-learning
createdAt: 2023-01-07T06:44:03Z
card:
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 280.48 +/- 16.71
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
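The Usage section is still the template's TODO; one plausible way to fill it in follows the stub's own imports. The checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention, so verify it against the repo's file list.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not confirmed.
checkpoint = load_from_hub(
    repo_id="sd99/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```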
modelId: shriramkv/new_vit
author: shriramkv
last_modified: 2023-01-07T06:16:03Z
downloads: 23
likes: 0
library_name: transformers
tags: ["transformers", "pytorch", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
pipeline_tag: image-classification
createdAt: 2023-01-04T05:46:20Z
card:
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: new_vit
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# new_vit

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
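The auto-generated card ships no usage snippet. Here is a minimal sketch of running the checkpoint through the standard `transformers` image-classification pipeline; since the fine-tuning dataset is unknown, the labels it emits are whatever this repo's config defines.

```python
from transformers import pipeline

# Standard image-classification pipeline; labels come from the repo's config.
classifier = pipeline("image-classification", model="shriramkv/new_vit")
print(classifier("example.jpg"))  # path to any local image file
```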
modelId: Huyen2310/Vi-gec-wer
author: Huyen2310
last_modified: 2023-01-07T06:05:56Z
downloads: 5
likes: 0
library_name: transformers
tags: ["transformers", "pytorch", "t5", "text2text-generation", "hf-asr-leaderboard", "generated_from_trainer", "vi", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text2text-generation
createdAt: 2023-01-07T02:01:18Z
card:
---
language:
- vi
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: HuyenNguyen
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# HuyenNguyen

This model is a fine-tuned version of [Huyen2310/Vi-gec](https://huggingface.co/Huyen2310/Vi-gec) on the FPT dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
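The card gives no inference example. Since the repo is tagged `text2text-generation` on a T5 architecture, a plausible minimal sketch is the standard pipeline call; the expected input format for this Vietnamese grammar-correction model is an assumption, as the card does not document it.

```python
from transformers import pipeline

# Standard text2text pipeline; the input convention is assumed, not documented.
corrector = pipeline("text2text-generation", model="Huyen2310/Vi-gec-wer")
print(corrector("toi di hoc ve")[0]["generated_text"])
```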
modelId: etherealxx/grape-grapefruit
author: etherealxx
last_modified: 2023-01-07T05:57:34Z
downloads: 19
likes: 6
library_name: diffusers
tags: ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:openrail", "region:us"]
pipeline_tag: text-to-image
createdAt: 2023-01-07T05:55:24Z
card:
---
license: openrail
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---

A repost of [this model](https://civitai.com/models/2583/grape-and-grapefruit-hentai-models) by [ikena](https://civitai.com/user/ikena) on CivitAI. Contact me if you are the owner of this model and would rather host it on your own Hugging Face repo.
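The card is only a repost notice; since the weights are in diffusers format, loading plausibly follows the standard `StableDiffusionPipeline` pattern. The prompt, fp16 dtype, and CUDA device below are illustrative choices, not anything the card specifies.

```python
import torch
from diffusers import StableDiffusionPipeline

# Standard diffusers loading; fp16 + CUDA are optional conveniences.
pipe = StableDiffusionPipeline.from_pretrained(
    "etherealxx/grape-grapefruit", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor landscape, detailed").images[0]
image.save("out.png")
```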
modelId: etherealxx/kribomix
author: etherealxx
last_modified: 2023-01-07T05:36:29Z
downloads: 10
likes: 3
library_name: diffusers
tags: ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:openrail", "region:us"]
pipeline_tag: text-to-image
createdAt: 2023-01-07T05:03:15Z
card:
---
license: openrail
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---

A pruned, safetensors version of [this model](https://civitai.com/models/1225/kribomix-nstal) by [kribo](https://civitai.com/user/kribo) on CivitAI. This version stays within the RAM limit on Google Colab. Contact me if you are the owner of this model and would rather host it on your own Hugging Face repo.
modelId: AhmedMagd/Lunar_Lander
author: AhmedMagd
last_modified: 2023-01-07T05:33:02Z
downloads: 0
likes: 0
library_name: stable-baselines3
tags: ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"]
pipeline_tag: reinforcement-learning
createdAt: 2022-12-25T05:07:46Z
card:
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 284.57 +/- 22.88
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
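This card leaves the same TODO; a complementary sketch is loading the checkpoint and scoring it with stable-baselines3's `evaluate_policy`. The filename is again an assumed convention, and depending on your gymnasium version the env id may be `LunarLander-v2` or `LunarLander-v3`.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename assumed from the usual course convention; check the repo's files.
checkpoint = load_from_hub("AhmedMagd/Lunar_Lander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```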
modelId: research-backup/mbart-large-cc25-esquad-ae
author: research-backup
last_modified: 2023-01-07T05:22:29Z
downloads: 106
likes: 0
library_name: transformers
tags: ["transformers", "pytorch", "mbart", "text2text-generation", "answer extraction", "es", "dataset:lmqg/qg_esquad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"]
pipeline_tag: text2text-generation
createdAt: 2023-01-07T05:20:25Z
card:
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: es
datasets:
- lmqg/qg_esquad
pipeline_tag: text2text-generation
tags:
- answer extraction
widget:
- text: "<hl> En la diáspora somalí, múltiples eventos islámicos de recaudación de fondos se llevan a cabo cada año en ciudades como Birmingham, Londres, Toronto y Minneapolis, donde los académicos y profesionales somalíes dan conferencias y responden preguntas de la audiencia. <hl> El propósito de estos eventos es recaudar dinero para nuevas escuelas o universidades en Somalia, para ayudar a los somalíes que han sufrido como consecuencia de inundaciones y / o sequías, o para reunir fondos para la creación de nuevas mezquitas como."
  example_title: "Answering Extraction Example 1"
- text: "<hl> Los estudiosos y los historiadores están divididos en cuanto a qué evento señala el final de la era helenística. <hl> El período helenístico se puede ver que termina con la conquista final del corazón griego por Roma en 146 a. C. tras la guerra aquea, con la derrota final del reino ptolemaico en la batalla de Actium en 31 a. Helenístico se distingue de helénico en que el primero abarca toda la esfera de influencia griega antigua directa, mientras que el segundo se refiere a la propia Grecia."
  example_title: "Answering Extraction Example 2"
model-index:
- name: lmqg/mbart-large-cc25-esquad-ae
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_esquad
      type: default
      args: default
    metrics:
    - name: BLEU4 (Answer Extraction)
      type: bleu4_answer_extraction
      value: 22.26
    - name: ROUGE-L (Answer Extraction)
      type: rouge_l_answer_extraction
      value: 46.63
    - name: METEOR (Answer Extraction)
      type: meteor_answer_extraction
      value: 42.84
    - name: BERTScore (Answer Extraction)
      type: bertscore_answer_extraction
      value: 87.79
    - name: MoverScore (Answer Extraction)
      type: moverscore_answer_extraction
      value: 78.33
    - name: AnswerF1Score (Answer Extraction)
      type: answer_f1_score__answer_extraction
      value: 71.88
    - name: AnswerExactMatch (Answer Extraction)
      type: answer_exact_match_answer_extraction
      value: 53.78
---

# Model Card of `lmqg/mbart-large-cc25-esquad-ae`

This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for answer extraction on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview

- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- **Language:** es
- **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage

- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-):

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="es", model="lmqg/mbart-large-cc25-esquad-ae")

# model prediction
answers = model.generate_a("a noviembre , que es también la estación lluviosa.")
```

- With `transformers`:

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-esquad-ae")
output = pipe("<hl> En la diáspora somalí, múltiples eventos islámicos de recaudación de fondos se llevan a cabo cada año en ciudades como Birmingham, Londres, Toronto y Minneapolis, donde los académicos y profesionales somalíes dan conferencias y responden preguntas de la audiencia. <hl> El propósito de estos eventos es recaudar dinero para nuevas escuelas o universidades en Somalia, para ayudar a los somalíes que han sufrido como consecuencia de inundaciones y / o sequías, o para reunir fondos para la creación de nuevas mezquitas como.")
```

## Evaluation

- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-esquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_esquad.default.json)

|                  |   Score | Type    | Dataset                                                          |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch |   53.78 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| AnswerF1Score    |   71.88 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| BERTScore        |   87.79 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_1           |   33    | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_2           |   28.48 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_3           |   25.1  | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_4           |   22.26 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| METEOR           |   42.84 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| MoverScore       |   78.33 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| ROUGE_L          |   46.63 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |

## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_esquad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['answer']
- prefix_types: None
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 13
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-esquad-ae/raw/main/trainer_config.json).
## Citation

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
modelId: yuhuizhang/finetuned_gpt2_sst2_negation0.05
author: yuhuizhang
last_modified: 2023-01-07T04:44:33Z
downloads: 41
likes: 1
library_name: transformers
tags: ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:sst2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-generation
createdAt: 2023-01-07T04:32:33Z
card:
---
license: mit
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_gpt2_sst2_negation0.05
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_gpt2_sst2_negation0.05

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5271

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1134        | 1.0   | 1062 | 3.5060          |
| 2.926         | 2.0   | 2124 | 3.5158          |
| 2.8331        | 3.0   | 3186 | 3.5271          |

### Framework versions

- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
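The card covers training only; as a fine-tuned GPT-2 the checkpoint loads through the standard text-generation pipeline. The prompt and sampling settings below are illustrative.

```python
from transformers import pipeline

# Standard GPT-2 text generation; sampling settings are illustrative defaults.
generator = pipeline("text-generation", model="yuhuizhang/finetuned_gpt2_sst2_negation0.05")
out = generator("The movie was", max_new_tokens=20, do_sample=True)
print(out[0]["generated_text"])
```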
modelId: asapp/e_branchformer_librispeech
author: asapp
last_modified: 2023-01-07T03:26:03Z
downloads: 8
likes: 2
library_name: espnet
tags: ["espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:2210.00077", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"]
pipeline_tag: automatic-speech-recognition
createdAt: 2023-01-07T02:35:18Z
card:
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---

## ESPnet2 ASR model

### `asapp/e_branchformer_librispeech`

This model was trained by Kwangyoun Kim using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).

References:
- [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)

### Demo: How to use in ESPnet2

Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already.

```bash
cd espnet
git checkout 7a203d55543df02f0369d5608cd6f3033119a135
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model asapp/e_branchformer_librispeech
```

<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Jan 2 12:59:49 UTC 2023`
- python version: `3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.10.1`
- Git hash: `7a203d55543df02f0369d5608cd6f3033119a135`
- Commit date: `Fri Dec 23 00:58:49 2022 +0000`

## asr_train_asr_e_branchformer_raw_en_bpe5000_sp

### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|54402|98.2|1.6|0.2|0.2|2.0|26.3|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|50948|95.8|3.8|0.3|0.4|4.6|40.6|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|52576|98.1|1.8|0.2|0.2|2.2|26.6|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|52343|95.9|3.7|0.4|0.5|4.6|42.0|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_clean|2703|54402|98.5|1.3|0.2|0.2|1.6|22.5|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_other|2864|50948|96.7|3.0|0.3|0.3|3.7|34.7|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_clean|2620|52576|98.4|1.5|0.2|0.2|1.9|23.1|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_other|2939|52343|96.7|2.9|0.4|0.4|3.7|37.1|

### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|288456|99.6|0.2|0.2|0.2|0.6|26.3|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|265951|98.6|0.9|0.5|0.5|1.9|40.6|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|281530|99.5|0.2|0.2|0.2|0.7|26.6|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|272758|98.7|0.8|0.5|0.5|1.8|42.0|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_clean|2703|288456|99.6|0.2|0.2|0.2|0.6|22.5|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_other|2864|265951|98.7|0.7|0.6|0.4|1.7|34.7|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_clean|2620|281530|99.5|0.2|0.2|0.2|0.6|23.1|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_other|2939|272758|98.8|0.6|0.6|0.4|1.6|37.1|

### TER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|68010|97.8|1.6|0.6|0.3|2.6|26.3|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|63110|94.9|3.9|1.2|0.8|5.9|40.6|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|65818|97.6|1.7|0.7|0.3|2.7|26.6|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|65101|95.0|3.6|1.4|0.7|5.7|42.0|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_clean|2703|68010|98.1|1.3|0.6|0.3|2.1|22.5|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_other|2864|63110|95.6|3.1|1.3|0.6|5.0|34.7|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_clean|2620|65818|97.8|1.4|0.8|0.3|2.5|23.1|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_other|2939|65101|95.8|2.8|1.5|0.5|4.7|37.1|

## ASR config

<details><summary>expand</summary>

```
config: conf/tuning/train_asr_e_branchformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_e_branchformer_raw_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 49667
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 80
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - acc
  - max
keep_nbest_models: 10
nbest_averaging_interval: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 140000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960_sp/wav.scp
  - speech
  - sound
- - dump/raw/train_960_sp/text
  - text
  - text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
  - speech
  - sound
- - dump/raw/dev/text
  - text
  - text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 0.002
    weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
    warmup_steps: 40000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ▁I
# ... the remainder of the 5,000-entry BPE vocabulary (bpe5000) and the
# rest of the config are truncated in this dump.
```

</details>
- ▁OBSERVING - ▁OBSTACLE - ▁SIMPLICITY - ▁SLUMBER - ▁SUPPLIED - ▁COMBINATION - ▁DRAIN - ▁WILDERNESS - ▁BELIEVING - ▁VILLAIN - ▁RECKLESS - ▁INJURY - ▁CLAPP - ▁FRIDAY - ▁HERCULES - ▁KENNEDY - ▁SYMPTOM - ▁SLEDGE - ▁CEILING - ▁LEMON - ▁PLAGUE - ▁MONDAY - ▁CANVAS - ▁IMPATIENCE - ▁UNCOMFORTABLE - ▁ACCESS - ▁FROZEN - ▁SENATOR - ▁FRANZ - ▁SWIMMING - ▁BARRIER - ▁ADJUST - ▁COMPARISON - ▁PROCLAIM - ▁WRINKL - ▁OVERLOOK - ▁MITYA - ▁GUILT - ▁PERCEPTION - ▁PRECAUTION - ▁SPECTATOR - ▁SURPRISING - ▁DISTRACT - ▁DISDAIN - ▁BONNET - ▁MAGNET - ▁PROFESS - ▁CONFOUND - ▁NARRATIVE - ▁STRUCTURE - ▁SKETCH - ▁ULTIMATE - ▁GLOBE - ▁INSECT - FICIENCY - ▁ORCHARD - ▁AMIABLE - ▁DESCENT - ▁INDEPENDENCE - ▁MANUFACTURE - ▁SPRINKLE - ▁NIGHTINGALE - ▁CUSHION - ▁EMINENT - ▁SCOTT - ▁ARRAY - ▁COSETTE - ▁WAVING - ▁EXTRACT - ▁IRREGULAR - ▁PERSECUT - ▁DERIVED - ▁WITHDREW - ▁CAUTION - ▁SUSPICIOUS - ▁MEMORIES - ▁NOWHERE - ▁SUBTLE - ▁THOROUGH - Q - ▁APPROPRIATE - ▁SLAUGHTER - ▁YOURSELVES - ▁THUMB - ▁TWAS - ▁ABODE - ▁BIDDING - ▁CONSPICUOUS - ▁REBECCA - ▁SERGEANT - ▁APRON - ▁ANTICIPATE - ▁DISCIPLINE - ▁GLANCING - ▁PILGRIM - ▁SULLEN - ▁CONTRIBUTE - ▁PRAIRIE - ▁CARVED - ▁COMMERCE - ▁EXCLAMATION - ▁MUSCULAR - ▁NOVEMBER - ▁PHENOMENA - ▁SYMBOL - ▁UMBRELLA - ▁DIMINISH - ▁PARLOUR - ▁THREATENING - ▁STUMP - ▁EXTENSIVE - ▁PLEASING - ▁REMEMBRANCE - ▁COMBINED - ▁SHERIFF - ▁SHAFT - ▁LAURA - ▁INTERCOURSE - ▁STRICKEN - ▁SUPPLIES - ▁LANDLORD - ▁SHRINK - ▁PRICK - ▁CAESAR - ▁DRUG - ▁BEWILDERED - ▁NAUTILUS - ▁BRUTAL - ▁COMMERCIAL - ▁MAGGIE - ▁SPHERE - ▁VIRGIN - ▁BRETHREN - ▁DESTINY - ▁POLICY - ▁TERRIFIED - ▁HOUSEKEEPER - ▁CRAZY - ▁ARDENT - ▁DISCERN - ▁WRAP - ▁MARQUIS - ▁RUSSIA - MOUTH - ▁BRITAIN - ▁HARBOUR - ▁CONCERT - ▁DONKEY - ▁DAMAGE - ▁SLIM - ABOUT - ▁LUXURY - ▁MONSTROUS - ▁TENDENCY - ▁PARADISE - ▁CULTURE - ▁JULIUS - ▁RAOUL - ▁REMEDY - ▁DECAY - ▁SCOLD - ▁SPLIT - ▁ASSAULT - ▁DECEMBER - ▁MOSCOW - ▁EXPLORE - ▁TROUSERS - ▁WRIST - PIECE - ▁MUSKET - ▁VALENTINE - ▁TYRANT - ▁ABRAHAM - ▁MEDIUM - ▁ARTIFICIAL - ▁FACULTY - ▁OBLIGATION - ▁RESEMBLANCE - ▁INQUIRIES - ▁DETAIN - ▁SWARM - ▁PLEDGE - ▁ADMIRABLE - ▁DEFECT - ▁SUPERINTEND - ▁PATRIOT - ▁CLUNG - ▁DISMAL - ▁RECIT - ▁IGNOR - ▁AMELIA - ▁JUSTIFY - ▁ELEPHANT - ▁ESTIMATE - ▁KNELT - ▁SERVING - ▁WHIM - ▁SHRILL - ▁STUDIO - ▁TEXT - ▁ALEXANDER - ▁WROUGHT - ▁ABUNDANT - ▁SITUATED - ▁REGAIN - ▁FIERY - ▁SNEER - ▁SWEAT - ▁GLARE - ▁NIGH - ▁ESCORT - ▁INEVITABLE - ▁PSMITH - ▁RELUCTANT - ▁PRECEDING - ▁RESORT - ▁OUTRAGE - ▁AMBASSADOR - ▁CONSOLATION - ▁RECOGNITION - ▁REMORSE - ▁BEHALF - ▁FORMIDABLE - ▁GRAVITY - ▁DIVIDE - ▁CONFRONT - ▁GIGANTIC - ▁OCTOBER - ▁FLANK - ▁SLEW - ▁CLARA - ▁FILM - ▁BULK - ▁POMP - ▁ELEANOR - ▁EMPHASIS - ▁JAPANESE - ▁CAVALRY - ▁EXCLUSIVE - ▁PERFUME - ▁BRONZE - ▁FEDERAL - ▁LIQUID - ▁RUBBING - ▁OVEN - DOLPH - ▁CONVULS - ▁DEPRIVED - ▁RESPONSIBILITY - ▁SIGNIFICANT - ▁WAISTCOAT - ▁CLUSTER - ▁MARTHA - ▁REVERSE - ▁ATTORNEY - ▁DROOP - ▁SKILFUL - ▁HABITUAL - ▁PUMP - ▁INTERVEN - ▁OWL - ▁CONJECTURE - ▁FANTASTIC - ▁RESPONSIBLE - ▁DESTINED - ▁DOCUMENT - ▁THEREUPON - ▁GODDESS - ▁PACIFIC - ▁WARRANT - ▁COSTUME - ▁BRIDLE - ▁CALIFORNIA - ▁DEMOCRATIC - ▁EUSTACE - ▁SQUIRREL - ▁UNCOMMON - ▁MARVELLOUS - ▁PLOUGH - ▁TRAGEDY - ▁VAULT - ▁HESITATE - ▁REFRAIN - ▁ADMIRING - ▁CORPORAL - ▁ENTITLED - ▁SHREWD - ▁SQUEEZ - ▁ACCURATE - ▁TEMPEST - ▁MONUMENT - ▁SIEGE - ▁CHINESE - ▁RAVEN - ▁LOUNG - ▁ASSASSIN - ▁INFLICT - ▁AGITATED - ▁DESIRABLE - ▁EARLIEST - ▁LAUNCH - ▁PILOT - ▁PULSE - ▁MUTE - LEIGH - ▁LIQUOR - ▁SCARECROW - ▁SKULL - ▁DESOLATE - ▁SUBLIME - ▁SERENE - ▁RECESS - ▁WAKING - ▁CHARLOTTE - ▁CIRCULAR - ▁INJUSTICE - ▁PINOCCHIO - ▁PRISCILLA - 
▁THYSELF - ▁OCCURRENCE - ▁CASUAL - ▁FRANTIC - ▁LEGEND - ▁FERTIL - ▁BACKGROUND - ▁DELICACY - ▁ESTRALLA - ▁MANUSCRIPT - ▁RESPONSE - ▁UNIVERSITY - ▁WOLVES - ▁SCANDAL - ▁STUMBLE - ▁HOARSE - ▁BODILY - ▁CONVENT - ▁EXAMINING - ▁INCAPABLE - ▁PERCEIVING - ▁PHILADELPHIA - ▁SUBSEQUENT - ▁THIEVES - ▁ACCUMULAT - ▁DAMSEL - ▁SCOTCH - ▁UNDERNEATH - ▁NOBILITY - ▁SMASH - ▁REVOLT - ▁ENGAGE - ▁CATHEDRAL - ▁CHAMPION - ▁DESPATCH - ▁ETERNITY - ▁JANUARY - ▁PLEADED - ▁PROBABILITY - ▁JIMMIE - ▁PARALLEL - ▁FISHERMAN - ▁JERRY - ▁SWORE - ▁DRAUGHT - ▁OPPONENT - ▁PRIMITIVE - ▁SIGNIFICANCE - ▁SUBSTANTIAL - ▁AMAZED - ▁DUNBAR - ▁COMMEND - ▁CONTEMPLATE - ▁TESTIMONY - ▁IMPERIAL - ▁ADAPT - ▁JUICE - ▁CALAMIT - CULAR - ▁CHATEAU - ▁PHOENIX - ▁PRUDENT - ▁SOLUTION - ▁VILLEFORT - ▁REACTION - ▁RELAX - ▁YU - ▁PROHIBIT - ▁DISTRUST - ▁PLUNDER - ▁WELFARE - ▁NAVIGAT - ▁PARLOR - ▁LAZY - ▁DETACH - OMETER - ▁PRIV - ▁DISCOURAGE - ▁OBSTINATE - ▁REJOICING - ▁SERMON - ▁VEHICLE - ▁FANCIES - ▁ENLIGHTEN - ▁ACUTE - ▁ILLUSION - ▁ANTHEA - ▁MARTIAN - ▁EXCITE - ▁GENEROSITY - OLOGIST - ▁AMAZING - ▁UNWORTHY - ▁INTERNAL - ▁INCENSE - ▁VIBRAT - ▁ADHERE - ROACH - ▁FEBRUARY - ▁MEXICAN - ▁POTATOES - ▁INCESSANT - ▁INTERPOSED - ▁PARCEL - ▁VEXED - ▁PROMOTE - MIDST - ▁ARISTOCRAT - ▁CYRIL - ▁EMBARK - ▁ABUNDANCE - ▁LITERALLY - ▁SURGEON - ▁TERRACE - ▁ATLANTIC - ▁MARTYR - ▁SPECK - ▁SENATE - ▁LOAF - ▁ADMINISTER - ▁APPREHEND - ▁SUBDUED - ▁TEMPORARY - ▁DOMINION - ▁ELABORATE - ▁DIGNIFIED - ▁ELIZA - ▁SPLASH - ▁CONSEIL - ▁DEXTER - ▁UNSEEN - ▁TRAGIC - VOCATION - ▁GRATIFY - ▁BACHELOR - ▁DEFENSE - ▁EXCURSION - ▁FACULTIES - ▁PROPRIETOR - ▁SYMPATHETIC - ▁UNNECESSARY - ▁RADIANT - ▁VACANT - ▁OUNCE - ▁SCREW - ▁PHENOMENON - ▁PROMINENT - ▁WORRIED - ▁STUDIES - ▁CLIMATE - ▁KEITH - ▁ARAMIS - ▁BLISS - ▁CONTINUAL - ▁SURPASS - ▁HEBREW - ▁IDENTITY - ▁PROVOKE - ▁TEMPERAMENT - ▁CHARIOT - ▁HARBOR - ▁NINTH - ▁PRIOR - ▁DESIROUS - ▁JERUSALEM - ▁UNDERTAKING - ▁EDISON - ▁MIRTH - ▁SCOUT - ▁APPARATUS - ▁ILLUSTRATION - ▁INTELLIGIBLE - ▁INVARIABLY - ▁PIERCED - ▁REVIEW - ▁FLICKER - ▁HAZARD - ▁REVELATION - ▁DIXON - ▁EXCITING - ▁GOSPEL - ▁CONSTANCE - ▁OVERTAKE - ▁GUINEA - ▁ALADDIN - ▁CHICAGO - ▁TULLIVER - ▁HAMILTON - ▁GARRISON - ▁DISCIPLE - ▁INTENSITY - ▁TRAITOR - ▁CHANCELLOR - ▁PROVERB - ▁DAGGER - ▁FORESEE - ▁CONFIDE - ▁GLIMMER - ▁CHAUVELIN - ▁ILLUSTRATE - ▁VOLUNTEER - ▁JUNGLE - ▁STREAK - ▁SUNRISE - ▁DISSOLV - ▁QUEST - ▁AWHILE - ▁FELICITY - ▁LEGISLATURE - ▁LEONORA - ▁MAGAZINE - ▁PITIFUL - ▁COLONY - ▁SHAWL - ▁ARRIVING - ▁FUNDAMENTAL - ▁CARPENTER - ▁OVERFLOW - ▁EXPAND - ▁HARVEST - ▁FEMININE - ▁INNUMERABLE - ▁SCRAMBLE - ▁TWENTIETH - ▁TRIFLING - ▁GHASTL - ▁CONQUEST - ▁DANIEL - ▁FACILIT - ▁FORSAKE - ▁BEHAVIOUR - ▁GORGEOUS - ▁PRODUCING - ▁HAPPIER - ▁PROMISING - ▁RAINBOW - ▁INSTINCTIVELY - ▁DECREE - ▁EYEBROWS - ▁IRRESISTIBLE - ▁PHARAOH - ▁SCROOGE - ▁UNNATURAL - ▁CRUMBS - ▁REFINED - ▁DREARY - ▁TRENCH - ▁CONVINCE - ▁FRINGE - ▁EXTREMITY - ▁INTIMACY - ▁SCOUNDREL - ▁SUFFRAGE - ▁UNEASINESS - ▁BARRICADE - ▁CIRCULAT - ▁SAMUEL - ▁BRUCE - ▁DARCY - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram5000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: n_fft: 512 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true 
time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 10 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: e_branchformer encoder_conf: output_size: 512 attention_heads: 8 attention_layer_type: rel_selfattn pos_enc_layer_type: rel_pos rel_pos_type: latest cgmlp_linear_units: 3072 cgmlp_conv_kernel: 31 use_linear_after_conv: false gate_activation: identity num_blocks: 17 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d layer_drop_rate: 0.1 linear_units: 1024 positionwise_layer_type: linear macaron_ffn: true use_ffn: true merge_conv_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 layer_drop_rate: 0.2 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202211' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
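For inference with this checkpoint, a minimal sketch using ESPnet's `Speech2Text` API; the repo id, audio path, and beam size are assumptions, while the CTC weight mirrors `ctc_weight: 0.3` in the config above.

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Assumption: replace the repo id below with this model's actual Hub id.
speech2text = Speech2Text.from_pretrained(
    "espnet/<this-model-id>",
    ctc_weight=0.3,  # matches model_conf.ctc_weight above
    beam_size=10,    # assumption: a common decoding default
)

# 16 kHz mono audio, matching `fs: 16k` in the frontend config above.
speech, rate = soundfile.read("sample.wav")
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```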
joelniklaus/legal-german-roberta-base
joelniklaus
2023-01-07T03:24:28Z
7
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-12-27T22:21:19Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: legal-german-roberta-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # legal-german-roberta-base This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7080 - Accuracy: 0.8387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1024 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - training_steps: 1000000 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:-------:|:--------:|:---------------:| | 2.1008 | 0.05 | 50000 | 0.6533 | 2.0523 | | 1.5248 | 0.1 | 100000 | 0.7661 | 1.1575 | | 1.3152 | 0.15 | 150000 | 0.7674 | 1.1281 | | 1.1239 | 0.2 | 200000 | 0.7971 | 0.9458 | | 0.9472 | 0.25 | 250000 | 0.7876 | 0.9979 | | 0.961 | 0.3 | 300000 | 0.8075 | 0.8798 | | 1.0179 | 0.35 | 350000 | 0.8018 | 0.9102 | | 1.037 | 0.4 | 400000 | 0.8195 | 0.8107 | | 1.1206 | 0.45 | 450000 | 0.8152 | 0.8323 | | 1.0865 | 0.5 | 500000 | 0.8242 | 0.7829 | | 0.9616 | 0.55 | 550000 | 0.8224 | 0.7895 | | 0.7727 | 0.6 | 600000 | 0.8285 | 0.7585 | | 0.9871 | 1.04 | 650000 | 0.8320 | 0.7391 | | 1.0679 | 1.09 | 700000 | 0.8311 | 0.7436 | | 0.9203 | 1.14 | 750000 | 0.8355 | 0.7187 | | 0.9626 | 1.19 | 800000 | 0.8353 | 0.7242 | | 0.7263 | 1.24 | 850000 | 0.8378 | 0.7094 | | 0.8578 | 1.29 | 900000 | 0.8368 | 0.7140 | | 0.7693 | 1.34 | 950000 | 0.8377 | 0.7091 | | 1.0488 | 1.39 | 1000000 | 0.8387 | 0.7080 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cu113 - Datasets 2.8.0 - Tokenizers 0.12.1
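A quick way to sanity-check the checkpoint is the fill-mask pipeline; a minimal sketch, assuming the tokenizer uses RoBERTa's default `<mask>` token:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="joelniklaus/legal-german-roberta-base")

# German legal phrase with the word "null" masked out ("null and void").
for pred in fill_mask("Der Vertrag ist <mask> und nichtig."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```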
dacquaviva/bongodog-dog
dacquaviva
2023-01-07T02:55:39Z
30
4
diffusers
[ "diffusers", "pytorch", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "animal", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-07T01:40:00Z
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - animal widget: - text: a photo of bongodog dog in the Acropolis --- # DreamBooth model for the bongodog concept trained by dacquaviva on the dacquaviva/bongodog dataset. This is a Stable Diffusion model fine-tuned on the bongodog concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of bongodog dog** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on my `dog` images for the animal theme. His name is Bongo :). ## Photo of my dog Bongo: <img src="https://drive.google.com/uc?export=view&id=1m5heLYYzQIxDeyNoxxtB6X7bwNhxqG9v" alt="bongodog" width="200"/> <img src="https://drive.google.com/uc?export=view&id=1nP3JqAYEZSlTFAgduhhFC6S7XYKo8Nz9" alt="bongodog" width="200"/> ## Examples of generated images: <img src="https://drive.google.com/uc?export=view&id=1DaJUXJP2nQy0_TVQpf3QAaJd6rBfbOlo" alt="bongodog" width="200"/> <img src="https://drive.google.com/uc?export=view&id=1ybeN5vg0OYuSalOenQX8AB8yaYBRW3O0" alt="bongodog" width="200"/> <img src="https://drive.google.com/uc?export=view&id=1-HqsSQpuPIh8Y8C0kD92mPs_Rd68cPik" alt="bongodog" width="200"/> <img src="https://drive.google.com/uc?export=view&id=1JvUDQuTC0oaZiFPKQXUfUsHfhr8LLpCw" alt="bongodog" width="200"/> ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('dacquaviva/bongodog-dog') image = pipeline().images[0] image ```
Eppinette/Buttonmix
Eppinette
2023-01-07T02:47:17Z
0
2
null
[ "region:us" ]
null
2022-12-22T17:30:44Z
--- language: - en tags: - stable-diffusion - text-to-image license: creativeml-openrail-m --- This is a complicated mix, not recommended for beginners. Activation words: jennao, jenna ortega, wednesday, kuvshinov, m_robutts, m_wlop, wlop, m_guweiz, m_ouroboros, style, artstyle, illustration style. ## Buttonmix / mixed model This model was made by mixing a large variety of models (jenna ortega, jennao, kuvshinov, robutts-any, wlop, wlop any, guweiz, ouroboros V3, diffmix). ## Recommended Settings & Usage Any sampler works. Have fun :) ## Example Picture from buttonmix <table> <tr> <td><img src=https://i.imgur.com/Sg5nc51.png width=100% height=100%/></td> </tr> </table>
cleanrl/Frostbite-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1
cleanrl
2023-01-07T02:34:50Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Frostbite-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-07T02:34:46Z
--- tags: - Frostbite-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Frostbite-v5 type: Frostbite-v5 metrics: - type: mean_reward value: 270.00 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Frostbite-v5** This is a trained model of a PPO agent playing Frostbite-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]" python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Frostbite-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Frostbite-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py curl -OL https://huggingface.co/cleanrl/Frostbite-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Frostbite-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock poetry install --all-extras python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 1 ``` # Hyperparameters ```python {'anneal_lr': True, 'async_batch_size': 16, 'batch_size': 2048, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Frostbite-v5', 'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado', 'gae': True, 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1024, 'norm_adv': True, 'num_envs': 64, 'num_minibatches': 2, 'num_steps': 32, 'num_updates': 24414, 'save_model': True, 'seed': 1, 'target_kl': None, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 2, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'envpool-atari'} ```
jefsnacker/lunar-surface
jefsnacker
2023-01-07T02:34:42Z
32
0
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-07T02:32:09Z
--- license: creativeml-openrail-m tags: - text-to-image --- ### lunar-surface on Stable Diffusion via Dreambooth #### model by jefsnacker This is the Stable Diffusion model fine-tuned on the lunar-surface concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **leva dune landscape** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_08.jpg) ![image 1](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_16.jpg) ![image 2](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_09.jpg) ![image 3](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_22.jpg) ![image 4](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_02.jpg) ![image 5](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_10.jpg) ![image 6](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_05.jpg) ![image 7](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_20.jpg) ![image 8](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_13.jpg) ![image 9](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_23.jpg) ![image 10](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_18.jpg) ![image 11](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_19.jpg) ![image 12](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_15.jpg) ![image 13](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_01.jpg) ![image 14](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_07.jpg) ![image 15](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_12.jpg) ![image 16](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_14.jpg) ![image 17](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_03.jpg) ![image 18](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_06.jpg) ![image 19](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_17.jpg) ![image 20](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_21.jpg) ![image 21](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_00.jpg) ![image 22](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_04.jpg) ![image 23](https://huggingface.co/jefsnacker/lunar-surface/resolve/main/concept_images/moon_11.jpg)
Eppinette/Tim_Burton_Stop-Motion
Eppinette
2023-01-07T02:09:02Z
0
4
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-12-16T00:13:41Z
--- license: creativeml-openrail-m --- Experimental Tim Burton style based on Frankenweenie and Corpse Bride. Txt2img is hit or miss; img2img using loopback at 0.3-0.4 denoise is the recommended use for this model. Key words are: tim burton artstyle
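For use outside the webui, a rough `diffusers` equivalent of the recommended img2img pass; this is a sketch only, and it assumes the `.ckpt` has first been converted to diffusers format at a local path:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumption: the .ckpt from this repo, converted to diffusers format locally.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "./tim-burton-diffusers", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("portrait.png").convert("RGB").resize((512, 512))

# strength 0.3-0.4 mirrors the recommended loopback denoise; feed the output
# back in as the next init image to emulate the webui loopback script.
image = pipe(
    prompt="tim burton artstyle, portrait",
    image=init_image,
    strength=0.35,
).images[0]
image.save("out.png")
```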
Eppinette/Cyberware
Eppinette
2023-01-07T02:08:55Z
0
47
null
[ "stable-diffusion", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
2022-10-20T15:59:29Z
--- language: - en tags: - stable-diffusion - text-to-image license: mit --- # Cyberware Conceptual Model / Dreambooth Training To use these models you have to download the .ckpt file and drop it into the "\stable-diffusion-webui\models\Stable-diffusion" folder. For best results specify "mechanical 'body part or object'" or simply "mechanical parts". To increase the strength put "cyberware style" in () brackets. To decrease the strength put "cyberware style" in [] brackets. ## Cyberware V3 Use token word "m_cyberware" and class word "style" in prompt to activate. Medium image set trained on AnythingV3 base model to 7,000 steps. Example images <table> <tr> <td><img src=https://i.imgur.com/x1wisN9.png width=100% height=200%/></td> <td><img src=https://i.imgur.com/Gr6asRA.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/6Acz8aP.png width=100% height=100%/></td> </tr> </table> ## Cyberware V2 Use token word "m_cyberware" and class word "style" in prompt to activate. Small image set trained on Trinart_character base model to 4,000 steps. Example images <table> <tr> <td><img src=https://i.imgur.com/A4G7I6x.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/YSAZr2y.png width=100% height=100%/></td> </tr> </table> ## Cyberware_V1 Use token word "Cyberware" and class word "style" in prompt to activate. Small image set trained on Waifu_Diffusion v1.3 base model to 6,000 steps. Example images <table> <tr> <td><img src=https://i.imgur.com/qu7CmjG.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/mhHXG4n.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/BC3Lh8d.png width=100% height=100%/></td> </tr> </table> Have fun :)
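Outside the webui, a minimal text-to-image sketch with `diffusers`; the local path is an assumption (the `.ckpt` must be converted first), and note that the `()`/`[]` emphasis syntax above is a webui feature, so plain diffusers prompts carry no extra weighting:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the Cyberware .ckpt, converted to diffusers format locally.
pipe = StableDiffusionPipeline.from_pretrained(
    "./cyberware-diffusers", torch_dtype=torch.float16
).to("cuda")

image = pipe("cyberware style, mechanical arm, mechanical parts").images[0]
image.save("cyberware.png")
```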
Eppinette/soft_brush_model
Eppinette
2023-01-07T02:08:48Z
0
5
null
[ "stable-diffusion", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
2022-11-10T07:16:46Z
--- language: - en tags: - stable-diffusion - text-to-image license: mit --- # Soft Brush Style Model / Dreambooth Training This model is trained entirely on a collection of similar-style images from varying sources. # Use To use this model you have to download the .ckpt file and drop it into the "\stable-diffusion-webui\models\Stable-diffusion" folder. To use it in a prompt: ```"m_sb style"``` for highest strength, or just "m_sb". To increase the strength put "m_sb style" in () brackets. To decrease the strength put "m_sb style" in [] brackets. Trained on the Waifu Diffusion base model to 15,000 steps. Have fun :) ## Txt2img Example Pictures from Soft_brush <table> <tr> <td><img src=https://i.imgur.com/7QmMnlN.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/ORD35Gt.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/HUhvSF6.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/NGud9La.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/wWBYJ2W.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/u8PlDbS.png width=100% height=100%/></td> </tr> </table> ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: - You can't use the model to deliberately produce nor share illegal or harmful outputs or content. - The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license. - You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here.
Eppinette/Mona
Eppinette
2023-01-07T02:08:45Z
0
6
null
[ "stable-diffusion", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
2022-10-20T07:59:26Z
--- language: - en tags: - stable-diffusion - text-to-image license: mit --- # Mona Subject Model / Dreambooth Training ## Usage To use this model you have to download the .ckpt file and drop it into the "\stable-diffusion-webui\models\Stable-diffusion" folder. To use it in a prompt: ```"Mona woman"``` for highest strength, or just "Mona". To increase the strength put "Mona woman" in () brackets. To decrease the strength put "Mona woman" in [] brackets. Trained on the Waifu Diffusion base model to 4,000 steps. Have fun :) ## Example Pictures from Mona_4k <table> <tr> <td><img src=https://i.imgur.com/acDDsQZ.png width=150% height=150%/></td> <td><img src=https://i.imgur.com/15PnKDf.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/PWxazM1.png width=150% height=150%/></td> </tr> </table>
Eppinette/Rebecca_Edgerunners
Eppinette
2023-01-07T02:08:40Z
0
3
null
[ "stable-diffusion", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
2022-10-18T03:25:07Z
--- language: - en tags: - stable-diffusion - text-to-image license: mit --- # Rebecca Subject Model / Dreambooth Training ## Usage To use this model you have to download the .ckpt file and drop it into the "\stable-diffusion-webui\models\Stable-diffusion" folder. To use it in a prompt: ```"Rebecca girl"``` for highest strength, or just "Rebecca". To increase the strength put "Rebecca girl" in () brackets. To decrease the strength put "Rebecca girl" in [] brackets. Trained on the Waifu Diffusion base model to 3,500 steps. Have fun :) ## Example Pictures from Rebecca_3.5k <table> <tr> <td><img src=https://i.imgur.com/h9milQd.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/3Uxe6Bi.png width=100% height=100%/></td> <td><img src=https://i.imgur.com/FHczkJj.png width=100% height=100%/></td> </tr> </table>
huggingtweets/ant_philosophy
huggingtweets
2023-01-07T00:37:29Z
106
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-01-07T00:37:22Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1536768271038521345/eW2H3P2X_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">The Ant Philosophy</div> <div style="text-align: center; font-size: 14px;">@ant_philosophy</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from The Ant Philosophy. | Data | The Ant Philosophy | | --- | --- | | Tweets downloaded | 575 | | Retweets | 52 | | Short tweets | 6 | | Tweets kept | 517 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rkacufh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ant_philosophy's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23ou2jp3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23ou2jp3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ant_philosophy') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/ant_philosophy-philosophy_dq-wise_chimp
huggingtweets
2023-01-07T00:27:58Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-01-07T00:25:31Z
--- language: en thumbnail: http://www.huggingtweets.com/ant_philosophy-philosophy_dq-wise_chimp/1673051273183/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1346208413596921864/fGYV6EpP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1452223617127915526/RW9skok-_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1536768271038521345/eW2H3P2X_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Wise Chimp & Philosophy Thoughts & The Ant Philosophy</div> <div style="text-align: center; font-size: 14px;">@ant_philosophy-philosophy_dq-wise_chimp</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Wise Chimp & Philosophy Thoughts & The Ant Philosophy. | Data | Wise Chimp | Philosophy Thoughts | The Ant Philosophy | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3232 | 575 | | Retweets | 40 | 24 | 52 | | Short tweets | 46 | 34 | 6 | | Tweets kept | 3164 | 3174 | 517 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ksfa3pd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ant_philosophy-philosophy_dq-wise_chimp's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3q1t0116) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3q1t0116/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ant_philosophy-philosophy_dq-wise_chimp') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AmenAllah/finetuned-ner-offer
AmenAllah
2023-01-07T00:27:17Z
10
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:data_set", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-07T00:10:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - data_set metrics: - precision - recall - f1 - accuracy model-index: - name: finetuned-ner-offer results: - task: name: Token Classification type: token-classification dataset: name: data_set type: data_set config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.2254335260115607 - name: Recall type: recall value: 0.24528301886792453 - name: F1 type: f1 value: 0.23493975903614459 - name: Accuracy type: accuracy value: 0.9209345919528688 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-ner-offer This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the data_set dataset. It achieves the following results on the evaluation set: - Loss: 0.3242 - Precision: 0.2254 - Recall: 0.2453 - F1: 0.2349 - Accuracy: 0.9209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 100 | 0.3344 | 0.2162 | 0.2013 | 0.2085 | 0.9217 | | No log | 2.0 | 200 | 0.3167 | 0.1804 | 0.2201 | 0.1983 | 0.9204 | | No log | 3.0 | 300 | 0.3242 | 0.2254 | 0.2453 | 0.2349 | 0.9209 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
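Until the usage sections above are filled in, a minimal inference sketch; the example sentence is invented, and the entity labels come from whatever the fine-tuning dataset defined:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="AmenAllah/finetuned-ner-offer",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

for entity in ner("Offer: 20% off all laptops at TechStore until Friday."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```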
rmathur/ppo-LunarLander-v2
rmathur
2023-01-07T00:12:21Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-07T00:11:56Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 216.03 +/- 71.81 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AmenAllah/bert-finetuned-ner
AmenAllah
2023-01-07T00:07:38Z
10
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:data_set", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-12-24T19:19:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - data_set metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: data_set type: data_set config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.2080536912751678 - name: Recall type: recall value: 0.1949685534591195 - name: F1 type: f1 value: 0.20129870129870134 - name: Accuracy type: accuracy value: 0.9193947914574546 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the data_set dataset. It achieves the following results on the evaluation set: - Loss: 0.3395 - Precision: 0.2081 - Recall: 0.1950 - F1: 0.2013 - Accuracy: 0.9194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 100 | 0.3796 | 0.125 | 0.0755 | 0.0941 | 0.9152 | | No log | 2.0 | 200 | 0.3512 | 0.2131 | 0.1635 | 0.1851 | 0.9208 | | No log | 3.0 | 300 | 0.3395 | 0.2081 | 0.1950 | 0.2013 | 0.9194 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
theblackcat102/electra-large-reward-model
theblackcat102
2023-01-06T23:47:26Z
13
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "webgpt", "regression", "reward-model", "en", "dataset:openai/webgpt_comparisons", "dataset:openai/summarize_from_feedback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-01T07:07:55Z
--- language: - en tags: - webgpt - regression - reward-model license: apache-2.0 datasets: - openai/webgpt_comparisons - openai/summarize_from_feedback metrics: - accuracy --- Reward model trained on openai/webgpt_comparisons and the human-feedback summarization data (openai/summarize_from_feedback). Unlike the other electra-large model, this one is trained with a rank loss and one additional dataset. On the validation dataset the results are much more stable than usual. You can refer to this [wandb](https://wandb.ai/theblackcat102/reward-model/runs/1d4e4oi2?workspace=) run for more details. Slightly better than the previous WebGPT-only model: [electra-large](https://huggingface.co/theblackcat102/electra-large-webgpt-rm)
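A minimal scoring sketch; encoding the question and answer as a single text pair is an assumption about the training input format:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "theblackcat102/electra-large-reward-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

question = "Why is the sky blue?"
answer = "Molecules in the air scatter blue light more strongly than red light."

# Assumption: question/answer encoded as one text pair, as in WebGPT comparisons.
inputs = tokenizer(question, answer, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0].item()
print(f"reward: {reward:.4f}")
```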
AgentXXX/Reinforce-Pixelcopter-PLE-v0
AgentXXX
2023-01-06T22:37:17Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T13:53:57Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 38.40 +/- 22.50 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
gballardin/Reinforce-Pixelcopter-PLE-v0
gballardin
2023-01-06T22:11:50Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T01:05:26Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.20 +/- 29.12 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Schwarzschild009/ppo-LunarLander-v2
Schwarzschild009
2023-01-06T22:08:39Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T21:50:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: ppo results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 266.49 +/- 15.74 name: mean_reward verified: false --- # **ppo** Agent playing **LunarLander-v2** This is a trained model of a **ppo** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
cleanrl/Freeway-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1
cleanrl
2023-01-06T21:59:15Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Freeway-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T21:59:11Z
--- tags: - Freeway-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Freeway-v5 type: Freeway-v5 metrics: - type: mean_reward value: 33.70 +/- 0.46 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Freeway-v5** This is a trained model of a PPO agent playing Freeway-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]" python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Freeway-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Freeway-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py curl -OL https://huggingface.co/cleanrl/Freeway-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Freeway-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock poetry install --all-extras python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 1 ``` # Hyperparameters ```python {'anneal_lr': True, 'async_batch_size': 16, 'batch_size': 2048, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Freeway-v5', 'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado', 'gae': True, 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1024, 'norm_adv': True, 'num_envs': 64, 'num_minibatches': 2, 'num_steps': 32, 'num_updates': 24414, 'save_model': True, 'seed': 1, 'target_kl': None, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 2, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'envpool-atari'} ```
bitcloud2/Reinforce-pixelcopter
bitcloud2
2023-01-06T21:48:52Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T09:15:25Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter results: - metrics: - type: mean_reward value: 45.10 +/- 39.04 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
EduardoCGarridoMerchan/Taxi-v3-explore_slow
EduardoCGarridoMerchan
2023-01-06T21:15:14Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T21:15:05Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-explore_slow results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="EduardoCGarridoMerchan/Taxi-v3-explore_slow", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
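`load_from_hub` in the snippet above is not a library import; here is a minimal definition, assuming the pickle stores a dict with keys such as `env_id` (as the snippet's `model["env_id"]` lookup implies):

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and deserialize it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```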
Danimp94/PPO-LunarLander-DM-2
Danimp94
2023-01-06T21:11:13Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T21:10:49Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 282.18 +/- 13.31 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
EduardoCGarridoMerchan/Taxi-v3-explore
EduardoCGarridoMerchan
2023-01-06T20:56:53Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T20:56:43Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-explore results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.83 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="EduardoCGarridoMerchan/Taxi-v3-explore", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
LarryAIDraw/bochi_anyf222Hiten03_BWM197531
LarryAIDraw
2023-01-06T20:41:14Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-06T19:45:57Z
--- license: creativeml-openrail-m ---
manuelblp/ppo-Huggy
manuelblp
2023-01-06T20:16:24Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-01-06T20:16:16Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: manuelblp/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
ashutosh1919/dqn-SpaceInvadersNoFrameskip-v4
ashutosh1919
2023-01-06T20:05:14Z
4
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T20:04:35Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 734.50 +/- 284.12 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ashutosh1919 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ashutosh1919 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ashutosh1919 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
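Beyond the CLI workflow in the card above, the checkpoint can also be loaded directly in Python via `huggingface_sb3`. A minimal sketch, assuming the RL Zoo pushed the weights under its usual `{algo}-{env}.zip` naming — confirm the filename in the repo's file list before relying on it:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub (filename assumed, see note above).
checkpoint = load_from_hub(
    repo_id="ashutosh1919/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# buffer_size=1 avoids allocating the full 100k-step replay buffer
# when the model is only used for inference.
model = DQN.load(checkpoint, buffer_size=1)
```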
camenduru/one-shot-talking-face
camenduru
2023-01-06T19:46:46Z
0
17
null
[ "arxiv:2112.02749", "region:us" ]
null
2023-01-06T18:48:08Z
# One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning (AAAI 2022) #### [Paper](https://arxiv.org/pdf/2112.02749.pdf) | [Demo](https://www.youtube.com/watch?v=HHj-XCXXePY) #### Requirements - Python >= 3.6, PyTorch >= 1.8, and ffmpeg - Set up [OpenFace](https://github.com/TadasBaltrusaitis/OpenFace) - We use the OpenFace tools to extract the initial pose of the reference image - Make sure you have installed this tool and set the `OPENFACE_POSE_EXTRACTOR_PATH` in `config.py`. For example, on Windows it should be the absolute path of `FeatureExtraction.exe`. - Other requirements are listed in `requirements.txt` #### Pretrained Checkpoint Please download the pretrained checkpoint from [google-drive](https://drive.google.com/file/d/1mjFEozPR_2vMaVRMd9Agk_sU1VaiUYMl/view?usp=sharing) and unzip it to the `/checkpoints` directory, or manually modify the settings of `GENERATOR_CKPT` and `AUDIO2POSE_CKPT` in `config.py`. #### Extract phoneme We employ the [CMU phoneset](https://github.com/cmusphinx/cmudict) to represent phonemes; the extra 'SIL' means silence. The full phoneset can be seen in `phindex.json`. We have extracted the phonemes for the audios in the `sample/audio` directory. For other audios, you can extract the phonemes with other ASR tools and then map them to the CMU phoneset, or email [email protected] for help. #### Generate Demo Results ``` python test_script.py --img_path xxx.jpg --audio_path xxx.wav --phoneme_path xxx.json --save_dir "YOUR_DIR" ``` Note that the input images must have the same height and width, and the face should be appropriately cropped as in `samples/imgs`. You can also preprocess your images with `image_preprocess.py`. #### License and Citation ``` @inproceedings{wang2021one, author = {Suzhen Wang and Lincheng Li and Yu Ding and Xin Yu}, title = {One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning}, booktitle = {AAAI 2022}, year = {2022}, } ``` #### Acknowledgement This codebase is based on [First Order Motion Model](https://github.com/AliaksandrSiarohin/first-order-model) and [imaginaire](https://github.com/NVlabs/imaginaire); thanks for their contributions.
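The phoneme-mapping step above can be scripted. The sketch below is illustrative only: it assumes your ASR tool already emits phoneme labels with timestamps, that unknown labels should fall back to 'SIL', and that the JSON layout matches what `test_script.py` expects — compare against the JSON files shipped in `sample/audio` for the exact schema.

```python
import json

# Load the CMU phoneme-to-index table shipped with the repository.
with open("phindex.json") as f:
    phindex = json.load(f)

def to_phoneme_json(segments, out_path):
    """segments: (phoneme, start_sec, end_sec) tuples from your ASR tool."""
    mapped = []
    for phone, start, end in segments:
        phone = phone.upper()
        # Anything outside the CMU phoneset falls back to silence.
        mapped.append({"phone": phone if phone in phindex else "SIL",
                       "start": start, "end": end})
    with open(out_path, "w") as f:
        json.dump(mapped, f)

# Hypothetical aligner output for a short utterance:
to_phoneme_json([("HH", 0.00, 0.08), ("AH", 0.08, 0.20), ("SIL", 0.20, 0.35)],
                "my_audio.json")
```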
EduardoCGarridoMerchan/ppo-LunarLander-v3
EduardoCGarridoMerchan
2023-01-06T19:31:26Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T19:31:06Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.63 +/- 25.33 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
EduardoCGarridoMerchan/Taxi-v3-second-taxi
EduardoCGarridoMerchan
2023-01-06T19:28:42Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T19:28:33Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-second-taxi results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="EduardoCGarridoMerchan/Taxi-v3-second-taxi", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
nandysoham/distilbert-base-uncased-finetuned-squad
nandysoham
2023-01-06T18:26:34Z
73
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-12-23T03:14:32Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nandysoham/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nandysoham/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.4937 - Train End Logits Accuracy: 0.6109 - Train Start Logits Accuracy: 0.5719 - Validation Loss: 1.1728 - Validation End Logits Accuracy: 0.6787 - Validation Start Logits Accuracy: 0.6479 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.4937 | 0.6109 | 0.5719 | 1.1728 | 0.6787 | 0.6479 | 0 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
gneuert/swin-tiny-patch4-window7-224-finetuned-eurosat
gneuert
2023-01-06T18:17:21Z
37
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:cifar10", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-06T18:15:27Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cifar10 model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
mmontecino/q-FrozenLake-v1-4x4-noSlippery
mmontecino
2023-01-06T18:11:34Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T18:08:38Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym model = load_from_hub(repo_id="mmontecino/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
gneuert/bert-emotion
gneuert
2023-01-06T18:04:07Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-06T17:24:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tweet_eval metrics: - precision - recall model-index: - name: bert-emotion results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: emotion split: train args: emotion metrics: - name: Precision type: precision value: 0.7221058163048105 - name: Recall type: recall value: 0.7241535542602306 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.2559 - Precision: 0.7221 - Recall: 0.7242 - Fscore: 0.7223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.8588 | 1.0 | 815 | 0.8342 | 0.7807 | 0.6117 | 0.6364 | | 0.5394 | 2.0 | 1630 | 0.9126 | 0.7363 | 0.6923 | 0.7096 | | 0.2805 | 3.0 | 2445 | 1.2559 | 0.7221 | 0.7242 | 0.7223 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
codeSpaghetti/ppo-Huggy
codeSpaghetti
2023-01-06T17:54:29Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-01-06T17:54:22Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Write your model_id: codeSpaghetti/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Seif/dqn-SpaceInvadersNoFrameskip-v4
Seif
2023-01-06T17:50:11Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T17:49:35Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 529.50 +/- 95.62 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Seif -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Seif -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Seif ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
cleanrl/FishingDerby-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1
cleanrl
2023-01-06T17:17:48Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "FishingDerby-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T17:17:44Z
--- tags: - FishingDerby-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FishingDerby-v5 type: FishingDerby-v5 metrics: - type: mean_reward value: 37.60 +/- 9.68 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **FishingDerby-v5** This is a trained model of a PPO agent playing FishingDerby-v5. The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py). ## Get Started To use this model, install the `cleanrl` package and run the following commands: ``` pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]" python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id FishingDerby-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more details. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock poetry install --all-extras python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 1 ``` # Hyperparameters ```python {'anneal_lr': True, 'async_batch_size': 16, 'batch_size': 2048, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'FishingDerby-v5', 'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado', 'gae': True, 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1024, 'norm_adv': True, 'num_envs': 64, 'num_minibatches': 2, 'num_steps': 32, 'num_updates': 24414, 'save_model': True, 'seed': 1, 'target_kl': None, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 2, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'envpool-atari'} ```
krob/lab9_model_bert
krob
2023-01-06T17:13:17Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-01-06T16:50:08Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: lab9_model_bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lab9_model_bert This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6498 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 50 | 3.6137 | | No log | 2.0 | 100 | 1.9421 | | No log | 3.0 | 150 | 1.2792 | | No log | 4.0 | 200 | 1.0015 | | No log | 5.0 | 250 | 0.8056 | | No log | 6.0 | 300 | 0.7624 | | No log | 7.0 | 350 | 0.6635 | | No log | 8.0 | 400 | 0.6740 | | No log | 9.0 | 450 | 0.6580 | | 1.4267 | 10.0 | 500 | 0.6498 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
iSchock92/Buch
iSchock92
2023-01-06T17:12:12Z
0
0
null
[ "de", "license:afl-3.0", "region:us" ]
null
2023-01-06T17:10:40Z
--- license: afl-3.0 language: - de ---
BobbyG97/German-MedBERT-Birads-NER100
BobbyG97
2023-01-06T16:42:07Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-06T16:28:02Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: German-MedBERT-Birads-NER results: [] widget: - text: "Beurteilung Mammographisch beidseits kein Anhalt für Malignität. Links ACR-Typ d. Rechts BIRADS 4. Links BIRADS 1." example_title: "NER Example" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # German-MedBERT-Birads-NER This model is a fine-tuned version of [smanjil/German-MedBERT](https://huggingface.co/smanjil/German-MedBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0617 - Precision: 0.5 - Recall: 0.6667 - F1: 0.5714 - Accuracy: 0.9987 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 10 | 0.0125 | 1.0 | 1.0 | 1.0 | 1.0 | | No log | 2.0 | 20 | 0.0718 | 0.5 | 0.6667 | 0.5714 | 0.9987 | | No log | 3.0 | 30 | 0.0617 | 0.5 | 0.6667 | 0.5714 | 0.9987 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
yizhangliu/Pixelcopter-PLE-v0
yizhangliu
2023-01-06T16:13:24Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T16:13:10Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 67.00 +/- 33.25 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and to train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
artificialguybr/ColoringBookSD
artificialguybr
2023-01-06T16:04:47Z
0
22
null
[ "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2022-10-22T16:35:23Z
--- license: creativeml-openrail-m tags: - text-to-image --- This model was trained with DreamBooth on 35 images for 2,500 steps, on top of Stable Diffusion version 1.5. The repository files also include the aesthetic embeddings. The model is not perfect, but it delivers solid results. The goal was a more simplistic style, since the images Stable Diffusion naturally generates are more elaborate and full of unnecessary details. To use it, download the model (about 2 GB) and place it in your Stable Diffusion installation. Include ''in VARPJ1 Coloring Book Art Style'' in your prompt to get the best results. I hope you like it!
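If you prefer scripting over a web UI, here is a minimal sketch, assuming the 2 GB checkpoint has been converted to the diffusers layout (e.g. with diffusers' `convert_original_stable_diffusion_to_diffusers.py` script) into a hypothetical local folder `./ColoringBookSD`; in a web UI you would instead drop the checkpoint into your models folder and use the same trigger phrase.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical local path to the converted checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "./ColoringBookSD", torch_dtype=torch.float16
).to("cuda")

# The trigger phrase must appear in the prompt.
image = pipe("a friendly dragon in VARPJ1 Coloring Book Art Style").images[0]
image.save("dragon_coloring_page.png")
```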
Loelia/wav2vec2-base-timit-demo-google-colab
Loelia
2023-01-06T15:53:36Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-01-04T08:29:47Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4414 - Wer: 0.3578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.095 | 1.0 | 500 | 0.4785 | 0.3873 | | 0.09 | 2.01 | 1000 | 0.5840 | 0.4203 | | 0.1059 | 3.01 | 1500 | 0.5674 | 0.4073 | | 0.0857 | 4.02 | 2000 | 0.5026 | 0.3797 | | 0.0719 | 5.02 | 2500 | 0.4981 | 0.3783 | | 0.0565 | 6.02 | 3000 | 0.4595 | 0.3721 | | 0.0463 | 7.03 | 3500 | 0.4578 | 0.3629 | | 0.0363 | 8.03 | 4000 | 0.4832 | 0.3669 | | 0.0444 | 9.04 | 4500 | 0.4414 | 0.3578 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
iricardoxd/optimum-gpt2
iricardoxd
2023-01-06T15:47:10Z
3
0
transformers
[ "transformers", "onnx", "gpt2", "text-generation", "exbert", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-01-06T15:36:14Z
--- language: en tags: - exbert license: mit --- # GPT-2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on the English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences: inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one token (word or piece of a word) to the right. Internally, the model uses a masking mechanism to make sure the prediction for token `i` only uses the inputs from `1` to `i` and not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for: generating text from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use the ONNX version of GPT-2 to generate text from a prompt. Example using transformers.pipelines: ```python from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("gpt2") model = ORTModelForCausalLM.from_pretrained("gpt2", from_transformers=True) onnx_gen = pipeline("text-generation", model=model, tokenizer=tokenizer) text = "My name is Philipp and I live in Germany." gen = onnx_gen(text) ``` Example of text generation: ```python from transformers import AutoTokenizer from optimum.onnxruntime import ORTModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2") model = ORTModelForCausalLM.from_pretrained("optimum/gpt2") inputs = tokenizer("My name is Arthur and I live in", return_tensors="pt") gen_tokens = model.generate(**inputs, do_sample=True, temperature=0.9, min_length=20, max_length=20) tokenizer.batch_decode(gen_tokens) ```
LinasKo/q-Taxi-v3
LinasKo
2023-01-06T15:26:25Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T15:26:21Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.69 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="LinasKo/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BiggieW/classification_chnsenticorp_aug
BiggieW
2023-01-06T15:24:55Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-06T09:31:31Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: classification_chnsenticorp_aug results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # classification_chnsenticorp_aug This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3776 - Accuracy: 0.85 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4438 | 1.0 | 20 | 0.5145 | 0.75 | | 0.0666 | 2.0 | 40 | 0.4066 | 0.9 | | 0.0208 | 3.0 | 60 | 0.3776 | 0.85 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
MeetMeAt92/lolight
MeetMeAt92
2023-01-06T15:23:58Z
0
0
keras
[ "keras", "image-classification", "en", "arxiv:1910.09700", "region:us" ]
image-classification
2023-01-06T15:17:00Z
--- language: - en library_name: keras pipeline_tag: image-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed] # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> [More Information Needed] </details>
padmalcom/tts-hifigan-german
padmalcom
2023-01-06T15:17:13Z
116
2
speechbrain
[ "speechbrain", "Vocoder", "HiFIGAN", "text-to-speech", "TTS", "speech-synthesis", "de", "dataset:custom", "arxiv:2010.05646", "license:apache-2.0", "region:us" ]
text-to-speech
2022-11-04T07:43:44Z
--- language: "de" inference: false tags: - Vocoder - HiFIGAN - text-to-speech - TTS - speech-synthesis - speechbrain license: "apache-2.0" datasets: - custom --- # Vocoder with HiFIGAN trained on a custom German dataset This repository provides all the necessary tools for using a [HiFIGAN](https://arxiv.org/abs/2010.05646) vocoder trained on a German dataset generated with [mp3_to_training_data](https://github.com/padmalcom/mp3_to_training_data). The pre-trained model (8 epochs so far) takes a spectrogram as input and produces a waveform as output. Typically, a vocoder is used after a TTS model that converts input text into a spectrogram. ## How to use Install speechbrain. ```bash pip install speechbrain ``` Use a TTS model (e.g. [tts-tacotron-german](https://huggingface.co/padmalcom/tts-tacotron2-german)) to generate a spectrogram, then convert it to audio. ```python import torchaudio from speechbrain.pretrained import Tacotron2 from speechbrain.pretrained import HIFIGAN tacotron2 = Tacotron2.from_hparams(source="padmalcom/tts-tacotron2-german", savedir="tmpdir_tts") hifi_gan = HIFIGAN.from_hparams(source="padmalcom/tts-hifigan-german", savedir="tmpdir_vocoder") mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb") waveforms = hifi_gan.decode_batch(mel_output) torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
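For example, to load the vocoder from the snippet above onto a GPU (directly applying the documented `run_opts` hint):

```python
from speechbrain.pretrained import HIFIGAN

# Same checkpoint as above, moved to the GPU via run_opts.
hifi_gan = HIFIGAN.from_hparams(
    source="padmalcom/tts-hifigan-german",
    savedir="tmpdir_vocoder",
    run_opts={"device": "cuda"},
)
```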
DanGalt/Reinforce-MLP-CartPole-v1
DanGalt
2023-01-06T15:12:55Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T15:12:43Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-MLP-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and to train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
OliP/Taxi-v3
OliP
2023-01-06T14:48:38Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T14:48:33Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="OliP/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Victarry/q-Taxi-v3
Victarry
2023-01-06T14:34:52Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T14:34:40Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym model = load_from_hub(repo_id="Victarry/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BobMcDear/seresnet50
BobMcDear
2023-01-06T14:27:20Z
0
0
null
[ "region:us" ]
null
2023-01-06T14:24:17Z
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
BobMcDear/seresnet152d
BobMcDear
2023-01-06T14:27:03Z
0
0
null
[ "region:us" ]
null
2023-01-06T14:24:19Z
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
dbaibak/Reinforce-CartPole-v1
dbaibak
2023-01-06T14:14:50Z
0
1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T14:14:40Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and to train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
muhtasham/tiny-mlm-squad-custom-tokenizer
muhtasham
2023-01-06T14:08:41Z
107
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-06T13:58:55Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: tiny-mlm-squad-plain_text-custom-tokenizer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-squad-plain_text-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.3247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.5181 | 0.4 | 500 | 7.5716 | | 6.4657 | 0.8 | 1000 | 7.5778 | | 6.2336 | 1.2 | 1500 | 7.4653 | | 6.0699 | 1.6 | 2000 | 7.4193 | | 5.946 | 2.0 | 2500 | 7.2908 | | 5.7981 | 2.4 | 3000 | 7.2710 | | 5.8332 | 2.8 | 3500 | 7.3876 | | 5.772 | 3.2 | 4000 | 7.3050 | | 5.6513 | 3.6 | 4500 | 7.3247 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
kokuma/mazapan
kokuma
2023-01-06T13:34:50Z
33
18
diffusers
[ "diffusers", "pytorch", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "food", "dataset:kokuma/figuritas-de-mazapan", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-06T12:00:41Z
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - food widget: - text: a cute bunny, mazapan example_title: "Bunny" - text: a cute robot made of mazapan example_title: "Robot" - text: a photograph of a cute dog, mazapan example_title: "Dog" datasets: - kokuma/figuritas-de-mazapan --- # DreamBooth model for the `mazapan` concept trained by kokuma on the `kokuma/figuritas-de-mazapan` dataset. This is a Stable Diffusion model fine-tuned on the `mazapan` concept with DreamBooth for the food theme.\ This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! #### Prompts - **a cute X, mazapan**: `a cute bunny, mazapan` - **a cute X made of mazapan**: `a cute robot made of mazapan` - **a photograph of a cute X, mazapan**: `a photograph of a cute dog, mazapan` #### Suggested parameters - **CFG scale**: Between 6 and 8 - **Samplers**: Euler a, Euler, DPM2 a, DPM++ SDE, DPM fast, DPM adaptive, DPM2 a Karras ## Examples | a cute dog, mazapan | a cute sparrow, mazapan | a cute bear, mazapan | | -- | -- | -- | | ![](images/00012-3020517259-a-cute-dog,-mazapan.png) | ![](images/00015-2412980111-a-cute-sparrow,-mazapan.png) | ![](images/00020-4193097991-a-cute-bear,-mazapan.png) | | a cute koala, mazapan | a cute robot made of mazapan | a cute fox, mazapan | | -- | -- | -- | | ![](images/00021-3687677306-a-cute-koala,-mazapan.png) | ![](images/00081-1016725166-a-cute-robot-made-of-mazapan.png) | ![](images/00151-901443973-a-cute-fox,-mazapan.png) | ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('kokuma/mazapan') # The pipeline needs a prompt containing the concept token; CFG 6-8 works well (see above) image = pipeline('a cute bunny, mazapan', guidance_scale=7).images[0] image ```
NathanaelM/Reinforce-Cartpole-v1
NathanaelM
2023-01-06T13:25:30Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T13:24:52Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and to train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
CinthiaS/xlm-roberta-base-finetuned-panx-pt
CinthiaS
2023-01-06T13:09:07Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-06T11:31:44Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-pt results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.pt split: train args: PAN-X.pt metrics: - name: F1 type: f1 value: 0.8500983209801846 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-pt This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2774 - F1: 0.8501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6851 | 1.0 | 209 | 0.3271 | 0.8061 | | 0.2738 | 2.0 | 418 | 0.2774 | 0.8501 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
rishipatel92/Copter_v102
rishipatel92
2023-01-06T13:05:29Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T09:47:27Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Copter_v102 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 45.10 +/- 27.16 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and to train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
muhtasham/tiny-mlm-glue-stsb-custom-tokenizer
muhtasham
2023-01-06T12:54:54Z
107
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-06T12:49:20Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: tiny-mlm-glue-stsb-custom-tokenizer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-stsb-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.1017 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.0065 | 0.7 | 500 | 7.1630 | | 6.9741 | 1.39 | 1000 | 7.2582 | | 6.8436 | 2.09 | 1500 | 7.0893 | | 6.6443 | 2.78 | 2000 | 7.1783 | | 6.7138 | 3.48 | 2500 | 7.0927 | | 6.5978 | 4.17 | 3000 | 7.1017 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
muhtasham/tiny-mlm-glue-rte-custom-tokenizer
muhtasham
2023-01-06T12:43:32Z
107
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-06T12:39:50Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-rte-custom-tokenizer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-rte-custom-tokenizer

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3646

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.71          | 1.6   | 500  | 7.1503          |
| 6.8618        | 3.21  | 1000 | 7.2787          |
| 6.816         | 4.81  | 1500 | 7.2543          |
| 6.7094        | 6.41  | 2000 | 7.3646          |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
chist/Reinforce-cartpole
chist
2023-01-06T12:41:50Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T12:41:39Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Eip/autotrain-real-vs-fake-news-2757281771
Eip
2023-01-06T12:23:34Z
107
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain", "unk", "dataset:Eip/autotrain-data-real-vs-fake-news", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-06T12:22:05Z
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Eip/autotrain-data-real-vs-fake-news
co2_eq_emissions:
  emissions: 2.3993982522584325
---

# Model Trained Using AutoTrain

- Problem type: Binary Classification
- Model ID: 2757281771
- CO2 Emissions (in grams): 2.3994

## Validation Metrics

- Loss: 0.002
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Eip/autotrain-real-vs-fake-news-2757281771
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Eip/autotrain-real-vs-fake-news-2757281771", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("Eip/autotrain-real-vs-fake-news-2757281771", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")

outputs = model(**inputs)
```
Eip/autotrain-real-vs-fake-news-2757281769
Eip
2023-01-06T12:22:45Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain", "unk", "dataset:Eip/autotrain-data-real-vs-fake-news", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-06T12:21:58Z
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Eip/autotrain-data-real-vs-fake-news
co2_eq_emissions:
  emissions: 1.5370516791351636
---

# Model Trained Using AutoTrain

- Problem type: Binary Classification
- Model ID: 2757281769
- CO2 Emissions (in grams): 1.5371

## Validation Metrics

- Loss: 0.002
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Eip/autotrain-real-vs-fake-news-2757281769
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Eip/autotrain-real-vs-fake-news-2757281769", use_auth_token=True)

tokenizer = AutoTokenizer.from_pretrained("Eip/autotrain-real-vs-fake-news-2757281769", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")

outputs = model(**inputs)
```
alphahg/koelectra-90435398
alphahg
2023-01-06T12:16:57Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:custom_squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-01-06T11:03:19Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- custom_squad_v2
model-index:
- name: koelectra-90435398
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# koelectra-90435398

This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on the custom_squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7180

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 128
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.94  | 10   | 1.8954          |
| No log        | 1.94  | 20   | 1.7180          |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
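A minimal question-answering sketch for this checkpoint (repo id from this row; the Korean context/question pair is illustrative only):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="alphahg/koelectra-90435398")
result = qa(
    question="대한민국의 수도는 어디인가?",   # "What is the capital of South Korea?"
    context="대한민국의 수도는 서울이다.",     # "The capital of South Korea is Seoul."
)
print(result["answer"], result["score"])
```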
CanadaKasper/dqn-SpaceInvadersNoFrameskip-v4
CanadaKasper
2023-01-06T12:07:04Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-02T07:51:13Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 655.50 +/- 187.24
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga CanadaKasper -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga CanadaKasper -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga CanadaKasper
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
Ryukijano/ppo-LunarLander-v2
Ryukijano
2023-01-06T12:05:56Z
1
1
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T12:05:29Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 231.31 +/- 14.85
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
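The card leaves the usage block as a TODO; a plausible completion is sketched below. It assumes the checkpoint was pushed with `huggingface_sb3` under the course's default `ppo-LunarLander-v2.zip` filename — verify the actual filename in the repo before running.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the checkpoint filename follows the course convention.
checkpoint = load_from_hub("Ryukijano/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```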
LarryAIDraw/any_wlop1_02
LarryAIDraw
2023-01-06T11:48:55Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-06T10:43:28Z
---
license: creativeml-openrail-m
---
nonmetal/gslm-japanese
nonmetal
2023-01-06T11:37:08Z
0
3
null
[ "ja", "region:us" ]
null
2022-12-26T05:10:22Z
---
language:
- ja
---

# Japanese GSLM

This is a Japanese implementation of [Generative Spoken Language Model](https://github.com/facebookresearch/fairseq/tree/main/examples/textless_nlp/gslm) to support textless NLP in Japanese.

Submitted to the Acoustical Society of Japan, 2023 Spring.

## How to use

- PyTorch version >= 1.10.0
- Python version >= 3.8

### Install requirements

First install the [fairseq](https://github.com/facebookresearch/fairseq/) library and all of its requirements.

```
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
pip install librosa unidecode inflect
```

## Re-synthesis of voice signal

### speech2unit

The procedure for speech2unit is the same as in the gslm example in [fairseq](https://github.com/facebookresearch/fairseq/tree/main/examples/textless_nlp/gslm/speech2unit). You can convert a Japanese voice signal to discrete units with this [pre-trained quantization model](https://huggingface.co/nonmetal/gslm-japanese/resolve/main/hubert200_JPN.bin). Point `KM_MODEL_PATH` at the downloaded model. This file replaces the `HuBERT Base + KM200` model provided by fairseq, so you still need to download the `HuBERT-Base` model as the pretrained acoustic model.

```
TYPE='hubert'
CKPT_PATH=<path_of_pretrained_acoustic_model>
LAYER=6
KM_MODEL_PATH=<output_path_of_the_kmeans_model>
MANIFEST=<tab_separated_manifest_of_audio_files_to_quantize>
OUT_QUANTIZED_FILE=<output_quantized_audio_file_path>

python examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py \
    --feature_type $TYPE \
    --kmeans_model_path $KM_MODEL_PATH \
    --acoustic_model_path $CKPT_PATH \
    --layer $LAYER \
    --manifest_path $MANIFEST \
    --out_quantized_file_path $OUT_QUANTIZED_FILE \
    --extension ".wav"
```

### unit2speech

The unit2speech model is a modified Tacotron2 model that learns to synthesize speech from discrete speech units. You can convert discrete units to synthesized voice with this [model](https://huggingface.co/nonmetal/gslm-japanese/resolve/main/checkpoint_125k.pt). You also need to download the [WaveGlow checkpoint](https://dl.fbaipublicfiles.com/textless_nlp/gslm/waveglow_256channels_new.pt) for the vocoder. Conversion from units to speech is done with `unit2speech_ja.py` from this repository, used together with `hparam.py` for compatibility.

```
TTS_MODEL_PATH=<unit2speech_model_file_path>
OUT_DIR=<dir_to_dump_synthesized_audio_files>
WAVEGLOW_PATH=<path_where_you_have_downloaded_waveglow_checkpoint>

python unit2speech_ja.py \
    --tts_model_path $TTS_MODEL_PATH \
    --out_audio_dir $OUT_DIR \
    --waveglow_path $WAVEGLOW_PATH
```

## References

- Lakhotia, Kushal et al. On Generative Spoken Language Modeling from Raw Audio. Transactions of the Association for Computational Linguistics, 9:1336–1354, 2021.
- Ott, Myle et al. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, 2019.
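The speech2unit step expects a tab-separated manifest of audio files. A small helper of the kind fairseq's wav2vec examples use might look like the sketch below — the layout (first line is the audio root directory, then one `relative_path<TAB>num_samples` line per file) is an assumption based on fairseq's manifest convention, so verify it against your fairseq version.

```python
import os
import soundfile as sf  # assumption: soundfile is available for reading wav headers

def write_manifest(audio_root: str, out_path: str) -> None:
    """Write a fairseq-style manifest: root dir, then 'rel_path\tnum_samples' per wav."""
    with open(out_path, "w") as out:
        out.write(audio_root + "\n")
        for name in sorted(os.listdir(audio_root)):
            if name.endswith(".wav"):
                frames = sf.info(os.path.join(audio_root, name)).frames
                out.write(f"{name}\t{frames}\n")

write_manifest("/data/ja_wavs", "train.tsv")  # illustrative paths
```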
LarryAIDraw/bochi_any_out91011maxother0
LarryAIDraw
2023-01-06T11:32:56Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-01-06T10:20:22Z
---
license: creativeml-openrail-m
---
DeepBird/my-distilBERT-finetune-ner
DeepBird
2023-01-06T11:27:33Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-06T08:36:29Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
model-index:
- name: my-distilBERT-finetune-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: train
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9376994122586062
    - name: Recall
      type: recall
      value: 0.9397509256142713
    - name: F1
      type: f1
      value: 0.9387240480793477
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# my-distilBERT-finetune-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0563
- Precision: 0.9377
- Recall: 0.9398
- F1: 0.9387

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log        | 1.0   | 439  | 0.0547          | 0.9251    | 0.9291 | 0.9271 |
| 0.1451        | 2.0   | 878  | 0.0531          | 0.9315    | 0.9386 | 0.9350 |
| 0.0326        | 3.0   | 1317 | 0.0563          | 0.9377    | 0.9398 | 0.9387 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
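The precision/recall/F1 figures above are entity-level metrics of the kind `seqeval` computes; a minimal sketch (the IOB2 tag sequences are illustrative, not taken from this model's output):

```python
from seqeval.metrics import classification_report, f1_score

# Illustrative gold and predicted tag sequences in IOB2 format.
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]  # the location entity was missed

print(f1_score(y_true, y_pred))          # entity-level F1
print(classification_report(y_true, y_pred))
```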
muhtasham/tiny-mlm-glue-mnli-custom-tokenizer
muhtasham
2023-01-06T11:26:49Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-06T10:33:07Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-mnli-custom-tokenizer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-mnli-custom-tokenizer

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1721

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.8162        | 0.4   | 500   | 7.1032          |
| 6.9567        | 0.8   | 1000  | 7.0697          |
| 6.8563        | 1.2   | 1500  | 7.0460          |
| 6.7685        | 1.6   | 2000  | 7.0131          |
| 6.6897        | 2.0   | 2500  | 6.9769          |
| 6.5455        | 2.4   | 3000  | 6.9249          |
| 6.482         | 2.8   | 3500  | 6.8552          |
| 6.4153        | 3.2   | 4000  | 6.8445          |
| 6.38          | 3.6   | 4500  | 6.7803          |
| 6.4066        | 4.0   | 5000  | 6.8070          |
| 6.2854        | 4.4   | 5500  | 6.7329          |
| 6.2966        | 4.8   | 6000  | 6.7094          |
| 6.1244        | 5.2   | 6500  | 6.6476          |
| 6.1276        | 5.6   | 7000  | 6.6118          |
| 6.0685        | 6.0   | 7500  | 6.5714          |
| 5.98          | 6.4   | 8000  | 6.5522          |
| 6.0174        | 6.8   | 8500  | 6.5093          |
| 5.9451        | 7.2   | 9000  | 6.4866          |
| 5.9681        | 7.6   | 9500  | 6.5238          |
| 5.9246        | 8.0   | 10000 | 6.5340          |
| 5.9219        | 8.4   | 10500 | 6.4727          |
| 5.8812        | 8.8   | 11000 | 6.4483          |
| 5.7815        | 9.2   | 11500 | 6.4402          |
| 5.7938        | 9.6   | 12000 | 6.4124          |
| 5.7934        | 10.0  | 12500 | 6.3908          |
| 5.7332        | 10.4  | 13000 | 6.3861          |
| 5.7628        | 10.8  | 13500 | 6.3638          |
| 5.7259        | 11.2  | 14000 | 6.3345          |
| 5.7505        | 11.6  | 14500 | 6.3117          |
| 5.6441        | 12.0  | 15000 | 6.3118          |
| 5.7058        | 12.4  | 15500 | 6.3116          |
| 5.6017        | 12.8  | 16000 | 6.2728          |
| 5.6424        | 13.2  | 16500 | 6.2790          |
| 5.5799        | 13.6  | 17000 | 6.3034          |
| 5.5625        | 14.0  | 17500 | 6.2580          |
| 5.6015        | 14.4  | 18000 | 6.2607          |
| 5.4884        | 14.8  | 18500 | 6.2535          |
| 5.5117        | 15.2  | 19000 | 6.1960          |
| 5.4919        | 15.6  | 19500 | 6.1907          |
| 5.4624        | 16.0  | 20000 | 6.1838          |
| 5.4721        | 16.4  | 20500 | 6.1461          |
| 5.4833        | 16.8  | 21000 | 6.1251          |
| 5.4404        | 17.2  | 21500 | 6.1725          |
| 5.4487        | 17.6  | 22000 | 6.1417          |
| 5.4499        | 18.0  | 22500 | 6.1721          |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl
lambdaofgod
2023-01-06T11:22:36Z
0
0
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-01-06T11:22:31Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 200 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl)

## Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(53559, 200)
  )
  (1): WordWeights(
    (emb_layer): Embedding(53559, 1)
  )
  (2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
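Beyond encoding, the card's stated use cases (clustering, semantic search) can be exercised with the library's `util` helpers; a minimal semantic-search sketch with illustrative documents:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl")

corpus = ["a library for deep learning", "a web scraping toolkit"]
query = "neural network framework"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Returns the top_k nearest corpus entries per query by cosine similarity.
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)
print(hits)
```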
lambdaofgod/document-titles_dependencies-nbow-nbow-mnrl
lambdaofgod
2023-01-06T11:21:57Z
0
0
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-01-06T11:21:52Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# lambdaofgod/document-titles_dependencies-nbow-nbow-mnrl

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 200 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('lambdaofgod/document-titles_dependencies-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document-titles_dependencies-nbow-nbow-mnrl)

## Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(53559, 200)
  )
  (1): WordWeights(
    (emb_layer): Embedding(53559, 1)
  )
  (2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
lambdaofgod/query-titles_dependencies-nbow-nbow-mnrl
lambdaofgod
2023-01-06T11:21:41Z
0
0
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-01-06T11:21:36Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# lambdaofgod/query-titles_dependencies-nbow-nbow-mnrl

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 200 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('lambdaofgod/query-titles_dependencies-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/query-titles_dependencies-nbow-nbow-mnrl)

## Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(4395, 200)
  )
  (1): WordWeights(
    (emb_layer): Embedding(4395, 1)
  )
  (2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
lambdaofgod/query-readme-nbow-nbow-mnrl
lambdaofgod
2023-01-06T11:20:14Z
0
0
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-01-06T11:20:09Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# lambdaofgod/query-readme-nbow-nbow-mnrl

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 200 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('lambdaofgod/query-readme-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/query-readme-nbow-nbow-mnrl)

## Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(4395, 200)
  )
  (1): WordWeights(
    (emb_layer): Embedding(4395, 1)
  )
  (2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
lambdaofgod/document-dependencies-nbow-nbow-mnrl
lambdaofgod
2023-01-06T11:19:51Z
0
0
sentence-transformers
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-01-06T11:19:46Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# lambdaofgod/document-dependencies-nbow-nbow-mnrl

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 200 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('lambdaofgod/document-dependencies-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document-dependencies-nbow-nbow-mnrl)

## Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(53559, 200)
  )
  (1): WordWeights(
    (emb_layer): Embedding(53559, 1)
  )
  (2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
PaddlePaddle/plato-mini
PaddlePaddle
2023-01-06T10:37:33Z
2
6
paddlenlp
[ "paddlenlp", "paddlepaddle", "conversational", "zh", "arxiv:1910.07931", "license:apache-2.0", "region:us" ]
null
2022-11-22T07:49:08Z
---
license: apache-2.0
language:
- zh
library_name: paddlenlp
tags:
- conversational
---

[![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP)

# PaddlePaddle/plato-mini

## Introduction

Pre-trained models have proved effective for a wide range of natural language processing tasks. Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations, including chit-chat, knowledge-grounded dialogues, and conversational question answering. In this framework, we adopt flexible attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation. We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation. Two reciprocal tasks of response generation and latent act recognition are designed and carried out simultaneously within a shared network. Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.

More detail: https://arxiv.org/abs/1910.07931

## Available Models

- **plato-mini**, *6 layer, 12 heads, 768 hidden size*

## How to Use?

Click on the *Use in paddlenlp* button on the top right!

## Citation Info

```text
@article{bao2019plato,
  title = {PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable},
  author = {Bao, Siqi and He, Huang and Wang, Fan and Wu, Hua and Wang, Haifeng},
  journal = {arXiv preprint arXiv:1910.07931},
  year = {2019},
}
```
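Beyond the *Use in paddlenlp* button, PaddleNLP's `Taskflow` exposes a dialogue task; to the best of my knowledge `plato-mini` is its default Chinese chit-chat model, but treat that as an assumption and check the PaddleNLP docs for your version.

```python
from paddlenlp import Taskflow

# Assumption: the "dialogue" Taskflow loads plato-mini by default.
dialogue = Taskflow("dialogue")
print(dialogue(["你好，今天天气怎么样？"]))  # "Hi, how is the weather today?"
```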
muhtasham/tiny-mlm-glue-cola-custom-tokenizer
muhtasham
2023-01-06T10:31:48Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-01-06T10:19:13Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-cola-custom-tokenizer
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# tiny-mlm-glue-cola-custom-tokenizer

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2575        | 0.47  | 500  | 6.4792          |
| 6.4145        | 0.94  | 1000 | 6.4699          |
| 6.2252        | 1.4   | 1500 | 6.5489          |
| 6.0413        | 1.87  | 2000 | 6.3427          |
| 5.8394        | 2.34  | 2500 | 6.2134          |
| 5.825         | 2.81  | 3000 | nan             |
| 5.8071        | 3.27  | 3500 | 6.1627          |
| 5.6601        | 3.74  | 4000 | 6.0835          |
| 5.686         | 4.21  | 4500 | 6.0319          |
| 5.6029        | 4.68  | 5000 | 5.9500          |
| 5.5236        | 5.14  | 5500 | 5.9621          |
| 5.586         | 5.61  | 6000 | 5.8955          |
| 5.5582        | 6.08  | 6500 | 6.0435          |
| 5.412         | 6.55  | 7000 | 6.0175          |
| 5.397         | 7.02  | 7500 | nan             |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
PaddlePaddle/unimo-text-1.0-summary
PaddlePaddle
2023-01-06T10:30:26Z
12
3
paddlenlp
[ "paddlenlp", "paddlepaddle", "unimo", "summarization", "zh", "arxiv:2012.15409", "license:apache-2.0", "region:us" ]
summarization
2022-12-07T09:42:22Z
---
library_name: paddlenlp
license: apache-2.0
tags:
- summarization
language:
- zh
---

[![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP)

# PaddlePaddle/unimo-text-1.0-summary

## Introduction

Existing pre-training methods focus on either single-modal or multi-modal tasks, and cannot effectively adapt to the other. They can only utilize single-modal data (i.e. text or image) or limited multi-modal data (i.e. image-text pairs). In this work, we propose a unified-modal pre-training architecture, namely UNIMO, which can effectively adapt to both single-modal and multi-modal understanding and generation tasks. Large-scale free text corpora and image collections can be utilized to improve the capability of visual and textual understanding, and cross-modal contrastive learning (CMCL) is leveraged to align the textual and visual information into a unified semantic space over a corpus of image-text pairs. As non-paired single-modal data is very rich, our model can utilize much larger amounts of data to learn more generalizable representations. Moreover, the textual knowledge and visual knowledge can enhance each other in the unified semantic space. The experimental results show that UNIMO significantly improves the performance of several single-modal and multi-modal downstream tasks.

More detail: https://arxiv.org/abs/2012.15409

## Available Models

- **unimo-text-1.0**, *12 layer, 12 heads, 768 hidden size, pretrained model*
- **unimo-text-1.0-large**, *24 layer, 16 heads, 1024 hidden size, pretrained model*
- **unimo-text-1.0-lcsts-new**, *12 layer, 12 heads, 768 hidden size, finetuned on the lcsts-new Chinese summarization dataset*
- **unimo-text-1.0-summary**, *12 layer, 12 heads, 768 hidden size, finetuned on several in-house Chinese summarization datasets*

## How to Use?

Click on the *Use in paddlenlp* button on the top right!

## Citation Info

```text
@article{li2020unimo,
  title = {UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning},
  author = {Li, Wei and Gao, Can and Niu, Guocheng and Xiao, Xinyan and Liu, Hao and Liu, Jiachen and Wu, Hua and Wang, Haifeng},
  journal = {arXiv preprint arXiv:2012.15409},
  year = {2020},
}
```
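In code, PaddleNLP's `Taskflow` exposes a Chinese text-summarization task; I believe it defaults to this `unimo-text-1.0-summary` checkpoint in recent PaddleNLP releases, but that pairing is an assumption — confirm it against the PaddleNLP documentation for your version.

```python
from paddlenlp import Taskflow

# Assumption: the "text_summarization" Taskflow loads unimo-text-1.0-summary by default.
summarizer = Taskflow("text_summarization")
print(summarizer("雪后初霁，沈阳故宫红墙金瓦与皑皑白雪相映，吸引了大批游客前来拍照打卡。"))
```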
PaddlePaddle/unimo-text-1.0-lcsts-new
PaddlePaddle
2023-01-06T10:29:44Z
0
0
paddlenlp
[ "paddlenlp", "paddlepaddle", "unimo", "summarization", "zh", "arxiv:2012.15409", "license:apache-2.0", "region:us" ]
summarization
2023-01-06T10:12:43Z
---
library_name: paddlenlp
license: apache-2.0
tags:
- summarization
language:
- zh
---

[![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP)

# PaddlePaddle/unimo-text-1.0-lcsts-new

## Introduction

Existing pre-training methods focus on either single-modal or multi-modal tasks, and cannot effectively adapt to the other. They can only utilize single-modal data (i.e. text or image) or limited multi-modal data (i.e. image-text pairs). In this work, we propose a unified-modal pre-training architecture, namely UNIMO, which can effectively adapt to both single-modal and multi-modal understanding and generation tasks. Large-scale free text corpora and image collections can be utilized to improve the capability of visual and textual understanding, and cross-modal contrastive learning (CMCL) is leveraged to align the textual and visual information into a unified semantic space over a corpus of image-text pairs. As non-paired single-modal data is very rich, our model can utilize much larger amounts of data to learn more generalizable representations. Moreover, the textual knowledge and visual knowledge can enhance each other in the unified semantic space. The experimental results show that UNIMO significantly improves the performance of several single-modal and multi-modal downstream tasks.

More detail: https://arxiv.org/abs/2012.15409

## Available Models

- **unimo-text-1.0**, *12 layer, 12 heads, 768 hidden size, pretrained model*
- **unimo-text-1.0-large**, *24 layer, 16 heads, 1024 hidden size, pretrained model*
- **unimo-text-1.0-lcsts-new**, *12 layer, 12 heads, 768 hidden size, finetuned on the lcsts-new Chinese summarization dataset*
- **unimo-text-1.0-summary**, *12 layer, 12 heads, 768 hidden size, finetuned on several in-house Chinese summarization datasets*

## How to Use?

Click on the *Use in paddlenlp* button on the top right!

## Citation Info

```text
@article{li2020unimo,
  title = {UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning},
  author = {Li, Wei and Gao, Can and Niu, Guocheng and Xiao, Xinyan and Liu, Hao and Liu, Jiachen and Wu, Hua and Wang, Haifeng},
  journal = {arXiv preprint arXiv:2012.15409},
  year = {2020},
}
```
PaddlePaddle/unimo-text-1.0
PaddlePaddle
2023-01-06T10:28:51Z
0
0
paddlenlp
[ "paddlenlp", "paddlepaddle", "unimo", "summarization", "zh", "arxiv:2012.15409", "license:apache-2.0", "region:us" ]
summarization
2023-01-06T10:10:35Z
---
library_name: paddlenlp
license: apache-2.0
tags:
- summarization
language:
- zh
---

[![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP)

# PaddlePaddle/unimo-text-1.0

## Introduction

Existing pre-training methods focus on either single-modal or multi-modal tasks, and cannot effectively adapt to the other. They can only utilize single-modal data (i.e. text or image) or limited multi-modal data (i.e. image-text pairs). In this work, we propose a unified-modal pre-training architecture, namely UNIMO, which can effectively adapt to both single-modal and multi-modal understanding and generation tasks. Large-scale free text corpora and image collections can be utilized to improve the capability of visual and textual understanding, and cross-modal contrastive learning (CMCL) is leveraged to align the textual and visual information into a unified semantic space over a corpus of image-text pairs. As non-paired single-modal data is very rich, our model can utilize much larger amounts of data to learn more generalizable representations. Moreover, the textual knowledge and visual knowledge can enhance each other in the unified semantic space. The experimental results show that UNIMO significantly improves the performance of several single-modal and multi-modal downstream tasks.

More detail: https://arxiv.org/abs/2012.15409

## Available Models

- **unimo-text-1.0**, *12 layer, 12 heads, 768 hidden size, pretrained model*
- **unimo-text-1.0-large**, *24 layer, 16 heads, 1024 hidden size, pretrained model*
- **unimo-text-1.0-lcsts-new**, *12 layer, 12 heads, 768 hidden size, finetuned on the lcsts-new Chinese summarization dataset*
- **unimo-text-1.0-summary**, *12 layer, 12 heads, 768 hidden size, finetuned on several in-house Chinese summarization datasets*

## How to Use?

Click on the *Use in paddlenlp* button on the top right!

## Citation Info

```text
@article{li2020unimo,
  title = {UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning},
  author = {Li, Wei and Gao, Can and Niu, Guocheng and Xiao, Xinyan and Liu, Hao and Liu, Jiachen and Wu, Hua and Wang, Haifeng},
  journal = {arXiv preprint arXiv:2012.15409},
  year = {2020},
}
```
zlgao/swin-tiny-patch4-window7-224-finetuned-fluro_cls
zlgao
2023-01-06T10:26:13Z
37
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-01-06T10:12:49Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-fluro_cls
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-fluro_cls

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.67  | 1    | 0.7112          | 0.5238   |
| No log        | 1.67  | 2    | 0.5591          | 0.8571   |
| 0.811         | 2.67  | 3    | 0.3781          | 0.9524   |
| 0.811         | 3.67  | 4    | 0.1995          | 1.0      |
| 0.811         | 4.67  | 5    | 0.1215          | 1.0      |
| 0.3531        | 5.67  | 6    | 0.0578          | 1.0      |
| 0.3531        | 6.67  | 7    | 0.0195          | 1.0      |
| 0.3531        | 7.67  | 8    | 0.0072          | 1.0      |
| 0.0618        | 8.67  | 9    | 0.0030          | 1.0      |
| 0.0618        | 9.67  | 10   | 0.0012          | 1.0      |
| 0.0618        | 10.67 | 11   | 0.0005          | 1.0      |
| 0.0079        | 11.67 | 12   | 0.0003          | 1.0      |
| 0.0079        | 12.67 | 13   | 0.0001          | 1.0      |
| 0.0079        | 13.67 | 14   | 0.0001          | 1.0      |
| 0.0051        | 14.67 | 15   | 0.0001          | 1.0      |
| 0.0051        | 15.67 | 16   | 0.0000          | 1.0      |
| 0.0051        | 16.67 | 17   | 0.0000          | 1.0      |
| 0.0017        | 17.67 | 18   | 0.0000          | 1.0      |
| 0.0017        | 18.67 | 19   | 0.0000          | 1.0      |
| 0.0017        | 19.67 | 20   | 0.0000          | 1.0      |
| 0.0004        | 20.67 | 21   | 0.0000          | 1.0      |
| 0.0004        | 21.67 | 22   | 0.0000          | 1.0      |
| 0.0004        | 22.67 | 23   | 0.0000          | 1.0      |
| 0.0022        | 23.67 | 24   | 0.0000          | 1.0      |
| 0.0022        | 24.67 | 25   | 0.0000          | 1.0      |
| 0.0022        | 25.67 | 26   | 0.0000          | 1.0      |
| 0.001         | 26.67 | 27   | 0.0000          | 1.0      |
| 0.001         | 27.67 | 28   | 0.0000          | 1.0      |
| 0.001         | 28.67 | 29   | 0.0000          | 1.0      |
| 0.0013        | 29.67 | 30   | 0.0000          | 1.0      |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.10.2+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
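A minimal image-classification sketch for this checkpoint (repo id from this row; the image path is a placeholder — note the card reports a perfect score on a tiny evaluation set, so treat the model's generalization with caution):

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="zlgao/swin-tiny-patch4-window7-224-finetuned-fluro_cls",
)
print(clf("example_fluorescence_image.png"))  # placeholder path to a local image
```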
dietercoppens/ppo-LunarLander-v2
dietercoppens
2023-01-06T10:26:01Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T10:25:34Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 248.43 +/- 24.85
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
yizhangliu/Reinforce-CartPole-v1
yizhangliu
2023-01-06T10:13:42Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T10:13:33Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Lucky1/q-FrozenLake-v1-4x4-noSlippery
Lucky1
2023-01-06T09:52:25Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-06T09:44:31Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="Lucky1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
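The card's snippet loads the pickled model but stops short of running it; a plausible continuation is sketched below. It assumes the pickle stores a dict with `"env_id"` and `"qtable"` keys (the course-notebook convention) and the pre-0.26 `gym` step API — both are assumptions to verify.

```python
import pickle
import gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Lucky1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumption: dict with "env_id" and "qtable" keys

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()  # older gym (<0.26) API: reset() returns the state alone
done, total_reward = False, 0.0
while not done:
    action = model["qtable"][state].argmax()  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```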
Buseak/model_7012023
Buseak
2023-01-06T09:22:33Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-01-06T07:27:15Z
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: model_7012023
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# model_7012023

This model is a fine-tuned version of [Buseak/my_pos_model](https://huggingface.co/Buseak/my_pos_model) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 244  | 0.3302          | 0.8829    | 0.8757 | 0.8793 | 0.9140   |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
slplab/wav2vec2-xlsr-korean-aihub-900
slplab
2023-01-06T09:13:21Z
106
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "ko", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-18T15:49:36Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-korean-aihub-900
  results: []
language:
- ko
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-korean-aihub-900

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2659
- Wer: 0.1089

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | PER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.0213        | 3.51  | 200  | 2.5580          | 0.9872 |
| 2.4456        | 7.02  | 400  | 2.4060          | 0.9872 |
| 2.2322        | 10.53 | 600  | 2.0604          | 0.9872 |
| 1.9241        | 14.04 | 800  | 1.1675          | 0.5440 |
| 0.8402        | 17.54 | 1000 | 0.4376          | 0.2035 |
| 0.4796        | 21.05 | 1200 | 0.3261          | 0.1540 |
| 0.339         | 24.56 | 1400 | 0.2918          | 0.1375 |
| 0.2688        | 28.07 | 1600 | 0.2872          | 0.1310 |
| 0.2338        | 31.58 | 1800 | 0.2732          | 0.1218 |
| 0.212         | 35.09 | 2000 | 0.2654          | 0.1166 |
| 0.1939        | 38.6  | 2200 | 0.2658          | 0.1087 |
| 0.1738        | 42.11 | 2400 | 0.2692          | 0.1094 |
| 0.1601        | 45.61 | 2600 | 0.2666          | 0.1107 |
| 0.1587        | 49.12 | 2800 | 0.2659          | 0.1089 |

### Framework versions

- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
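A minimal speech-recognition sketch for this checkpoint (repo id from this row; the wav path is a placeholder, and the input is assumed to be 16 kHz mono audio, as wav2vec 2.0 models expect):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="slplab/wav2vec2-xlsr-korean-aihub-900")
print(asr("sample_korean_utterance.wav"))  # placeholder path to a 16 kHz mono wav
```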
Musha-the-Yusha/Reinforce-Pixel_Copter
Musha-the-Yusha
2023-01-06T09:01:23Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-05T16:05:17Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixel_Copter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 58.70 +/- 46.67
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction